CN112883897A - Delivery robot for human-computer interaction and human-computer interaction method - Google Patents


Info

Publication number
CN112883897A
CN112883897A (application number CN202110264539.8A)
Authority
CN
China
Prior art keywords
obstacle
robot
information
pedestrian
distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110264539.8A
Other languages
Chinese (zh)
Inventor
宋增
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yogo Robot Co Ltd
Original Assignee
Shanghai Yogo Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yogo Robot Co Ltd filed Critical Shanghai Yogo Robot Co Ltd
Priority to CN202110264539.8A
Publication of CN112883897A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/08 Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Development Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a delivery robot for human-computer interaction and a human-computer interaction method. A laser sensor detects the distance to obstacles in front of the delivery robot; a front camera captures image information of those obstacles; a processor determines the obstacle type from the image information; and a control layer controls the robot's movement and interaction mode according to the delivery task information, the obstacle distance, and the obstacle type. By mounting a laser sensor and a front camera on the delivery robot, the invention identifies obstacles ahead of the robot; when a pedestrian is identified, the robot is controlled to walk around the pedestrian and to interact with them, improving the robot's intelligence and user experience.

Description

Delivery robot for human-computer interaction and human-computer interaction method
Technical Field
The invention relates to the field of robot delivery, and in particular to a delivery robot for human-computer interaction and a human-computer interaction method.
Background
Existing in-building delivery robots (for example, for express parcels or take-out food) can execute delivery tasks stably. However, in crowded scenes such as an office building at lunchtime or during rush hour, the robot is very likely to encounter pedestrians watching it, blocking it, or playing with it. In such situations the delivery robot needs to actively take action in time so that the delivery task can continue smoothly, or to communicate with the pedestrians to achieve an interactive effect.
Disclosure of Invention
The present invention provides a delivery robot for human-computer interaction and a human-computer interaction method that overcome, or at least partially solve, the above problems.
According to a first aspect of the invention, a delivery robot for human-computer interaction is provided. The delivery robot comprises a recognition layer and a control layer, wherein the recognition layer comprises a laser sensor mounted on the delivery robot, a front camera mounted at the front of the delivery robot, and a processor. The laser sensor detects the distance to obstacles in front of the delivery robot; the front camera captures image information of those obstacles; the processor determines the obstacle type from the obstacle image information and sends the obstacle distance information and the obstacle type to the control layer over WebSocket; and the control layer controls the robot's movement and interaction mode according to the delivery task information, the obstacle distance information, and the obstacle type.
The above technical solution can be further improved as follows.
Optionally, the processor determining the obstacle type according to the obstacle image information and sending the obstacle distance information and the obstacle type to the control layer comprises: judging whether the obstacle is an object or a pedestrian according to the obstacle image information captured by the front camera. Correspondingly, the control layer controlling the movement and interaction mode of the delivery robot according to the delivery task information, the obstacle distance information, and the obstacle type comprises: if the delivery robot is executing a delivery task and the obstacle is an object, controlling the robot to move around the object according to the obstacle distance information; if the obstacle is a pedestrian, controlling the robot to move around the pedestrian according to the obstacle distance information and greeting the pedestrian in a first greeting mode; and if the delivery robot is idle (not executing a delivery task), greeting the pedestrian in a second greeting mode.
Optionally, the delivery robot further includes a human-computer interaction interface layer, and the control layer is further configured to: receive the user's screen-operation information via the interface layer and judge from it whether the user has issued delivery task information, the delivery task information comprising a task order and a delivery destination; and plan a driving route according to the robot's current position and the delivery destination, so that the robot delivers along the planned route while executing the delivery task.
Optionally, the processor is further configured to: when the obstacle is judged to be a pedestrian, analyze the pedestrian's behavior and send the behavior information to the control layer. Correspondingly, the control layer is further configured to: if the delivery robot is executing a delivery task, control it to move around the pedestrian according to the obstacle distance information and play a voice broadcast corresponding to the pedestrian's behavior.
Optionally, greeting the pedestrian in the second greeting mode when the delivery robot is idle comprises: when the robot encounters a pedestrian, the control layer pops up an applet QR code through the interface layer so that the pedestrian can scan it and register as a service user of the robot.
Optionally, a face recognition module is further mounted on the delivery robot and is connected to the processor by bidirectional communication. The face recognition module is started when the processor judges the obstacle to be a pedestrian; it acquires the pedestrian's face recognition information and sends it to the processor. The processor matches the face recognition information against the face information in a user identity database to obtain the pedestrian's identity, which is either acquaintance or stranger, and sends the identified identity information to the control layer.
Optionally, greeting the pedestrian in the second greeting mode when the delivery robot is idle further comprises: playing different delivery announcement content to the pedestrian according to their identity, the content taking the form of audio, video, or text.
According to a second aspect of the invention, a human-computer interaction method based on a delivery robot is provided, comprising the following steps: detecting the distance to obstacles in front of the delivery robot and capturing image information of those obstacles; judging the obstacle type from the obstacle image information; and controlling the robot's movement and interaction mode according to the delivery task information, the obstacle distance information, and the obstacle type.
Optionally, judging the obstacle type according to the obstacle image information comprises: judging whether the obstacle is an object or a pedestrian according to the captured obstacle image information. Correspondingly, controlling the movement and interaction mode of the delivery robot according to the delivery task information, the obstacle distance information, and the obstacle type comprises: if the delivery robot is executing a delivery task and the obstacle is an object, controlling the robot to move around the object according to the obstacle distance information; if the obstacle is a pedestrian, controlling the robot to move around the pedestrian according to the obstacle distance information and greeting the pedestrian in a first greeting mode; and if the delivery robot is idle, greeting the pedestrian in a second greeting mode.
With the delivery robot and human-computer interaction method provided by the invention, a laser sensor and a front camera mounted on the robot identify obstacles ahead of it. When a pedestrian is identified, the robot is controlled to walk around the pedestrian and to interact with them, improving the robot's intelligence and user experience.
Drawings
Fig. 1 is a schematic structural diagram of the delivery robot for human-computer interaction provided by the invention;
fig. 2 is a schematic structural diagram of the recognition layer of the delivery robot;
fig. 3 is a flow chart of the human-computer interaction method based on the delivery robot provided by the invention;
fig. 4 is an overall flowchart of the human-computer interaction method based on the delivery robot provided by the invention.
Detailed Description
Embodiments of the invention are described in detail below with reference to the accompanying drawings and examples. The following examples illustrate the invention but do not limit its scope.
Fig. 1 is a schematic structural diagram of a delivery robot for human-computer interaction according to an embodiment of the invention. The delivery robot includes a recognition layer and a control layer; the recognition layer includes a laser sensor mounted on the robot, a front camera mounted at the front of the robot, and a processor.
The intelligent delivery robot mainly comprises a recognition layer and a control layer. The recognition layer combines hardware and software: the hardware comprises the robot's laser sensor and front camera and is responsible for acquiring data; the software takes the acquired data, identifies specific objects and pedestrians with an image algorithm, and, when pedestrian information is identified, pushes it to the control layer in real time over WebSocket. The control layer is mainly responsible for storing and executing tasks, controlling the robot's avoidance maneuvers, playing voice prompts, and the like.
The laser sensor is a sensor that measures using laser technology; it can detect an object and measure the distance to it. The front camera is the camera located directly in front of the robot and acquires image information in real time. From this image information the processor can identify walls, elevator shafts, pedestrians, and the like. WebSocket is a protocol for full-duplex communication over a single TCP connection; with WebSocket the server can actively push data to the client, which simplifies data exchange between client and server.
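The patent does not specify the message format the recognition layer pushes over WebSocket; a minimal sketch of such a message, with illustrative field names, might look like this:

```python
import json

def make_obstacle_message(obstacle_type, distance_m):
    """Serialize one recognition-layer result as a JSON text frame.

    The field names ("obstacle_type", "distance_m") are assumptions;
    the patent only states that the obstacle distance information and
    obstacle type are pushed to the control layer over WebSocket.
    """
    if obstacle_type not in ("object", "pedestrian"):
        raise ValueError("unknown obstacle type: " + str(obstacle_type))
    return json.dumps({"obstacle_type": obstacle_type,
                       "distance_m": distance_m})

# In a real robot this string would be sent over an open WebSocket
# connection, e.g. with the third-party `websockets` package:
#   await websocket.send(make_obstacle_message("pedestrian", 1.2))
```

Because WebSocket frames are pushed by the server side, the control layer can react to each obstacle as soon as it is recognized rather than polling.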
Specifically, the laser sensor detects the distance to obstacles in front of the delivery robot; the front camera captures image information of those obstacles; the processor determines the obstacle type from the obstacle image information and sends the obstacle distance information and the obstacle type to the control layer over WebSocket; and the control layer controls the robot's movement and interaction mode according to the delivery task information, the obstacle distance information, and the obstacle type.
In a possible embodiment, the processor determining the obstacle type according to the obstacle image information and sending the obstacle distance information and the obstacle type to the control layer comprises: judging whether the obstacle is an object or a pedestrian according to the obstacle image information captured by the front camera. Accordingly, the control layer is configured to: if the delivery robot is executing a delivery task and the obstacle is an object, control the robot to move around the object according to the obstacle distance information; if the obstacle is a pedestrian, control the robot to move around the pedestrian according to the obstacle distance information and greet the pedestrian in a first greeting mode; and if the delivery robot is idle, greet the pedestrian in a second greeting mode.
It will be appreciated that, referring to fig. 2, the recognition layer mainly comprises the laser sensor, the front camera, and the processor. The laser sensor detects an obstacle in front of the delivery robot and determines its distance; the front camera captures image information of the obstacle in real time; and the processor judges from the captured image whether the obstacle is an object or a pedestrian.
After the recognition layer identifies the obstacle type, if the delivery robot is executing a delivery task it must bypass the obstacle. Specifically, if the obstacle is an object, the control layer moves the robot around the object according to the obstacle distance information. If the obstacle is a pedestrian, the control layer moves the robot around the pedestrian according to the obstacle distance information, and the robot greets and interacts with the pedestrian in the first greeting mode. If the robot is idle, it greets the pedestrian in the second greeting mode.
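The control-layer branching just described can be sketched as a small decision function. The mode names follow the text; the return encoding and the idle-object case (where the patent is silent) are assumptions:

```python
def plan_reaction(has_task, obstacle_type):
    """Return (movement, greeting) for one detected obstacle.

    A minimal sketch of the control-layer decision described above.
    The action strings are illustrative; the patent specifies only
    the behavior, not an API.
    """
    if obstacle_type == "object":
        # Objects are simply bypassed; no interaction is needed.
        # (The patent only discusses this case during a delivery task;
        # bypassing while idle is an assumption.)
        return ("move_around", None)
    if obstacle_type == "pedestrian":
        if has_task:
            # Busy: walk around the pedestrian and greet briefly.
            return ("move_around", "first_greeting_mode")
        # Idle: stay and engage, e.g. pop up the applet QR code.
        return ("stop", "second_greeting_mode")
    raise ValueError("unknown obstacle type: " + str(obstacle_type))
```

Keeping this decision in one pure function makes the two greeting modes easy to extend without touching the motion control code.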
In a possible implementation, the delivery robot further includes a human-computer interaction interface layer, and the control layer is further configured to: receive the user's screen-operation information via the interface layer and judge from it whether the user has issued delivery task information, the delivery task information comprising a task order and a delivery destination; and plan a driving route according to the robot's current position and the delivery destination, so that the robot delivers along the planned route while executing the delivery task.
It can be understood that the delivery robot further comprises a human-computer interaction interface layer through which a user can input task information. The control layer receives the information the user enters on the screen of the interface layer, judges whether the user has issued delivery task information, and if so obtains the task order and the delivery destination from it.
A delivery route is then planned from the robot's current position to the delivery destination, and the robot delivers along that route. If the robot meets an obstacle while delivering along the route, it bypasses the obstacle; and when the obstacle is a pedestrian, it greets the pedestrian in the corresponding mode.
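The patent does not specify a route-planning algorithm. As an illustration only, a breadth-first search over a grid map of the building is one minimal way to plan a route from the current position to the destination:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search over a 4-connected occupancy grid.

    `grid` is a list of equal-length strings where '#' marks an
    occupied cell. Returns a shortest list of (row, col) cells from
    start to goal, or None if the destination is unreachable. This is
    a sketch; real delivery robots typically use richer planners.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent links back to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # destination unreachable
```

When an obstacle appears on the planned route at run time, the control layer can locally detour around it and then rejoin the route, or replan from the robot's new position.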
In a possible implementation, the processor is further configured to: when the obstacle is judged to be a pedestrian, analyze the pedestrian's behavior and send the behavior information to the control layer. Correspondingly, the control layer is further configured to: if the delivery robot is executing a delivery task, control it to move around the pedestrian according to the obstacle distance information and play a voice broadcast corresponding to the pedestrian's behavior.
It can be understood that when the processor judges the obstacle to be a pedestrian, it analyzes the pedestrian's behavior, and the robot plays a different voice broadcast depending on the analyzed behavior. For example, when the delivery robot is executing a delivery task and a pedestrian blocks its way, it plays a voice such as "please make way" and moves around the pedestrian; when a pedestrian plays with it, it plays a voice such as "I can't play with you right now" and moves around the pedestrian. By playing different voices and greeting pedestrians according to their behavior, the delivery robot becomes more engaging and more human-friendly.
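This behavior-to-voice mapping can be kept as a simple lookup table. The behavior labels and exact phrases below paraphrase the examples in the description and are otherwise assumptions:

```python
# Illustrative mapping from analyzed pedestrian behavior to the
# voice line the robot broadcasts; labels and wording are assumed.
BEHAVIOR_VOICE = {
    "blocking": "Please make way.",
    "playing": "I can't play with you right now.",
    "watching": "Hello, I'm on a delivery run.",
}

def voice_for_behavior(behavior):
    """Pick the broadcast line for an analyzed pedestrian behavior,
    falling back to a generic line for behaviors not in the table."""
    return BEHAVIOR_VOICE.get(behavior, "Excuse me, coming through.")
```

A table like this keeps the interaction policy in data rather than code, so new behaviors and phrases can be added without changing the control logic.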
In a possible embodiment, greeting the pedestrian in the second greeting mode when the delivery robot is idle comprises: when the robot encounters a pedestrian, the control layer pops up an applet QR code through the interactive interface layer so that the pedestrian can scan it and register as a service user of the robot.
It can be understood that when the delivery robot is idle and a pedestrian blocks or surrounds it, the robot can play a voice such as "scan the code and I'll help you go upstairs" and notify the human-computer interaction interface layer to pop up the applet QR code; by scanning it, the pedestrian can register as a service user of the robot.
In a possible embodiment, a face recognition module is further mounted on the delivery robot and is connected to the processor by bidirectional communication. The face recognition module is started when the processor judges the obstacle to be a pedestrian; it acquires the pedestrian's face recognition information and sends it to the processor. The processor matches the face recognition information against the face information in a user identity database to obtain the pedestrian's identity, which is either acquaintance or stranger, and sends the identified identity information to the control layer.
It can be understood that a face recognition module is also installed on the delivery robot. When the processor judges the obstacle to be a pedestrian, the face recognition module is started to recognize the pedestrian's face; it acquires the face recognition information and sends it to the processor.
The processor matches the face recognition information against the face information in the user identity database to obtain the pedestrian's identity, which is either acquaintance or stranger, and sends the identified identity to the control layer.
According to the pedestrian's identity, the control layer plays different delivery announcement content, in audio, video, or text form. For example, if the pedestrian is an acquaintance, the delivery service need not be announced; the robot only plays the "scan the code and I'll help you go upstairs" voice and notifies the interface layer to pop up the applet QR code for the pedestrian to scan. If the pedestrian is a stranger, the robot announces its delivery service, in audio, video, or text form.
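The acquaintance-or-stranger decision is a nearest-neighbor match against the identity database. The sketch below uses toy 3-D embeddings and a Euclidean-distance threshold, all of which are assumptions; real face recognition uses high-dimensional embeddings from a trained model:

```python
import math

# Toy identity database mapping registered users to face embeddings.
# The 3-D vectors and the 0.6 threshold are illustrative only.
IDENTITY_DB = {
    "alice": (0.1, 0.9, 0.2),
    "bob": (0.8, 0.1, 0.5),
}

def classify_identity(embedding, threshold=0.6):
    """Match a face embedding against the identity database.

    Returns ("acquaintance", name) when the nearest stored embedding
    is within `threshold` Euclidean distance, else ("stranger", None).
    """
    best_name, best_dist = None, float("inf")
    for name, known in IDENTITY_DB.items():
        dist = math.dist(embedding, known)
        if dist < best_dist:
            best_name, best_dist = name, dist
    if best_dist <= threshold:
        return ("acquaintance", best_name)
    return ("stranger", None)
```

The control layer can then key its announcement content (audio, video, or text) off the returned label, as the description states.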
Fig. 3 shows the human-computer interaction method based on a delivery robot provided by the invention. As shown in fig. 3, the method comprises: 301, detecting the distance to obstacles in front of the delivery robot and capturing image information of those obstacles; 302, judging the obstacle type from the obstacle image information; 303, controlling the robot's movement and interaction mode according to the delivery task information, the obstacle distance information, and the obstacle type.
Wherein judging the obstacle type according to the obstacle image information comprises: judging whether the obstacle is an object or a pedestrian according to the captured obstacle image information. Correspondingly, controlling the movement and interaction mode of the delivery robot according to the delivery task information, the obstacle distance information, and the obstacle type comprises: if the delivery robot is executing a delivery task and the obstacle is an object, controlling the robot to move around the object according to the obstacle distance information; if the obstacle is a pedestrian, controlling the robot to move around the pedestrian according to the obstacle distance information and greeting the pedestrian in a first greeting mode; and if the delivery robot is idle, greeting the pedestrian in a second greeting mode.
It can be understood that the human-computer interaction method provided by the invention corresponds to the delivery robot for human-computer interaction described in the foregoing embodiments; for the method's technical features, refer to the corresponding features of the delivery robot, which are not repeated here.
Referring to fig. 4, the overall method by which the delivery robot realizes human-computer interaction is as follows. The recognition layer identifies the obstacle as a pedestrian, identifies its behavior, and sends the obstacle type to the control layer. The control layer judges, from the information the user entered through the interface layer, whether a delivery task is being executed. If a delivery task is being executed, the robot plays a voice to remind the pedestrian and performs an avoidance maneuver; if not, it plays an interactive voice to attract the pedestrian and notifies the interface layer to pop up the applet QR code so the pedestrian can scan it and register as a service user of the robot.
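One pass of this overall flow can be sketched as a function that returns the actions the robot takes; the action strings and the behavior handling are assumptions layered on the flow described above:

```python
def interact(obstacle_type, behavior, has_task):
    """One pass of the fig. 4 interaction flow.

    Returns a list of illustrative action strings. Non-pedestrian
    obstacles are simply bypassed; for pedestrians the response
    depends on whether a delivery task is being executed.
    """
    if obstacle_type != "pedestrian":
        return ["bypass_obstacle"]
    if has_task:
        # Busy: remind the pedestrian, then avoid them.
        actions = ["play_voice:please make way", "avoid_pedestrian"]
        if behavior == "playing":
            actions[0] = "play_voice:I can't play with you right now"
        return actions
    # Idle: attract the pedestrian and offer registration.
    return ["play_voice:scan the code to register",
            "show_applet_qr_code"]
```

Structuring the flow as data in, actions out makes each branch of the fig. 4 diagram directly testable in isolation from the robot hardware.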
With the delivery robot and human-computer interaction method provided by the invention, a laser sensor and a front camera mounted on the robot identify obstacles ahead of it. When a pedestrian is identified, the robot is controlled to walk around the pedestrian and to interact with them, improving the robot's intelligence and user experience.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. A delivery robot for human-computer interaction, characterized by comprising a recognition layer and a control layer, wherein the recognition layer comprises a laser sensor mounted on the delivery robot, a front camera mounted at the front of the delivery robot, and a processor;
the laser sensor is used for detecting the distance to obstacles in front of the delivery robot;
the front camera is used for capturing image information of obstacles in front of the delivery robot;
the processor is used for judging the obstacle type according to the obstacle image information and sending the obstacle distance information and the obstacle type to the control layer over WebSocket;
and the control layer is used for controlling the movement and interaction mode of the delivery robot according to the delivery task information, the obstacle distance information, and the obstacle type.
2. The delivery robot according to claim 1, wherein the processor being used for judging the obstacle type according to the obstacle image information and sending the obstacle distance information and the obstacle type to the control layer comprises:
judging whether the obstacle is an object or a pedestrian according to the obstacle image information captured by the front camera;
correspondingly, the control layer being used for controlling the movement and interaction mode of the delivery robot according to the delivery task information, the obstacle distance information, and the obstacle type comprises:
if the delivery robot is executing a delivery task: if the obstacle is an object, controlling the delivery robot to move around the object according to the obstacle distance information; if the obstacle is a pedestrian, controlling the delivery robot to move around the pedestrian according to the obstacle distance information and greeting the pedestrian in a first greeting mode;
and if the delivery robot is idle, greeting the pedestrian in a second greeting mode.
3. The delivery robot of claim 2, further comprising a human-machine interaction interface layer, the control layer being further configured to:
receive user screen-operation information through the human-machine interaction interface layer and determine from it whether the user has issued delivery task information, the delivery task information comprising a task order and a task delivery destination;
and plan a travel route according to the delivery robot's current position and the task delivery destination, so that the delivery robot performs the delivery along the planned route during task execution.
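Claim 3 does not name a planning algorithm; a common minimal realization is a shortest-path search over an occupancy grid from the robot's current cell to the destination cell. The breadth-first search and grid encoding below are illustrative assumptions:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Shortest path on an occupancy grid (0 = free, 1 = blocked).

    Returns the list of (row, col) cells from start to goal, or None
    if the destination is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            # Walk the predecessor chain back to the start.
            path, cur = [], goal
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None
```

A production planner would typically use A* with a distance heuristic and replan when the WebSocket obstacle reports arrive, but the BFS shape is the same.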
4. The delivery robot of claim 2, wherein the processor is further configured to:
analyze the pedestrian's behavior information when the obstacle type is determined to be a pedestrian, and send the behavior information to the control layer;
and correspondingly, the control layer is further configured to:
control the delivery robot, during task delivery, to move around the pedestrian according to the obstacle distance information and to play a voice broadcast corresponding to the pedestrian's behavior information.
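Claim 4 maps pedestrian behavior to a voice announcement. The behavior labels and phrases below are illustrative assumptions — the patent does not enumerate the behaviors or the broadcast text:

```python
# Hypothetical behavior-to-announcement table for the control layer.
BROADCASTS = {
    "blocking_path": "Excuse me, I am making a delivery; please let me pass.",
    "approaching": "Hello! Please keep a little distance while I pass.",
}

def announce(behavior):
    """Return the voice broadcast for a reported pedestrian behavior."""
    return BROADCASTS.get(behavior, "Hello!")
```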
5. The delivery robot of claim 2, wherein greeting the pedestrian in the second greeting manner when the delivery robot is in the no-task state comprises:
when the delivery robot encounters a pedestrian, the control layer displaying an applet QR code through the interaction interface layer, so that the pedestrian can scan it to register as a service user of the delivery robot.
6. The delivery robot of claim 2 or 5, further comprising a face recognition module mounted on the delivery robot and connected to the processor for two-way communication;
the face recognition module being configured to start when the processor determines that the obstacle type is a pedestrian, acquire the pedestrian's face recognition information, and send it to the processor;
and the processor being configured to match the face recognition information against the face information in a user identity database to obtain the pedestrian's identity information, the identity information indicating either a familiar person or a stranger, and to send the identified identity information to the control layer.
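The matching step of claim 6 is typically realized by comparing a face embedding against enrolled users' embeddings under a distance threshold. A minimal sketch — the plain-vector embeddings, the enrolled user, and the threshold value are illustrative assumptions:

```python
import math

# Hypothetical enrolled-user database: name -> reference face embedding.
USER_DB = {"alice": [0.1, 0.9, 0.3]}

def identify(embedding, threshold=0.5):
    """Match an embedding against the user database (claim 6 sketch).

    Returns ("familiar", name) on a match within the threshold,
    otherwise ("stranger", None).
    """
    for name, ref in USER_DB.items():
        if math.dist(embedding, ref) < threshold:
            return ("familiar", name)
    return ("stranger", None)
```

In a real system the embeddings would come from a face-recognition model, and the threshold would be calibrated against that model's score distribution.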
7. The delivery robot of claim 6, wherein the control layer being configured to greet the pedestrian in the second greeting manner when the delivery robot is in the no-task state further comprises:
playing different delivery announcement content for the pedestrian according to the pedestrian's identity information, the announcement content being in audio, video, or text form.
8. A human-machine interaction method based on a delivery robot, comprising the steps of:
detecting obstacle distance information in front of the delivery robot, and capturing obstacle image information in front of the delivery robot;
determining the obstacle type from the obstacle image information;
and controlling the movement and interaction mode of the delivery robot according to the task delivery information, the obstacle distance information, and the obstacle type.
9. The human-machine interaction method of claim 8, wherein determining the obstacle type from the obstacle image information comprises:
determining whether the obstacle type is an object or a pedestrian from the captured obstacle image information;
and correspondingly, controlling the movement and interaction mode of the delivery robot according to the task delivery information, the obstacle distance information, and the obstacle type comprises:
during task delivery, if the obstacle type is an object, controlling the delivery robot to move around the object according to the obstacle distance information; if the obstacle type is a pedestrian, controlling the delivery robot to move around the pedestrian according to the obstacle distance information and greeting the pedestrian in a first greeting manner;
and if the delivery robot is in a no-task state, greeting the pedestrian in a second greeting manner.
CN202110264539.8A 2021-03-11 2021-03-11 Distribution robot for man-machine interaction and man-machine interaction method Pending CN112883897A (en)

Publications (1)

Publication Number Publication Date
CN112883897A (en) 2021-06-01


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180120852A1 (en) * 2016-09-20 2018-05-03 Shenzhen Silver Star Intelligent Technology Co., Ltd. Mobile robot and navigating method for mobile robot
CN208514497U (en) * 2018-06-07 2019-02-19 广东数相智能科技有限公司 One kind can avoidance make an inventory robot



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination