CN117461043A - Autonomous robot system and control method for autonomous robot


Info

Publication number
CN117461043A
Authority
CN
China
Prior art keywords
user
job
robot
information
autonomous robot
Legal status
Pending
Application number
CN202280041033.3A
Other languages
Chinese (zh)
Inventor
久乡纪之
平泽园子
Current Assignee
Panasonic Intellectual Property Management Co Ltd
Original Assignee
Panasonic Intellectual Property Management Co Ltd
Application filed by Panasonic Intellectual Property Management Co Ltd filed Critical Panasonic Intellectual Property Management Co Ltd
Publication of CN117461043A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/08Logistics, e.g. warehousing, loading or distribution; Inventory or stock management


Abstract

A system that causes an autonomous robot to execute jobs requested by users can be constructed inexpensively, without newly providing a dedicated system by which users make requests to the autonomous robot in addition to the system that controls the autonomous robot. The system includes: a work robot (1) that travels within a facility; a robot control server (2) that controls the operation of the work robot; and a seat management server (7) that manages the seating status of users working in an office in the facility. The robot control server acquires, via an SNS system, input information of a job request entered by a user who has joined the SNS, acquires information on the request content based on that input information, acquires the position information of the user who is the destination from the seat management server, and controls the operation of the work robot based on the information on the request content and the position information of the user.

Description

Autonomous robot system and control method for autonomous robot
Technical Field
The present disclosure relates to an autonomous robot system and a control method of an autonomous robot that cause the autonomous robot to execute a requested job in accordance with a request for the job from a user.
Background
In facilities, there are many occasions on which an object such as a document is handed from one person to another. In such cases, if a work robot delivers the object to the other person in place of the person, the burden on people can be reduced.
As a technique for having a work robot perform such delivery of goods between persons, the following has been known (see Patent Document 1): a person who is the consignor of a goods delivery hands the goods to a work robot and requests the delivery, whereby the goods reach the person who is the delivery destination. Specifically, in this technique, the consignor hands the goods to the work robot assigned to the consignor, which serves as the delivery start point; that robot moves to the place where the work robot assigned to the recipient stands by and transfers the goods to it; the recipient's robot then moves to the recipient's location and hands over the goods.
Prior Art Documents
Patent Documents
Patent Document 1: Japanese Patent Laid-Open Publication No. 2003-340762
Disclosure of Invention
Problems to be solved by the invention
In the conventional technique, when the consignor hands the goods to the work robot serving as the delivery start point, the consignor must operate an input device mounted on the robot to request the delivery. Therefore, the consignor sometimes cannot simply request a goods delivery while remaining at his or her own seat. Moreover, when the work robot serving as the delivery start point is not near the consignor, the consignor must walk to the place where that robot stands by and hand over the goods there. The conventional technique is thus inconvenient for the consignor, and a system that eliminates this inconvenience is desired.
However, to eliminate this inconvenience, a new system by which users request goods delivery from the work robot would have to be provided in addition to the system that controls the work robot. This raises the problem that a system for causing the work robot to execute jobs requested by users cannot be constructed inexpensively.
Accordingly, a main object of the present disclosure is to provide an autonomous robot system and a control method of an autonomous robot with which a system for causing the autonomous robot to execute jobs requested by users can be constructed inexpensively, without newly providing a dedicated system by which users request jobs from the autonomous robot in addition to the system that controls the autonomous robot.
Solution to Problem
The autonomous robot system of the present disclosure has the following configuration: an autonomous robot system for causing an autonomous robot to execute a requested job in response to a request for the job from a user, the autonomous robot system comprising: a stay management server that manages the locations of users staying in a facility; the autonomous robot, which travels within the facility; and a control unit that controls the operation of the autonomous robot, wherein the control unit: acquires input information of a job request entered by a user who has joined an SNS (social networking service) via the SNS system; acquires requested-job information on the content of the job requested by the user based on the input information of the job request; acquires the location information of the user who is the destination from the stay management server; and controls the operation of the autonomous robot based on the requested-job information and the location information of the user.
The control method of the autonomous robot of the present disclosure has the following configuration: in a control method of an autonomous robot that executes a requested job in response to a request for the job from a user, a control unit that controls the operation of the autonomous robot traveling within a facility: acquires input information of a job request entered by a user who has joined an SNS via the SNS system; acquires requested-job information on the content of the job requested by the user based on the input information of the job request; acquires the location information of the user who is the destination from a stay management server that manages the locations of users staying within the facility; and controls the operation of the autonomous robot based on the requested-job information and the location information of the user.
Advantageous Effects of Invention
According to the present disclosure, a system for causing an autonomous robot to execute jobs requested by users can be constructed inexpensively, without newly providing a dedicated system by which users request jobs from the autonomous robot in addition to the system that controls the autonomous robot.
Drawings
Fig. 1 is an overall configuration diagram of an autonomous robot system according to the present embodiment.
Fig. 2 is an explanatory diagram showing an outline of the operation of the work robot in the case where a job of goods delivery is requested of the work robot.
Fig. 3 is an explanatory diagram showing a schematic configuration of the work robot.
Fig. 4 is a block diagram showing an outline configuration of a work robot, a robot control server, a cloud server, a seat management server, and a face matching server.
Fig. 5 is an explanatory diagram showing a screen of a chat service displayed on a user terminal.
Fig. 6 is an explanatory diagram showing the content of the operation instruction information for the work robot in the case of goods delivery.
Fig. 7 is a flowchart showing a procedure of processing performed by the robot control server.
Fig. 8 is an explanatory diagram showing an outline of the operation of the work robot in the case where a job of order delivery is requested of the work robot.
Fig. 9 is an explanatory diagram showing the content of the operation instruction information for the work robot in the case of order delivery.
Fig. 10 is an explanatory diagram showing an outline of the operation of the work robot in the case where a job of destination guidance is requested of the work robot.
Fig. 11 is an explanatory diagram showing the content of the operation instruction information for the work robot in the case of destination guidance.
Fig. 12 is an explanatory diagram showing how a job is requested using the user terminal and how the work robot performs destination guidance.
Detailed Description
A first invention made to solve the above problems has the following configuration: an autonomous robot system for causing an autonomous robot to execute a requested job in response to a request for the job from a user, the autonomous robot system comprising: a stay management server that manages the locations of users staying in a facility; the autonomous robot, which travels within the facility; and a control unit that controls the operation of the autonomous robot, wherein the control unit: acquires input information of a job request entered by a user who has joined an SNS via the SNS system; acquires requested-job information on the content of the job requested by the user based on the input information of the job request; acquires the location information of the user who is the destination from the stay management server; and controls the operation of the autonomous robot based on the requested-job information and the location information of the user.
Accordingly, it is not necessary to newly provide a dedicated system by which users request jobs from the autonomous robot in addition to the system that controls the autonomous robot, and a system for causing the autonomous robot to execute jobs requested by users can be constructed inexpensively.
The control unit may be configured as a control device (robot control server) that is separate from the autonomous robot and capable of communicating with it.
The second invention has the following configuration: the system further includes a face matching server that performs face matching as identity verification of a user, and the autonomous robot includes a camera that photographs the face of the user, wherein the control unit supplies the face matching server with a face image captured by the camera and identification information of the user obtained from the input information of the job request, causes the face matching server to perform face matching, and obtains the face matching result from the face matching server.
Accordingly, face matching is performed as identity verification of the user. Therefore, in the case of goods delivery, for example, erroneous delivery, that is, receiving goods from the wrong user or handing goods to the wrong user, can be suppressed. In the case of order delivery, handing the ordered item to the wrong user can be suppressed.
The third invention has the following configuration: when the job requested by the user is delivery of goods to another user who is the delivery destination, the control unit acquires the position information of the user who is the consignor and the position information of the user who is the delivery destination, and instructs the autonomous robot to move based on that position information.
Accordingly, the job of delivering goods between the two users can be performed reliably.
The fourth invention has the following configuration: when the job requested by the user is order delivery, the control unit acquires the position information of the user who is the requester and the position information of the place where the ordered item is picked up, and instructs the autonomous robot to move based on that position information.
Accordingly, the ordered item can be delivered reliably.
The fifth invention has the following configuration: when the job requested by the user is destination guidance, the control unit acquires the position information of the place to which the user is to be guided, and instructs the autonomous robot to move based on that position information.
Accordingly, the job of destination guidance can be performed reliably.
In addition, the sixth invention has the following configuration: when the job requested by the user is completed normally, the control unit notifies the user who is the requester of a job completion report via the SNS system.
Accordingly, the user who is the requester can easily confirm that the requested job has been completed normally.
The seventh invention has the following configuration: in a control method of an autonomous robot that executes a requested job in response to a request for the job from a user, a control unit that controls the operation of the autonomous robot traveling within a facility: acquires input information of a job request entered by a user who has joined an SNS via the SNS system; acquires requested-job information on the content of the job requested by the user based on the input information of the job request; acquires the location information of the user who is the destination from a stay management server that manages the locations of users staying within the facility; and controls the operation of the autonomous robot based on the requested-job information and the location information of the user.
Accordingly, as in the first invention, it is not necessary to newly provide a dedicated system by which users request jobs from the autonomous robot in addition to the system that controls the autonomous robot, and a system for causing the autonomous robot to execute jobs requested by users can be constructed inexpensively.
Embodiments of the present disclosure are described below with reference to the accompanying drawings.
Fig. 1 is an overall configuration diagram of an autonomous robot system according to the present embodiment.
The autonomous robot system causes the work robot 1 to execute a requested job in response to a request for the job from a user. The system includes a work robot 1 (autonomous robot), a robot control server 2 (control unit), a user terminal 3, a smart speaker 4 (voice input terminal), a cloud server 5, a camera 6, a seat management server 7 (stay management server), and a face matching server 8. These components are connected to one another via a network.
The work robot 1 is configured to be capable of autonomous travel. The work robot 1 executes the job requested by the user within the facility in accordance with instructions from the robot control server 2.
Further, the work robot 1 includes a camera 11. The camera 11 photographs persons staying in the facility, in particular persons working in the office. The images captured by the camera 11 are used by the face matching server 8; the camera 11 and the face matching server 8 together constitute a face matching system.
The robot control server 2 controls the operation of the work robot 1. It receives job requests made by users from the user terminal 3 and the smart speaker 4, and instructs the work robot 1 to perform the operations related to the requested job.
The user terminal 3 is operated by a person staying in the facility, in particular a person working in the office (a user). The user terminal 3 is a PC, a tablet terminal, or the like. Using the user terminal 3, a user can perform operations to request a job from the work robot 1. Job requests to the work robot 1 can be made through the chat service provided by the cloud server 5.
The smart speaker 4 outputs sound distributed from the cloud server 5 or the like. The smart speaker 4 also receives a user's speech and converts it into text information by voice recognition, thus functioning as a voice input device. With the smart speaker 4, job requests to the work robot 1 can be made through the voice input/output service provided by the cloud server 5.
The cloud server 5 provides the SNS. Users can join the SNS using the user terminal 3 or the smart speaker 4, on which an SNS application is installed. The cloud server 5 constitutes the SNS system together with the user terminals 3 and the smart speakers 4. Specifically, the cloud server 5 controls the chat service used from the user terminal 3, the voice input/output service used from the smart speaker 4, and the Web service provided to the user terminal 3.
The camera 6 is installed at an appropriate position (e.g., the ceiling) within the facility and photographs persons staying in the facility. The images captured by the camera 6 are used by the seat management server 7; the camera 6 constitutes a seat management system together with the seat management server 7. The work robot 1 may also photograph persons staying in the facility, and face matching results based on those images may be used by the seat management server 7.
The seat management server 7 manages the seating status of persons in the office based on the images captured by the camera 6. The seat management server 7 also functions as a distribution server and distributes seating information on the seating status of persons in the office to the user terminals 3. A user can thus confirm where another user sits and whether that user is currently seated.
The face matching server 8 performs face matching for identifying a person in the office based on the images captured by the camera 11 of the work robot 1.
In the present embodiment, the robot control server 2 is provided as the control unit for controlling the work robot 1, but the work robot 1 itself may instead be provided with a control unit that controls the robot, that is, with the functions of the robot control server 2.
In the present embodiment, the seating status of persons in the office is managed by the seat management server 7, but the present invention is not limited to this; it may also be applied to a stay management server that manages the staying status of persons in the facility, in particular the locations of users staying in the facility.
Next, a case where a job of goods delivery is requested of the work robot 1 will be described. Fig. 2 is an explanatory diagram showing an outline of the operation of the work robot 1 in this case.
First, the user requests the robot control server 2 to perform the goods delivery job. The job request to the robot control server 2 is made via the SNS, specifically using the user terminal 3 and the chat service provided by the cloud server 5.
On the chat service screen of the user terminal 3, the user inputs a chat message requesting the goods delivery job and describing its content. The chat message is provided to the robot control server 2 via the cloud server 5.
When the robot control server 2 receives the goods delivery request from the user based on the chat message, it instructs the work robot 1 to perform the operations necessary for the delivery. The work robot 1 thereupon starts the goods delivery job.
The work robot 1 first moves from its standby place to the position of the user who is the consignor and receives the goods from the consignor. Next, the work robot 1 carries the goods to the position of the user who is the delivery destination and hands the goods to that user. The requested goods delivery job is thereby completed, and the work robot 1 returns to its standby place.
When the requested goods delivery job is completed, a report to the effect that the delivery is complete is sent from the robot control server 2 to the user terminal 3 via the SNS. This job completion report is also made using the chat service.
If the user who is the delivery destination is absent, the work robot 1 returns to the position of the consignor and gives the goods back. To avoid having to return the goods to the consignor, each user can set up drop-off delivery (Japanese "oki-hai") so that goods can be received even in the user's absence. For example, a designated place such as the desk at which the user works in the facility or a shared receiving box can be registered in advance in association with the user's SNS account or the like. When a drop-off delivery is made, it is preferable to record an image of the goods at the moment they are placed, using the camera 11 mounted on the work robot 1.
When the robot control server 2 receives a job request from a user, it registers the request in a requested-job list and processes requests in the order received, based on that list.
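The patent does not specify a data structure for the requested-job list; the following is a minimal sketch, assuming a simple first-in-first-out queue (all names are illustrative):

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class JobRequest:
    requester: str   # identification of the requesting user (e.g. user ID)
    job_type: str    # "goods_delivery", "order_delivery", or "destination_guidance"
    details: dict = field(default_factory=dict)  # e.g. {"destination_user": "Sato"}

class RequestedJobList:
    """FIFO list of requested jobs, processed in order of arrival."""
    def __init__(self):
        self._queue = deque()

    def register(self, request: JobRequest) -> None:
        self._queue.append(request)

    def next_job(self) -> JobRequest | None:
        return self._queue.popleft() if self._queue else None
```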
In the present embodiment, the user can input the chat message in natural language on the user terminal 3. The robot control server 2 analyzes the natural-language chat message (natural language processing) to acquire information on the content of the requested job (requested-job information); specifically, it acquires the job content (goods delivery) and the name of the user who is the delivery destination.
When the goods are received from the consignor, face matching is performed as identity verification of the consignor. Likewise, when the goods are handed to the user who is the delivery destination, face matching is performed as identity verification of that user.
At this time, the work robot 1 photographs the face of the subject person (the consignor or the delivery destination user). The robot control server 2 then obtains a face image of the subject person from the captured image and sends the face matching server 8 a face matching request containing that face image.
Here, the consignor and the delivery destination user work in a free-address (hot-desking) office. In a free-address office, users can freely choose where to sit, so each user's seating status changes constantly.
Accordingly, the seat management server 7 generates seating information indicating which seat each user occupies, based on the images captured by the camera 6, and manages the seating status of each user. By querying the seat management server 7 for the seating status of the consignor and the delivery destination user, the robot control server 2 can find out where they are.
Although a free-address office is described here as an example, the seating information managed by the seat management server 7 can also be used in an office with fixed seats. For example, when the seat management server 7 answers that the user who is the delivery destination is absent, the goods delivery request from the consignor may be rejected.
Next, a schematic configuration of the work robot 1 will be described. Fig. 3 is an explanatory diagram showing a schematic configuration of the work robot 1.
The work robot 1 includes a camera 11, a speaker 12, a traveling unit 13, a manipulator 14, and a controller 15.
The camera 11 photographs the face of the user according to the control of the controller 15. A face image for face matching is obtained from the image captured by the camera 11.
The speaker 12 outputs various sounds of guidance, notification, and the like in accordance with the control of the controller 15.
The traveling unit 13 includes wheels, motors, and the like. The traveling unit 13 performs autonomous traveling to move to a destination under the control of the controller 15.
The manipulator 14 holds an object (goods or the like) and, under the control of the controller 15, performs the operations of receiving the object from a user and handing it to a user.
The controller 15 controls each section of the work robot 1 based on an operation instruction from the robot control server 2.
In the present embodiment, the work robot 1 is provided with the manipulator 14, but the configuration is not limited to this; for example, the work robot 1 may instead be provided simply with a basket for holding goods.
Next, the outline configuration of the work robot 1, the robot control server 2, the cloud server 5, the seat management server 7, and the face matching server 8 will be described. Fig. 4 is a block diagram showing an outline configuration of the work robot 1, the robot control server 2, the cloud server 5, the seat management server 7, and the face matching server 8.
As described above, the work robot 1 includes the camera 11, the speaker 12, the traveling unit 13, the manipulator 14, and the controller 15. The controller 15 includes a communication unit 16, a storage unit 17, and a processor 18.
The communication unit 16 communicates with the robot control server 2.
The storage unit 17 stores a program or the like executed by the processor 18.
The processor 18 executes a program stored in the storage unit 17 to perform various processes. In the present embodiment, the processor 18 performs autonomous travel control processing and the like.
In the autonomous travel control process, the processor 18 determines a travel route that avoids obstacles, based on the images captured by the camera 11, the detection results of a distance sensor (not shown), and the like, and controls the traveling unit 13 so that the robot travels toward the destination instructed by the robot control server 2.
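The patent leaves the route planning method unspecified. Purely as an illustration, a route that avoids obstacles on an occupancy-grid map could be computed with A* search; the grid representation and all names below are assumptions, not the patent's method:

```python
import heapq
from itertools import count

def plan_route(grid, start, goal):
    """A* search on a 4-connected occupancy grid.
    grid[y][x] is True where an obstacle was detected; start and goal are (x, y)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = count()  # tie-breaker so the heap never has to compare nodes directly
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue                     # already expanded via a shorter route
        came_from[node] = parent
        if node == goal:
            path = []                    # walk parents back to the start
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), next(tie), g + 1, nxt, node))
    return None  # no obstacle-free route exists
```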
Further, the processor 18 controls the camera 11 to photograph the face of a user, controls the speaker 12 to output sounds such as guidance and notifications, and controls the manipulator 14 to receive objects from and hand them to users.
The robot control server 2 includes a communication unit 21, a storage unit 22, and a processor 23.
The communication unit 21 communicates with the work robot 1, the cloud server 5, the seat management server 7, and the face matching server 8.
The storage unit 22 stores a program or the like executed by the processor 23.
The processor 23 executes programs stored in the storage unit 22 to perform various processes. In the present embodiment, the processor 23 performs an operation control process, a message analysis process, a seating condition confirmation process, a face matching request process, and the like.
In the operation control process, the processor 23 controls the operation of the work robot 1. Specifically, the processor 23 generates operation instruction information, which includes the content of the operations to be executed by the work robot 1 and the information necessary for those operations, and transmits it to the work robot 1. For example, the work robot 1 is instructed to move to a destination, to capture images for face matching, and to receive and hand over objects (goods or the like). When instructing the work robot 1 to move to a destination, the processor provides the robot with the position information of the destination.
In the message analysis process, the processor 23 analyzes (natural language processing) the natural-language job request message (text information) input by the user who is the requester, to acquire information on the content of the requested job. A language model built by machine learning such as deep learning may be used for this analysis. For a job requesting goods delivery, the job content (goods delivery) and the name of the user who is the delivery destination are obtained by the message analysis.
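As a stand-in for the natural language processing described above (which the patent says may be a machine-learned language model), a minimal keyword-pattern sketch of the extraction step might look as follows; the patterns and field names are assumptions:

```python
import re

# Illustrative pattern only: the patent contemplates full natural language
# processing (possibly a machine-learned language model), not regexes.
DELIVERY_PATTERN = re.compile(
    r"deliver (?:this|the (?P<item>\w+)) to (?P<destination>[A-Z][a-z]+)")

def analyze_message(text: str) -> dict | None:
    """Extract requested-job information from a chat message; None if not understood."""
    m = DELIVERY_PATTERN.search(text)
    if m is None:
        return None
    return {"job_type": "goods_delivery",
            "destination_user": m.group("destination"),
            "item": m.group("item")}

# analyze_message("Robot, please deliver this to Sato")
# -> {'job_type': 'goods_delivery', 'destination_user': 'Sato', 'item': None}
```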
In the seating status confirmation process, the processor 23 queries the seat management server 7 for the seating status of the subject persons (the consignor and the delivery destination user). Specifically, it transmits a seating status query containing the identification information (name, user ID, etc.) of the subject person to the seat management server 7 and receives the answer returned by the server. When the subject person is present, the answer contains the person's location information.
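The query protocol between the robot control server and the seat management server is not specified in the patent; a minimal sketch, assuming a hypothetical JSON-over-HTTP endpoint and answer format:

```python
import json
import urllib.request

SEAT_SERVER_URL = "http://seat-management.example/status"  # hypothetical endpoint

def query_seat_status(user_id: str) -> dict | None:
    """Ask the seat management server where a user is seated.
    Returns e.g. {"present": True, "seat_id": "A-12"}, or None when absent."""
    req = urllib.request.Request(
        SEAT_SERVER_URL,
        data=json.dumps({"user_id": user_id}).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp)
    return answer if answer.get("present") else None
```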
In the face matching request process, the processor 23 requests the face matching server 8 to perform face matching as identity verification of the subject persons (the consignor and the delivery destination user). Specifically, it extracts the face image of the subject person from the captured image acquired from the work robot 1 and transmits to the face matching server 8 a face matching request containing the face image and the identification information (name, user ID, etc.) of the subject person. Based on the face matching response received from the face matching server 8, it then determines whether the face matching of the subject person succeeded.
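Likewise, the wire format of the face matching request and response is not given in the patent; the following message shapes and decision step are assumptions:

```python
from dataclasses import dataclass

# Assumed message shapes between the robot control server and the
# face matching server; the patent does not define this interface.

@dataclass
class FaceMatchRequest:
    face_image: bytes   # face region extracted from the robot camera frame
    user_id: str        # claimed identity taken from the job request

@dataclass
class FaceMatchResponse:
    matched: bool       # whether the face belongs to the claimed user
    score: float        # similarity score reported by the server

def identity_verified(response: FaceMatchResponse, threshold: float = 0.8) -> bool:
    """Interpret the face matching response; the job proceeds only on success."""
    return response.matched and response.score >= threshold
```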
The cloud server 5 includes a communication unit 51, a storage unit 52, and a processor 53.
The communication unit 51 communicates with the robot control server 2, the user terminals 3, and the smart speakers 4.
The storage unit 52 stores a program or the like executed by the processor 53.
The processor 53 executes programs stored in the storage unit 52 to perform various processes. In the present embodiment, the processor 53 performs chat service control processing, voice input/output service control processing, Web service control processing, and the like.
In the chat service control process, the processor 53 controls the chat service used from the user terminal 3. In the voice input/output service control process, the processor 53 controls the voice input/output service used from the smart speaker 4. In the Web service control process, the processor 53 controls the Web service provided to the user terminal 3.
The seat management server 7 includes a communication unit 71, a storage unit 72, and a processor 73.
The communication unit 71 communicates with the robot control server 2 and the camera 6.
The storage unit 72 stores a program or the like executed by the processor 73.
The processor 73 performs various processes by executing programs stored in the storage unit 72. In the present embodiment, the processor 73 performs a seat detection process or the like.
In the seating detection process, the processor 73 performs person detection on the images captured by the camera 6 to determine whether a person is present at each seat. The seated user may also be identified using the result of face matching based on the face image captured by the camera 11 of the work robot 1.
The face matching server 8 includes a communication unit 81, a storage unit 82, and a processor 83.
The communication unit 81 communicates with the robot control server 2.
The storage section 82 stores a program or the like executed by the processor 83.
The processor 83 performs various processes by executing programs stored in the storage 82. In the present embodiment, the processor 83 performs face matching processing or the like.
In the face matching process, in response to a face matching request from the robot control server 2, the processor 83 extracts facial feature information of the subject person from the face image acquired from the robot control server 2, matches it against the registered facial feature information of each person, and identifies the subject person.
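The patent does not name a face recognition algorithm. As one common realization, assuming an embedding model that maps a face image to a unit-length feature vector, identification could be a nearest-neighbor search over the enrolled vectors; a sketch under that assumption:

```python
import numpy as np

# Assumed enrollment database: user ID -> unit-length face feature vector.
registered: dict[str, np.ndarray] = {}

def match_face(features: np.ndarray, threshold: float = 0.6) -> str | None:
    """Return the registered user whose enrolled features are most similar
    to the extracted features, or None when no match clears the threshold."""
    best_id, best_score = None, threshold
    for user_id, enrolled in registered.items():
        score = float(np.dot(features, enrolled))  # cosine similarity (unit vectors)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id
```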
Next, an outline of a goods delivery job request using the chat service will be described. Fig. 5 is an explanatory diagram showing the chat service screen displayed on the user terminal 3.
The user displays the chat service screen of the SNS application on the user terminal 3 and, using an input device such as a keyboard, performs the operation of requesting a job from the work robot 1 on that screen.
First, as shown in Fig. 5(A), the user designates the work robot 1 as the chat partner in the posted message input field 101 on the chat service screen and enters, in natural language, a chat message stating the job request and its content.
At this time, in response to the user's operation on the chat service screen, the user terminal 3 transmits to the cloud server 5 the recipient information indicating that the chat partner is the work robot 1, the chat message (text information) stating the job request and its content, and the user information registered in advance in the user terminal 3 (the sender's user information).
When the cloud server 5 receives the recipient information, the chat message, and the sender's user information from the user terminal 3, it forwards the chat message and the sender's user information to the robot control server 2 based on the recipient information.
When the robot control server 2 receives the chat message and the sender's user information from the cloud server 5, it performs message analysis (natural language processing) on the natural-language chat message to acquire information on the content of the requested job. Here, for a job requesting goods delivery, the robot control server 2 acquires the job content (goods delivery) and the name of the user who is the delivery destination. It also acquires the name of the user who is the consignor from the sender's user information. The robot control server 2 can thereby accept the goods delivery job request.
When the robot control server 2 has normally accepted the goods delivery job request, it sends a chat message in reply to the chat message from the sending user. As a result, as shown in Fig. 5(B), the user terminal 3 displays, in the reply message display field 102 on the chat service screen, a chat message from the work robot 1 indicating that the request has been accepted.
When the goods delivery has been completed normally, as shown in Fig. 5(C), a message reporting the completion of the delivery is displayed, with the work robot 1 as the sender, in the reply message display field 103 on the chat service screen. The consignor can thus easily see that the requested goods delivery has been completed normally.
Next, the operation instruction information for the work robot 1 in the case of goods delivery will be described. Fig. 6 is an explanatory diagram showing the content of the operation instruction information.
The robot control server 2 transmits to the work robot 1 operation instruction information for causing it to execute the goods delivery job. The operation instruction information contains, as the operation content, information indicating each operation of the goods delivery job, together with detailed information about each operation. The work robot 1 executes each operation of the goods delivery job based on the operation instruction information received from the robot control server 2.
In the operation instruction information, the following operations are set in order as the operation content. First, an operation of moving to the point where the goods are received (the position of the consignor) as the destination; the position information (seat ID, etc.) of the consignor is attached to this movement operation as detailed information. Next, an operation of capturing a face image for face matching as identity verification of the consignor. Next, an operation of receiving the object (goods). Next, an operation of moving to the point where the goods are handed over (the position of the delivery destination user) as the destination; the position information (seat ID, etc.) of the delivery destination user is attached as detailed information. Next, an operation of capturing a face image for face matching as identity verification of the delivery destination user. Finally, an operation of handing over the object (goods).
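The patent describes the operation instruction information only at this level of detail; one plausible serialization, with action names and seat IDs that are purely illustrative, is:

```python
# Hypothetical serialization of the operation instruction information of Fig. 6;
# the field names and seat IDs are assumptions, not from the patent.
delivery_instruction = [
    {"action": "move",         "detail": {"seat_id": "A-12"}},  # consignor's seat
    {"action": "capture_face", "detail": {"user": "consignor"}},
    {"action": "receive",      "detail": {"object": "goods"}},
    {"action": "move",         "detail": {"seat_id": "C-07"}},  # delivery destination
    {"action": "capture_face", "detail": {"user": "destination"}},
    {"action": "deliver",      "detail": {"object": "goods"}},
]
```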
At this time, the robot control server 2 queries the seat management server 7 for the seating status, attaching the identification information of the consignor, and thereby acquires the consignor's position information (seat ID, etc.). It likewise queries the seat management server 7 with the identification information of the delivery destination user attached and acquires that user's position information (seat ID, etc.).
Further, the robot control server 2 instructs the work robot 1 to capture a face image for face matching as identity verification of the consignor, and when it obtains the image captured by the work robot 1, it sends the face matching server 8 a face matching request with the face image and the identification information (name, etc.) of the consignor attached. Similarly, it instructs the work robot 1 to capture a face image for face matching as identity verification of the delivery destination user, and when it obtains the captured image, it sends the face matching server 8 a face matching request with the face image and the identification information (name, etc.) of the delivery destination user attached.
Next, the procedure of the processing performed by the robot control server 2 will be described. Fig. 7 is a flowchart showing this procedure.
When the robot control server 2 receives a goods delivery request chat message from the user terminal 3 operated by the user who is the consignor of the delivery (Yes in ST101), it performs message analysis (natural language processing) on the chat message to acquire the requested job (goods delivery) and the identification information (name, user ID, etc.) of the user who is the delivery destination (ST102). It also acquires the identification information (name, user ID, etc.) of the consignor from the sender's user information attached to the chat message (ST103).
Next, the robot control server 2 queries the seat management server 7 for the seating status of the consignor and the delivery destination user (ST104).
At this time, a seating status query containing the identification information of the consignor and the delivery destination user is transmitted to the seat management server 7, and the answer returned by the seat management server 7 is received. When the consignor and the delivery destination user are present, the answer contains their location information.
Next, the robot control server 2 determines whether the positions of the consignor and the delivery destination user have been identified (ST105).
If the positions of the consignor and the delivery destination user cannot be identified (No in ST105), the process ends.
On the other hand, when the positions of the consignor and the delivery destination user have been identified (Yes in ST105), the robot control server 2 instructs the work robot 1 to move to the point where the goods are received (the position of the consignor) as the destination (ST106). The work robot 1 controls its own travel based on the destination position information acquired from the robot control server 2, map information held in the robot, and the like.
Next, the robot control server 2 instructs the work robot 1 to capture a face image for face matching as identity verification of the consignor (ST107). The work robot 1 photographs the face of the subject person (the consignor) with the camera 11 and transmits the captured image to the robot control server 2.
Next, the robot control server 2 extracts the face image of the subject person from the captured image acquired from the work robot 1 and transmits to the face matching server 8 a face matching request containing the face image and the identification information (name, user ID, etc.) of the consignor (ST108).
At this time, the face matching server 8 extracts facial feature information of the subject person from the face image, acquires the facial feature information of the consignor based on the consignor's identification information, and matches the two. The face matching server 8 then transmits a face matching response containing the matching result to the robot control server 2.
Next, the robot control server 2 determines, based on the face matching response received from the face matching server 8, whether the face matching as identity verification of the consignor succeeded (ST109).
If the face matching as identity verification of the consignor fails (No in ST109), the process ends.
On the other hand, if the face matching as identity verification of the consignor succeeds (Yes in ST109), the work robot 1 is instructed to receive the goods (ST110). The work robot 1 drives the manipulator 14 to receive the goods from the consignor.
Next, the robot control server 2 instructs the work robot 1 to move to the point where the goods are handed over (the position of the delivery destination user) as the destination (ST111). The operation of the work robot 1 at this time is similar to that when moving to the seat of the consignor.
Next, the robot control server 2 instructs the work robot 1 to capture a face image for face matching as identity verification of the delivery destination user (ST112). The operation of the work robot 1 at this time is similar to that of the face matching performed as identity verification of the consignor.
Next, the robot control server 2 extracts the face image of the subject person from the captured image acquired from the work robot 1 and transmits to the face matching server 8 a face matching request containing the face image and the identification information (name, user ID, etc.) of the delivery destination user (ST113). The operation of the face matching server 8 at this time is similar to that of the face matching performed as identity verification of the consignor.
Next, the robot control server 2 determines whether the face matching as identity verification of the delivery destination user succeeded (ST114).
When the face matching as identity verification of the delivery destination user succeeds (Yes in ST114), the work robot 1 is instructed to hand over the goods (ST115). The work robot 1 drives the manipulator 14 to hand the goods to the delivery destination user.
Next, the robot control server 2 designates the consignor as the recipient and transmits a delivery completion report chat message to the cloud server 5 (ST116). The cloud server 5 transmits the chat message to the user terminal 3, where it is displayed on the chat service screen.
On the other hand, when the face matching as identity verification of the delivery destination user fails, for example because the delivery destination user is absent (away from the seat) (No in ST114), the robot control server 2 instructs the work robot 1 to move to the point where the goods are to be returned (the position of the consignor) as the destination (ST117).
Next, the robot control server 2 instructs the work robot 1 to return the goods (ST118). The work robot 1 drives the manipulator 14 to hand the goods back to the consignor.
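Putting the steps of Fig. 7 together, a condensed sketch of the control flow might read as follows; every method on `server` and `robot` is an assumed wrapper for the interactions described above, not an API from the patent:

```python
def handle_delivery_request(server, robot, chat_message, sender_id):
    """Condensed control flow of Fig. 7 (ST101-ST118)."""
    job = server.analyze(chat_message)                  # ST102: job content, destination
    src = server.query_seat(sender_id)                  # ST103-ST104: consignor's seat
    dst = server.query_seat(job["destination_user"])
    if src is None or dst is None:                      # ST105: positions not identified
        return
    robot.move_to(src)                                  # ST106
    if not server.face_match(robot.capture_face(), sender_id):            # ST107-ST109
        return
    robot.receive_goods()                               # ST110
    robot.move_to(dst)                                  # ST111
    if server.face_match(robot.capture_face(), job["destination_user"]):  # ST112-ST114
        robot.deliver_goods()                           # ST115
        server.chat_reply(sender_id, "Delivery completed")  # ST116: completion report
    else:
        robot.move_to(src)                              # ST117: destination user absent
        robot.return_goods()                            # ST118
```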
Next, a case where a job of delivering an ordered item is requested of the work robot 1 will be described. Fig. 8 is an explanatory diagram showing an outline of the operation of the work robot 1 in the case where a job of order delivery is requested of it.
The user can request the work robot 1 to deliver, to the user's location, an item ordered from among the goods sold at a store within the facility. Here, in particular, the work robot 1 takes the ordered item from the store shelf and delivers it.
First, the user requests the robot control server 2 to perform the order delivery job. Here, the job request to the work robot 1 is made using the smart speaker 4 and the voice input/output service provided by the cloud server 5.
The user speaks an utterance requesting the order delivery job into the smart speaker 4. The smart speaker 4 picks up the user's speech with its microphone, performs voice recognition on it, and obtains an utterance message (text information) as the recognition result. The utterance message is provided to the robot control server 2 via the cloud server 5.
When the robot control server 2 receives the order delivery job request from the user based on the utterance message, it instructs the work robot 1 to perform the operations necessary for the order delivery job. The work robot 1 thereupon starts the order delivery job.
The work robot 1 first moves from its standby place to the position of the store (the place where the ordered item is picked up) and takes the ordered item from the store's goods shelf. Next, the work robot 1 carries the ordered item to the position of the user who is the requester and hands the item to the user. The order delivery job is thereby completed, and the work robot 1 returns to its standby place.
Here, the user speaks the job request in natural language: the user designates the robot as the target of the request and states the content of the requested job. In the case of order delivery, the user's natural-language utterance includes, as the content of the requested job, the name of the store from which the item is ordered and the name of the ordered item.
The smart speaker 4 transmits the utterance message (text information) resulting from the voice recognition to the cloud server 5, together with the user information registered in advance in the smart speaker 4 (the requester's user information). The voice recognition processing may instead be performed by the cloud server 5.
When the cloud server 5 receives the utterance message (text information) and the sender's user information from the smart speaker 4, it performs message analysis (natural language processing) on the utterance message to determine the service the user desires. When it determines that the desired service is the service using the work robot 1, the cloud server 5 transmits the utterance message and the sender's user information to the robot control server 2.
When the robot control server 2 receives the utterance message and the sender's user information from the cloud server 5, it performs message analysis (natural language processing) on the utterance message to acquire information on the content of the requested job. In the case of order delivery, the robot control server 2 acquires the identification information of the store (store name, store ID, etc.) and the identification information of the ordered item (product name, product ID, etc.). It also acquires the identification information (name, user ID, etc.) of the requester from the sender's user information.
Further, the robot control server 2 acquires the position information of the store from the store's identification information, based on a database held in the server, and can thereby instruct the work robot 1 to move to the store's position. It also acquires feature information on the appearance of the item from the item's identification information, based on the database, and can thereby instruct the work robot 1 to take the ordered item from the goods shelf. In addition, by querying the seat management server 7 for the seating status with the requester's identification information attached, the robot control server 2 can acquire the requester's position information.
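The patent says only that the robot control server resolves these items from databases held in the server; a minimal sketch with hypothetical database contents:

```python
# Hypothetical in-server databases; the patent states only that the robot
# control server holds store and product information internally.
STORES = {"kiosk-1": {"position": (4, 17)}}
PRODUCTS = {"coffee": {"shelf": 2, "appearance": "red 250 ml can"}}

def resolve_order(store_name: str, product_name: str,
                  requester_seat: tuple) -> dict:
    """Gather the detailed information needed to build the order delivery
    operation instruction of Fig. 9."""
    return {
        "pickup_position": STORES[store_name]["position"],
        "object_features": PRODUCTS[product_name],
        "handover_position": requester_seat,  # from the seat management server
    }
```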
When an order delivery job is requested of the work robot 1, settlement (payment for the item) is required for the purchase; this settlement may be performed by face authentication using the camera 11 of the work robot 1.
In an unmanned store, the work robot 1 may take the ordered item from the goods shelf itself, while in a staffed store a clerk may hand the ordered item to the work robot 1. The work robot 1 may also be asked to fetch objects other than store goods, for example documents stored at a distant location.
Next, the operation instruction information for the work robot 1 in the case of order delivery will be described. Fig. 9 is an explanatory diagram showing the content of the operation instruction information.
The robot control server 2 transmits to the work robot 1 operation instruction information for causing it to execute the order delivery job. The operation instruction information contains, as the operation content, information indicating each operation of the order delivery, together with detailed information about each operation.
In the operation instruction information, the following operations are set in order as the operation content. First, an operation of moving to the place where the object (the ordered item) is picked up (the store) as the destination; the position information of the store is attached as detailed information. Next, an object recognition operation of finding the object (the ordered item) among the goods displayed in the store, and an object gripping operation of picking up the found object; the feature information of the item is attached to these operations as detailed information. Next, an operation of moving to the place where the object (the item) is handed over (the requester) as the destination; the position information of the requester is attached as detailed information. Finally, an operation of handing the object to the user.
At this time, the robot control server 2 acquires the position information of the store from the store's identification information (store name), based on a database held in the server, and acquires the feature information of the item from the item's identification information (product name, etc.). It also queries the seat management server 7 for the seating status with the requester's identification information attached and thereby acquires the requester's position information (seat ID, etc.).
Face matching may also be performed when the item is handed over. This prevents the ordered item from being handed to the wrong user.
In the present embodiment, the object recognition processing for recognizing the ordered item is performed by the work robot 1, but the work robot 1 may instead only photograph the object (the item) with the camera 11, with the robot control server 2 performing the object recognition processing based on a product database held in the server.
Next, a case where a job guided to the destination is requested to the work robot 1 will be described. Fig. 10 is an explanatory diagram showing an outline of the operation of the work robot 1 in the case where the work guided to the work robot 1 is requested.
The user can request the work robot 1 for a destination guidance (route guidance) for guiding to a desired place in the facility. For example, when the user is a visitor such as a visitor, the user is not aware of a meeting room, an office, a toilet, or the like in the facility. In addition, in an office without a fixed seat, the visitor does not know where the person who is the meeting object sits. In this case, the work robot 1 performs the work of the outgoing guide according to the request of the user.
First, a user (visitor, etc.) requests the robot control server 2 for a job to be guided. Here, the job request to the work robot 1 is performed by using a Web site.
In the cloud server 5, a Web site that accepts a job request from a user is constructed. In the user terminal 3, a browser is started to access a Web site, and a screen (Web page) of the Web site for job request by the user is displayed. The user operates the display screen of the user terminal 3 to designate a location (location to be guided) as a destination, and requests the work robot 1 to perform destination guidance.
The cloud server 5 acquires the user's operation information concerning the destination guidance job request from the user terminal 3 and supplies it to the robot control server 2. Upon receiving the destination guidance job request based on the user's operation information acquired via the cloud server 5, the robot control server 2 instructs the work robot 1 to perform the operations necessary for destination guidance. The work robot 1 thereby starts the destination guidance job.
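One way to picture this relay is the handler below, which accepts the user's operation information from the Web page and forwards a destination guidance job request to the robot control server. The use of Flask, the endpoint path, the server address, and the JSON field names are all assumptions made for illustration.

```python
# Hypothetical relay on the cloud server 5: the Web page posts the user's
# operation information here, and the handler forwards a job request to the
# robot control server 2. Flask and all names below are assumptions.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
ROBOT_CONTROL_URL = "http://robot-control.example/jobs"  # assumed address

@app.post("/guidance")
def request_guidance():
    op = request.get_json()  # operation information from the user terminal 3
    job = {
        "job": "destination_guidance",
        "robot_id": op["robot_id"],        # work robot selected by the user
        "destination": op["destination"],  # place to be guided to
    }
    requests.post(ROBOT_CONTROL_URL, json=job)
    return jsonify({"status": "accepted"})
```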
At this time, the work robot 1 starts moving toward the place to be guided to. That is, the work robot 1 leads the user (visitor) to the destination.
Next, the operation instruction information for the work robot 1 in the case of destination guidance will be described. Fig. 11 is an explanatory diagram showing the content of the operation instruction information.
The robot control server 2 transmits, to the work robot 1, operation instruction information for causing it to execute the destination guidance job.
In the operation instruction information, a movement operation with the place to be guided to as the destination is set as the operation content. Position information of the destination (the place to be guided to) is added to the movement operation as detailed information.
At this time, the robot control server 2 acquires the position information of the destination (the place to be guided to) based on a database held in the present apparatus, generates operation instruction information to which the position information of the destination is added, and transmits it to the work robot 1.
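Under the same assumed encoding as the delivery sketch above, the destination guidance job reduces to a single movement entry; destination_db below is a hypothetical stand-in for the database held in the robot control server 2.

```python
# Hypothetical destination guidance instruction: one move action whose
# detailed information is the position of the place to be guided to.

def build_guidance_instruction(place_name, destination_db):
    destination_pos = destination_db[place_name]  # looked up in the server's database
    return [{"action": "move", "detail": {"destination": destination_pos}}]

# Example (assumed database entry):
# build_guidance_instruction("meeting_room_A", destination_db)
```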
Next, the case where destination guidance is requested of the work robot 1 using the Web site will be described in detail. Fig. 12 is an explanatory diagram showing the state of the job request on the user terminal 3 and the state of destination guidance by the work robot 1.
On the user terminal 3, a browser is started to access the Web site, and a screen (Web page) of the Web site for job requests is displayed.
At this time, a menu screen shown in Fig. 12(A) is first displayed on the user terminal 3 (tablet terminal). On the menu screen, the user can select the service to request. When the user selects destination guidance as the service, the display transitions to the robot selection screen shown in Fig. 12(B). On the robot selection screen, the user can select the work robot 1 to be entrusted with the destination guidance. When the user selects a work robot 1, the display transitions to the destination selection screen shown in Fig. 12(C). On the destination selection screen, the user can select the destination to be guided to. When the user selects the destination, the display transitions to the guidance start screen shown in Fig. 12(D). At this point, the work robot 1 starts the destination guidance job.
Meanwhile, when the work robot 1 starts the destination guidance job, a guidance sound announcing the start of guidance to the destination is output from the speaker 12, as shown in Fig. 12(E). When the work robot 1 reaches the destination, a guidance sound announcing arrival at the destination is output from the speaker 12, as shown in Fig. 12(F).
At this time, the user terminal 3 transmits user operation information indicating the operations performed by the user on each screen to the cloud server 5.
Upon receiving the user operation information from the user terminal 3, the cloud server 5 transmits a destination guidance job request to the robot control server 2 based on the user operation information. The destination guidance job request includes identification information of the work robot 1 that is to perform the destination guidance and information on the guidance destination.
Upon receiving the destination guidance job request from the cloud server 5, the robot control server 2 transmits operation instruction information to the work robot 1 selected by the user. The operation instruction information includes, as the operation content, information indicating the movement operation for the destination guidance job, and, as detailed information, the position information of the destination.
In a job request using the Web site as described above, the user performs only selection operations, so the message analysis processing required for job requests using the chat service or the smart speaker 4 is unnecessary.
In the present embodiment, the job request for goods delivery is made via the chat service, the job request for ordered-commodity delivery via the smart speaker 4, and the job request for destination guidance via the Web site, but the present disclosure is not limited to this. For example, the job request for ordered-commodity delivery may be made via the chat service, the job request for destination guidance via the smart speaker 4, and the job requests for goods delivery and ordered-commodity delivery via the Web site.
In the present embodiment, goods delivery, ordered-commodity delivery, and destination guidance have been described as examples of jobs executed by the work robot 1, but the present disclosure is not limited to these. For example, when a document must be stamped for approval by several persons in a fixed order, the work robot 1 may be instructed to carry the document to the next person in the sequence, as illustrated in the sketch below.
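Such a document-routing job could reuse the same assumed primitives, chaining movement and handover actions while resolving each approver's current seat through the seat management server 7. The sketch is purely illustrative.

```python
# Illustrative sequential routing of a document for seal approval; the
# seat_server.lookup_seat interface is the same assumption as above.

def build_approval_route(approver_ids, seat_server):
    actions = []
    for approver in approver_ids:
        seat = seat_server.lookup_seat(approver)  # works for free-address seating too
        actions.append({"action": "move", "detail": {"destination": seat}})
        actions.append({"action": "handover", "detail": {"recipient": approver}})
    return actions
```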
As described above, the embodiments have been described as examples of the technology disclosed in the present application. However, the technology of the present disclosure is not limited to these embodiments and can also be applied to embodiments with modifications, substitutions, additions, omissions, and the like. The components described in the above embodiments may also be combined to form new embodiments.
Industrial applicability
The autonomous robot system and the autonomous robot control method according to the present disclosure have the effect that a system for causing an autonomous robot to execute a job requested by a user can be constructed inexpensively, without newly providing a dedicated system for users to request jobs from the autonomous robot in addition to the system that controls the autonomous robot. They are therefore useful as an autonomous robot system and an autonomous robot control method that cause an autonomous robot to execute a requested job in response to a request from a user.
Description of the reference numerals
1: work robots (autonomous robots); 2: a robot control server (control unit); 3: a user terminal; 4: intelligent speaker (sound input terminal); 5: a cloud server; 6: a camera; 7: a seat management server (stay management server); 8: a face matching server; 11: a camera; 12: a speaker; 13: a walking unit; 14: a controller; 15: a manipulator; 101: a post message input field; 102: a reply message display column; 103: and replying to the message display column.

Claims (7)

1. An autonomous robot system for causing an autonomous robot to execute a requested job in response to a request for the job from a user, the autonomous robot system comprising:
a stay management server that manages the location of a user staying in a facility;
the autonomous robot, which travels within the facility; and
a control unit that controls the operation of the autonomous robot,
wherein the control unit performs the following processing:
acquiring input information of a job request input by a user who has joined a social networking service, via the social networking service system;
acquiring requested job information related to the content of the job requested by the user, based on the input information of the job request;
acquiring position information of the user as a destination from the stay management server; and
controlling the operation of the autonomous robot based on the requested job information and the position information of the user.
2. The autonomous robot system as claimed in claim 1, further comprising:
a face matching server that performs face matching as personal confirmation of the user,
wherein the autonomous robot has a camera for photographing the face of the user, and
the control unit supplies the face matching server with the face image captured by the camera and the user identification information acquired from the input information of the job request, causes the face matching server to perform the face matching, and acquires the face matching result from the face matching server.
3. The autonomous robot system as claimed in claim 1, wherein,
when the job requested by the user is delivery of goods to another user as the delivery destination,
the control unit acquires position information of the user who is the requester and position information of the user who is the delivery destination, and instructs the autonomous robot to move based on the position information.
4. The autonomous robot system as claimed in claim 1, wherein,
when the job requested by the user is delivery of an ordered commodity,
the control unit acquires position information of the user who is the requester and position information of the place where the ordered commodity is to be received, and instructs the autonomous robot to move based on the position information.
5. The autonomous robot system as claimed in claim 1, wherein,
when the job requested by the user is destination guidance,
the control unit acquires position information of the place that is the guidance destination, and instructs the autonomous robot to move based on the position information.
6. The autonomous robot system as claimed in claim 1, wherein,
when the job requested by the user is completed normally,
the control unit notifies the user who is the requester of a job completion report via the social networking service system.
7. A control method for an autonomous robot that causes the autonomous robot to execute a requested job in response to a request from a user, wherein
a control unit that controls the operation of the autonomous robot traveling within a facility performs the following processing:
acquiring input information of a job request input by a user who has joined a social networking service, via the social networking service system;
acquiring requested job information related to the content of the job requested by the user, based on the input information of the job request;
acquiring position information of the user as a destination from a stay management server that manages the locations of users staying within the facility; and
controlling the operation of the autonomous robot based on the requested job information and the position information of the user.

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2021-096042 2021-06-08
JP2021096042A JP2022187839A (en) 2021-06-08 2021-06-08 Autonomous robot system and method for controlling autonomous robot
PCT/JP2022/021264 WO2022259863A1 (en) 2021-06-08 2022-05-24 Autonomous robot system, and method for controlling autonomous robot

Publications (1)

Publication Number Publication Date
CN117461043A (en) 2024-01-26

Family

ID=84425905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280041033.3A Pending CN117461043A (en) 2021-06-08 2022-05-24 Autonomous robot system and control method for autonomous robot

Country Status (3)

Country Link
JP (1) JP2022187839A (en)
CN (1) CN117461043A (en)
WO (1) WO2022259863A1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6072727B2 (en) * 2014-05-14 2017-02-01 シャープ株式会社 CONTROL DEVICE, SERVER, CONTROLLED DEVICE, CONTROL SYSTEM, CONTROL DEVICE CONTROL METHOD, SERVER CONTROL METHOD, AND CONTROL PROGRAM
JP2017209769A (en) * 2016-05-27 2017-11-30 グローリー株式会社 Service system and robot
JP6844135B2 (en) * 2016-07-05 2021-03-17 富士ゼロックス株式会社 Mobile robots and mobile control systems

Also Published As

Publication number Publication date
WO2022259863A1 (en) 2022-12-15
JP2022187839A (en) 2022-12-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination