CN112247987A - Robot scheduling method and device, robot and storage medium - Google Patents

Robot scheduling method and device, robot and storage medium

Info

Publication number: CN112247987A
Authority: CN (China)
Prior art keywords: robot, user, task, task type, user intention
Legal status: Pending
Application number: CN202011050708.XA
Other languages: Chinese (zh)
Inventors: 夏舸, 梁朋
Current Assignee: Uditech Co Ltd
Original Assignee: Uditech Co Ltd
Application filed by Uditech Co Ltd
Priority to: CN202011050708.XA
Publication of: CN112247987A

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661: Programme controls characterised by task planning, object-oriented languages
    • B25J9/1679: Programme controls characterised by the tasks executed
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems
    • B25J11/00: Manipulators not otherwise provided for
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)

Abstract

The application is applicable to the field of robot control, and provides a robot scheduling method and device, a robot, and a storage medium. The robot scheduling method comprises the following steps: collecting voice data of a user based on a first robot, and identifying the user intention indicated by the voice data; determining, according to the user intention, the task type corresponding to the user intention; judging whether the task type of the associated task that the first robot can process is the same as the task type corresponding to the user intention; and if the two task types are different, acquiring the real-time position of the user and scheduling a second robot to travel to the real-time position, so that the second robot responds to the user intention of the user. The embodiment of the application can improve the scheduling efficiency of robots.

Description

Robot scheduling method and device, robot and storage medium
Technical Field
The application belongs to the field of robot control, and particularly relates to a robot scheduling method and device, a robot and a storage medium.
Background
A robot is an intelligent terminal capable of semi-autonomous or fully-autonomous operation, and is currently used in many scenarios. For example, in application scenarios such as hotels, KTV or office buildings, robots are used to provide services to users. In order to meet the needs of users, robots with different functions, such as a transfer robot, a vending robot, or a greeting robot, are often configured in the same scene, and the robots with different functions can provide different services for the users.
However, when robots provide services to users, current robot scheduling is poorly targeted and service efficiency is low.
Disclosure of Invention
The embodiment of the application provides a robot scheduling method and device, a robot, and a storage medium, which can address the poor targeting and low service efficiency of robot scheduling in the prior art.
A first aspect of an embodiment of the present application provides a robot scheduling method, including:
collecting voice data of a user based on a first robot, and identifying the user intention indicated by the voice data;
determining a task type corresponding to the user intention according to the user intention;
judging whether the task type of the associated task which can be processed by the first robot is the same as the task type corresponding to the user intention;
and if the task type of the associated task which can be processed by the first robot is different from the task type corresponding to the user intention, acquiring the real-time position of the user, and scheduling a second robot to move to the real-time position according to the real-time position so as to enable the second robot to respond to the user intention of the user.
In a possible implementation manner of the first aspect, the scheduling, according to the real-time location, a second robot to travel to the real-time location so that the second robot responds to the user intention of the user includes: screening out one or more second robots in the application scene where the first robot is located, wherein the task type of the associated task that the second robots can process is the same as the task type corresponding to the user intention; and scheduling, among the one or more second robots, the second robot that has the shortest path to the first robot and whose use state is an idle state to travel to the real-time location, and causing that second robot to perform human-computer interaction with the user to respond to the user intention of the user, wherein the use state of the second robot comprises the idle state and a scheduled-occupied state.
In a possible implementation manner of the first aspect, the scheduling, among the one or more second robots, of the second robot that has the shortest path to the first robot and is in an idle state to travel to the real-time location includes: establishing a communication connection with the one or more second robots based on a preset channel, and broadcasting a request reply instruction through the communication connection; receiving feedback information fed back by a second robot in response to the request reply instruction, and determining, according to the feedback information, the second robots whose use state is an idle state; and scheduling the idle second robot with the shortest path to the first robot to travel to the real-time location.
In a possible implementation manner of the first aspect, the request reply instruction instructs a second robot to respond to the instruction and feed back feedback information only if its use state is an idle state. In that case, receiving the feedback information fed back by the second robot and determining the second robots whose use state is an idle state includes: receiving the feedback information, and determining that any second robot that fed back the feedback information is in an idle state.
In a possible implementation manner of the first aspect, the method further includes: if the task type of the associated task that the first robot can process is the same as the task type corresponding to the user intention, responding to the user intention of the user based on the first robot.
In one possible implementation manner of the first aspect, the responding, by the second robot, to the user intention of the user includes: the second robot receives a control instruction triggered by the user; executes the control instruction; and acquires the execution status of the control instruction and sends the execution status to a user terminal associated with the user.
In a possible implementation manner of the first aspect, the judging whether the task type of the associated task that the first robot can process is the same as the task type corresponding to the user intention includes: acquiring the task type of the associated task that the first robot can process; computing a relevant matching value between that task type and the task type corresponding to the user intention; if the relevant matching value is greater than or equal to a preset matching value, the task type of the associated task that the first robot can process is the same as the task type corresponding to the user intention; otherwise, the two task types are different.
A second aspect of the embodiments of the present application provides a robot scheduling apparatus, including:
the recognition unit is used for collecting voice data of a user based on the first robot and recognizing the user intention indicated by the voice data;
the determining unit is used for determining the task type corresponding to the user intention according to the user intention;
the judging unit is used for judging whether the task type of the related task which can be processed by the first robot is the same as the task type corresponding to the user intention;
and the scheduling unit is used for acquiring the real-time position of the user if the task type of the associated task which can be processed by the first robot is different from the task type corresponding to the user intention, and scheduling a second robot to move to the real-time position according to the real-time position so as to enable the second robot to respond to the user intention of the user.
A third aspect of the embodiments of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the above method when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the above method.
A fifth aspect of embodiments of the present application provides a computer program product, which, when run on a robot, causes the robot to perform the steps of the above method.
According to the embodiment of the application, voice data of a user are first collected based on a first robot, and the user intention is identified. When the user needs a robot's service, there is no need to operate through other terminals or other means; the user speaks directly to the first robot, which reduces the operation complexity for the user. Second, the task type that the first robot can process may differ from the task type corresponding to the user intention. That is, the user may interact with a first robot whose task type does not match the user intention, and the first robot collects the user's voice data and schedules a second robot to travel to the real-time location to serve the user. In practical application, the user therefore does not need to specifically look for a robot whose task type matches the user intention, which further reduces the operation complexity. In this way, the scheduling efficiency of the robots can be effectively improved, the service efficiency of each robot is improved, and the convenience of using the robots is increased.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of an implementation of a robot scheduling method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a specific implementation of step S103 according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of an implementation of scheduling a second robot according to an embodiment of the present application;
fig. 4 is a schematic flow chart of an implementation of the screening second robot provided in the embodiment of the present application;
FIG. 5 is a schematic flow chart of an implementation of a second robot feeding back an execution status according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a robot scheduling device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a robot provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
In order to explain the technical means of the present application, the following description will be given by way of specific examples.
Fig. 1 shows a schematic implementation flow diagram of a robot scheduling method provided in an embodiment of the present application, where the method may be applied to a robot and may be applied to a situation where robot scheduling efficiency needs to be improved.
Specifically, the robot scheduling method may include the following steps S101 to S104.
Step S101, voice data of a user are collected based on a first robot, and the user intention indicated by the voice data is recognized.
In the embodiment of the application, a plurality of robots may be configured in a current application scenario, and each robot may be configured with a voice input device such as a microphone capable of collecting user voice data.
The size of the application scene can be set by technicians according to actual requirements. For example, it can refer to one floor or the whole building of a hotel, or the whole building of an office building, and so on.
In some embodiments of the present application, a robot within the current application scenario may utilize a voice input device to collect voice data of a user in real-time. Or, in other embodiments of the present application, the robot may further be configured with an infrared sensor, and when the infrared sensor configured on the robot detects that there is a person within a preset distance range, the robot starts to collect voice data of the user by using the voice input device.
When the user arrives in the current application scene and needs robot service, the user can input voice within the preset distance range of any robot in the scene. The voice input device of that robot then collects the voice data spoken by the user, and the robot that collects the user's voice data is the first robot in the embodiment of the present application.
The preset distance range can be adjusted according to actual conditions of noise, pedestrian volume and the like of the current application scene environment. For example, the distance may be within 1 meter of the location of the first robot, within 3 meters of the location of the first robot, etc.
In an embodiment of the application, after the voice data of the user is collected, the first robot may perform semantic recognition on the voice data and thereby determine the user intention indicated by the voice data. The user intention refers to the service task that the user needs the robot to execute.
And step S102, determining the task type corresponding to the user intention according to the user intention.
The task type refers to the type of service that the user needs the robot to perform, such as carrying articles, guiding guests, or vending goods.
In practical application, based on the result of semantic recognition, the first robot may identify the specific task content of the target task and then determine the target task type corresponding to that content. Alternatively, the specific task content may remain undetermined, and only the target task type is identified directly.
For example, the user says to the first robot within the preset distance range of the position of the first robot: "Please take me to the lobby front desk." When the first robot captures this voice information, semantic recognition can identify the user intention as "guide the user to the lobby front desk", and the task type corresponding to the user intention is the guided-welcome type. In this case, the task content of the target task is clear and definite.
For another example, the user says to the first robot within the preset distance range of the position of the first robot: "I need the carrying service." When the first robot captures this voice information, semantic recognition only reveals that the user wants a carrying service; the specific content cannot be known. For example, the starting point and destination of the transport cannot be determined. Nevertheless, according to the keyword "carry", the task type corresponding to the user intention can still be identified as the carried-article type.
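For illustration, the following is a minimal Python sketch of steps S101 and S102 under the assumption of a keyword-based recognizer. The keyword table, the UserIntent structure, and the identify_intent function are illustrative stand-ins, not part of the claimed method; a deployed system would typically use a trained language-understanding model instead.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative keyword table mapping task types to trigger phrases;
# a real deployment would tune these or replace them with an NLU model.
TASK_TYPE_KEYWORDS = {
    "carry_item": ["carry", "transport", "luggage"],
    "guide_welcome": ["take me to", "guide", "front desk"],
    "vending": ["buy", "drink", "snack"],
}

@dataclass
class UserIntent:
    task_type: str
    task_content: str  # raw recognized text; completeness is confirmed later

def identify_intent(recognized_text: str) -> Optional[UserIntent]:
    """Steps S101-S102: map recognized speech to the corresponding task type."""
    text = recognized_text.lower()
    for task_type, keywords in TASK_TYPE_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            # "Please take me to the lobby front desk" fixes the full content;
            # "I need the carrying service" fixes only the task type.
            return UserIntent(task_type=task_type, task_content=recognized_text)
    return None  # intention not recognized; the robot may ask the user to rephrase
```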
Step S103, judging whether the task type of the associated task that the first robot can process is the same as the task type corresponding to the user intention.
The associated task refers to a task that the first robot is responsible for handling.
In consideration of limitations such as robot hardware and cost and the venue of the current application scene, a robot in the current application scene is often only responsible for processing tasks of one or a few task types. For example, a transfer robot and a welcome robot are arranged in the current application scene, where the transfer robot is responsible for the associated task of the carried-article type and the welcome robot is responsible for the associated task of the guided-welcome type.
In practical application, because a user often inputs voice data to an arbitrary robot in the current application scene, the task type of the associated task that the first robot is responsible for may differ from the task type corresponding to the user intention. In order to meet the user's requirements, the embodiment of the present application therefore further judges whether the task type of the associated task that the first robot can process is the same as the task type corresponding to the user intention.
And step S104, if the task type of the related task which can be processed by the first robot is different from the task type corresponding to the user intention, acquiring the real-time position of the user, and scheduling the second robot to move to the real-time position according to the real-time position so that the second robot responds to the user intention of the user.
In the embodiment of the present application, if the task type of the associated task that the first robot can handle is different from the task type corresponding to the user intention, the first robot cannot handle the service corresponding to the user intention. In order to meet the user's requirements, the first robot needs to schedule a second robot capable of responding to the user intention, where the task type of the associated task that the second robot can process is the same as the task type corresponding to the user intention. Therefore, the real-time location, i.e. the user's current location, can be obtained, and the second robot is then scheduled to travel to the real-time location so that it responds to the user intention.
For example, after entering the current application scene, a user inputs voice data into a first robot responsible for handling the associated task of the carried-article type. If the first robot collects the voice data and identifies that the task type corresponding to the user intention is the guided-welcome type, the task type of the associated task that the first robot can process differs from the task type corresponding to the user intention. The real-time position of the user is therefore obtained, and a second robot is dispatched to that position to respond to the user intention.
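Putting steps S101 to S104 together, the following Python sketch shows one possible control flow on the first robot. It assumes the identify_intent helper sketched above and the can_handle and pick_second_robot helpers sketched later in this description; speech_to_text(), locate_user(), respond(), and dispatch() are hypothetical interfaces of the robot platform, not part of the claimed method.

```python
def handle_user_speech(first_robot, audio, fleet, path_length):
    text = first_robot.speech_to_text(audio)          # S101: recognize speech
    intent = identify_intent(text)                    # S101: user intention
    if intent is None:
        return                                        # intention not recognized
    required = intent.task_type                       # S102: task type
    if can_handle(first_robot.task_types, required):  # S103: same task type?
        first_robot.respond(intent)                   # first robot serves the user
        return
    location = first_robot.locate_user()              # S104: real-time position
    second = pick_second_robot(fleet, required, path_length)
    if second is not None:
        second.dispatch(location, intent)             # S104: schedule second robot
```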
In the embodiment of the application, voice data of the user can be collected based on the first robot, and the user intention indicated by the voice data is recognized. Then, according to the user intention, the task type corresponding to the user intention is determined. Next, it is judged whether the task type of the associated task that the first robot can process is the same as the task type corresponding to the user intention. If the two differ, the real-time position of the user is obtained, and a second robot is scheduled to travel to the real-time position so that it responds to the user intention. As a result, the user speaks directly to the nearest robot instead of operating other terminals, and does not need to look for a robot whose task type matches the user intention; both reduce the operation complexity for the user. The scheduling efficiency of the robots can thus be effectively improved, the service efficiency of each robot is improved, and the convenience of using the robots is increased.
In the embodiment of the present application, as shown in fig. 2, the judging of whether the task type of the associated task that the first robot can process is the same as the task type corresponding to the user intention may specifically include the following steps S201 to S204.
Step S201, a task type of the associated task that the first robot can process is obtained.
The associated task refers to a task that the first robot is responsible for handling, and its task type is the task type corresponding to that task. For example, a transfer robot and a welcome robot are arranged in the current application scene: the transfer robot is responsible for the associated task of the carried-article type, and the welcome robot is responsible for the associated task of the guided-welcome type.
Step S202, computing the relevant matching value between the task type of the associated task that the first robot can process and the task type corresponding to the user intention.
In the embodiment of the application, after the task type of the associated task of the first robot is acquired, the relevant matching value between that task type and the task type corresponding to the user intention can be computed. The relevant matching value represents the ability of the first robot to respond to the user intention.
Step S203, if the relevant matching value is greater than or equal to the preset matching value, the task type of the associated task that the first robot can process is the same as the task type corresponding to the user intention.
The preset matching value is a threshold value used for judging whether the processing capacity of the first robot can respond to the user intention, and the specific value of the preset matching value can be adjusted according to the actual situation.
In the embodiment of the application, if the relevant matching value is greater than or equal to the preset matching value, the ability of the first robot to respond to the user intention meets the requirement; that is, the first robot can respond to the user intention and serve the user. Accordingly, it can be determined that the task type of the associated task processable by the first robot is the same as the task type corresponding to the user intention.
Step S204, otherwise, the task type of the associated task that the first robot can process is different from the task type corresponding to the user intention.
If the relevant matching value is smaller than the preset matching value, the ability of the first robot to respond to the user intention does not meet the requirement; that is, the first robot cannot respond to the user intention or serve the user. Accordingly, it can be determined that the task type of the associated task processable by the first robot is not the same as the task type corresponding to the user intention.
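The description does not fix how the relevant matching value is computed. The Python sketch below assumes the simplest case, where an exact capability match scores 1.0 and anything else scores 0.0, and PRESET_MATCH_VALUE is an illustrative threshold.

```python
PRESET_MATCH_VALUE = 0.8  # illustrative preset matching value

def match_value(robot_task_types: set, required_type: str) -> float:
    """Step S202: the relevant matching value between capability and need."""
    # A richer scorer could grade partially overlapping capabilities.
    return 1.0 if required_type in robot_task_types else 0.0

def can_handle(robot_task_types: set, required_type: str) -> bool:
    """Steps S203/S204: compare the matching value against the preset value."""
    return match_value(robot_task_types, required_type) >= PRESET_MATCH_VALUE
```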
After it is judged that the task type of the associated task that the first robot can process is different from the task type corresponding to the user intention, in order to ensure a timely response to the user intention, the implementation of the present application can screen out the second robot that can respond to the user intention the fastest. Specifically, as shown in fig. 3, this may include the following steps S301 to S302.
Step S301, one or more second robots are screened out in the application scene where the first robot is located.
The task type of the associated task that the second robots can process is the same as the task type corresponding to the user intention.
In the embodiment of the application, the first robot cannot respond to the user intention because the task type of the associated task that it can process differs from the task type corresponding to the user intention. Therefore, in order to meet the user's needs, second robots capable of responding to the user intention need to be screened out from the current application scene; that is, the task type of the associated task that a screened second robot can process should be the same as the task type corresponding to the user intention.
Step S302, among the one or more second robots, the second robot that has the shortest path to the first robot and whose use state is an idle state is scheduled to travel to the real-time location, where it performs human-computer interaction with the user to respond to the user intention of the user.
The use state of a second robot is either an idle state or a scheduled-occupied state, and it indicates whether the robot is executing a task. When the robot is executing a task, its use state is the scheduled-occupied state; when it is not, its use state is the idle state.
In some embodiments of the present application, if an arbitrary second robot in the current scene were chosen to respond to the user intention, its distance from the first robot might be large. Since the user inputs voice data in the vicinity of the first robot, the user's real-time location is near the first robot, so a distant second robot would take a long time to reach the user, increasing the waiting time and reducing service efficiency.
The embodiment of the application therefore considers the path between each second robot and the first robot, so that the chosen second robot reaches the user's real-time location quickly, which avoids a long wait and improves the efficiency of serving the user.
Further, if a second robot in the scheduled-occupied state were used to serve the user, it would first have to finish the task it is currently executing before traveling to the real-time location, which increases the user's waiting time and may overload a single robot with tasks. Therefore, in an embodiment of the present application, the second robot that has the shortest path to the first robot and is in an idle state is scheduled to travel to the real-time location in response to the user intention.
Specifically, the second robot may travel to a real-time location for human-machine interaction with the user in response to the user's intent.
In some embodiments of the present application, the first robot may send indication information carrying the real-time location to the second robot to inform it of the user's real-time location. The first robot may also output voice or text to inform the user about the second robot. For example, the first robot may say to the user: "Hello, carrier robot #001 will serve you."
After receiving the indication information, the second robot can travel to the user's real-time location, interact with the user to confirm the specific content of the user intention, respond to the user intention according to that content, and complete the service for the user.
Specifically, in some embodiments of the application, if the first robot can directly recognize the specific content of the user intention after collecting the voice data, it may send the specific content to the second robot, which confirms it with the user. If the specific content was not identified from the collected voice data, the second robot can perform further human-computer interaction with the user to obtain it and then serve the user accordingly.
For example, the user says to the first robot within the preset distance range of the position of the first robot: "I need the carrying service." When the first robot collects this voice data, semantic recognition can identify the task type corresponding to the user intention as the carried-article type, but not the specific content of the user intention. Since the task type of the associated task that the first robot can handle is not the carried-article type, the first robot schedules a second robot (a transfer robot) to travel to the user's real-time location. After arriving, the second robot may output voice such as "Hello, happy to serve you. What needs to be carried?" or "Where should it be carried to?", obtain the specific content of the user intention through human-computer interaction, and then serve the user according to that content.
According to the embodiment of the application, one or more second robots can be screened out in the application scene where the first robot is located, where the task type of their associated tasks is the same as the task type corresponding to the user intention. Then, among the one or more second robots, the second robot that has the shortest path to the first robot and is in an idle state is scheduled to travel to the real-time location and perform human-computer interaction with the user. In this way, the second robot responding to the user intention both meets the user's requirement and reaches the user's real-time location in the shortest time, so it can respond to the user intention immediately, improving the efficiency with which the second robot serves the user.
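A minimal Python sketch of steps S301 and S302 follows. The Robot structure, the idle flag, and the path_length() route-cost query are assumed interfaces introduced for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Robot:
    robot_id: str
    task_types: set = field(default_factory=set)
    idle: bool = True  # use state: True = idle, False = scheduled-occupied

def pick_second_robot(robots, required_type, path_length):
    """Steps S301-S302: same task type, idle, shortest path to the first robot."""
    # S301: screen out second robots whose task type matches the user intention
    # and whose use state is idle.
    candidates = [r for r in robots if required_type in r.task_types and r.idle]
    if not candidates:
        return None  # no second robot can respond right now
    # S302: among the candidates, pick the one with the shortest path.
    return min(candidates, key=lambda r: path_length(r.robot_id))
```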
In order to determine which second robots are in an idle state, the first robot may establish connections with the second robots. Specifically, as shown in fig. 4, scheduling the idle second robot with the shortest path to the first robot to travel to the real-time location may include the following steps S401 to S403.
Step S401, establishing communication connection with one or more second robots based on a preset channel, and broadcasting a request reply instruction through the communication connection.
In some embodiments of the present application, after screening out one or more second robots, the first robot may establish a communication connection with them based on a preset channel, for example through wireless technologies such as WiFi, Bluetooth, or the Internet of Things.
After establishing a communication connection with one or more second robots based on a preset channel, the first robot may broadcast a request reply instruction for instructing the second robots to feed back information to the first robot through the communication connection.
And step S402, receiving feedback information fed back by the second robot responding to the request reply instruction, and determining the second robot with the idle state according to the feedback information.
That is, in some embodiments of the present application, after the first robot broadcasts a request reply instruction, the second robot may send feedback information to the first robot in response to the instruction. The first robot can determine the second robot with the use state being the idle state according to the feedback information fed back by the second robot.
In some embodiments of the application, the request reply instruction is used to instruct the second robot to respond to the request reply instruction and feed back the feedback information if the use state of the second robot is an idle state. The first robot may receive the feedback information after broadcasting the request reply command, and determine that the second robot that feeds back the feedback information is the second robot in the idle state.
That is, if the use state of the second robot is the idle state, the second robot may respond to the request reply command and feed back the feedback information to the first robot. And if the use state of the second robot is the scheduling occupation state, the second robot does not feed back the feedback information to the first robot after receiving the request reply instruction. Therefore, when the first robot receives the feedback information, it can be determined that the second robot that feeds back the feedback information is the second robot whose use state is the idle state.
Step S403, a second robot in the shortest path with the first robot and in an idle state is scheduled to go to the real-time location.
In the embodiment of the application, the first robot establishes a communication connection with the second robot, and broadcasts the request reply instruction through the communication connection. And then receiving feedback information fed back by the second robot in response to the request reply instruction, and determining the second robot in an idle state according to the feedback information. And then scheduling a second robot which is the shortest path with the first robot and is in an idle state to go to a real-time position. Therefore, the embodiment of the application can realize the scheduling of the robot through the interaction between the first robot and the second robot.
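A sketch of this request-reply exchange in Python is shown below. The channel object with broadcast() and poll() is an assumed publish-subscribe abstraction over whatever transport is actually deployed, and the message fields are illustrative.

```python
import time

def collect_idle_robots(channel, timeout_s: float = 1.0) -> list:
    """Steps S401-S402 on the first robot: broadcast and gather replies."""
    channel.broadcast({"type": "REQUEST_REPLY"})
    idle_ids = []
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        msg = channel.poll(timeout=deadline - time.monotonic())
        if msg and msg.get("type") == "REPLY":
            idle_ids.append(msg["robot_id"])  # replying implies an idle state
    return idle_ids

def on_request_reply(channel, robot_id: str, is_idle: bool) -> None:
    """Second-robot side: an occupied robot stays silent (no feedback)."""
    if is_idle:
        channel.broadcast({"type": "REPLY", "robot_id": robot_id})
```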
Some abnormal situations may occur while the second robot responds to the user intention. For example, when the second robot is a transfer robot, an article being carried may slip off the rack. Therefore, in order to let the user learn of abnormal situations in time, in some embodiments of the application, as shown in fig. 5, the second robot's response to the user intention may include the following steps S501 to S503.
In step S501, the second robot receives a control instruction triggered by a user.
In an embodiment of the application, after the second robot reaches the real-time location where the user is located, a control instruction triggered by the user may be received. The control instruction is used for instructing the robot to execute the service corresponding to the user intention.
The specific triggering mode of the instruction can be selected according to actual conditions. For example, in some embodiments of the present application, the user may send the control instruction to the second robot through a user terminal carried by the user. Or, in other embodiments of the present application, the user may also directly perform human-computer interaction with the second robot, and input voice or text to the second robot, so that the second robot receives the control instruction.
Step S502, a control instruction is executed.
Specifically, in some embodiments of the application, the control instruction may carry specific content intended by the user, and after receiving the control instruction triggered by the user, the second robot may execute the specific content carried in the instruction according to the control instruction.
Step S503, acquiring the execution status of the control instruction and sending it to the user terminal associated with the user.
The user terminal can be the user's smartphone, computer, or the like.
In some embodiments of the present application, the user may bind a user terminal with his or her user information in advance. After acquiring the execution status of the control instruction, the second robot may send it to the user terminal associated with the user information.
The manner of acquiring the execution status may be selected according to the actual situation.
In some embodiments of the present application, the second robot may be configured with a camera, and in the process of executing the control instruction, send an image collected by the camera to a user terminal associated with the user.
In another embodiment of the present application, when the second robot is a transfer robot, it may be provided with a pressure sensor. The second robot can use the pressure sensor to determine the weight of the goods carried on its shelf or in its storage bin. It then acquires its own position, determines the execution status of the control instruction from its position and the weight of the goods, and sends that status to the user terminal associated with the user.
For example, if the distance between the second robot's position and the target position of the task is less than or equal to a preset distance and the weight of the goods on the shelf is 0, it is determined that the control instruction has been executed completely. If the distance is greater than the preset distance but the weight of the goods on the shelf has decreased, the goods may have fallen off or been taken away by someone else, so it can be determined that the control instruction has not been fully executed and an abnormality occurred during execution. In other embodiments, in order to keep the user's articles safe while carrying them, the second robot can monitor the articles through a camera or lock them in a storage bin to prevent them from being taken away.
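The status logic just described can be sketched as follows. PRESET_DISTANCE_M, the weight readings, and send_to_terminal() are illustrative assumptions, not part of the claimed method.

```python
PRESET_DISTANCE_M = 0.5  # illustrative arrival threshold

def execution_status(distance_to_target_m: float,
                     initial_weight_kg: float,
                     current_weight_kg: float) -> str:
    """Infer the execution status of a transfer task from position and load."""
    if distance_to_target_m <= PRESET_DISTANCE_M and current_weight_kg == 0:
        return "completed"   # arrived and the goods were taken off the shelf
    if distance_to_target_m > PRESET_DISTANCE_M \
            and current_weight_kg < initial_weight_kg:
        return "abnormal"    # goods fell off or were taken away en route
    return "in_progress"

def report(robot, user_terminal, send_to_terminal):
    """Step S503: send the execution status to the user's terminal."""
    status = execution_status(robot.distance_to_target(),
                              robot.initial_weight(), robot.shelf_weight())
    send_to_terminal(user_terminal, {"robot": robot.robot_id, "status": status})
```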
In other embodiments of the present application, when the second robot is a greeting robot, it may acquire its own position and send the position information to the user terminal associated with the user, for example the name of the location and the distance between that location and the target location the user intends to reach.
In the embodiment of the application, the second robot receives the control instruction triggered by the user and executes it. It then acquires the execution status of the control instruction and sends the status to the user terminal associated with the user, so that the user can check the execution status through the terminal. This improves the user experience and lets the user learn of abnormal situations in time so that they can be handled promptly.
Further, in some embodiments of the present application, the second robot may further obtain an additional instruction triggered by the user in the process of responding to the user's intention, and execute the additional instruction.
The additional instruction is a control instruction issued to the robot when the user needs to supplement the original intention. The task to which it points may supplement the specific content of the user's original intention or may be another task entirely.
When the task to which the additional instruction points is another task different from the original intention of the user, the second robot may identify the priority of the task to which the additional instruction points and the original intention, and determine the execution order according to the priority.
Taking a transfer robot as the second robot: while the transfer robot executes the control instruction "carry the baggage into room A", the user observes through the user terminal that the baggage has been carried into room A. The user can then send an additional instruction via the user terminal asking the transfer robot to put the baggage into the wardrobe in room A. After acquiring the additional instruction, the second robot executes it and carries the baggage into the wardrobe in room A.
Taking a greeting robot as the second robot: while the second robot is guiding the user to room B, the user may say "go to room C first". The second robot recognizes the additional instruction "go to room C", and since the task it points to has a higher priority than the user's original intention, the additional instruction is executed first.
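One simple way to realize this ordering is a priority queue, as in the Python sketch below; the numeric priorities and the Task structure are illustrative assumptions.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int                        # lower value = executed first
    description: str = field(compare=False)

queue = []
heapq.heappush(queue, Task(2, "guide the user to room B"))  # original intention
heapq.heappush(queue, Task(1, "guide the user to room C"))  # additional instruction
while queue:
    print("executing:", heapq.heappop(queue).description)
# executing: guide the user to room C
# executing: guide the user to room B
```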
In other embodiments of the present application, if the task type of the associated task that the first robot can process is the same as the task type corresponding to the user intention, the first robot itself can respond to the user intention and complete the service for the user.
Specifically, if the first robot can directly recognize the specific content of the user intention after collecting the voice data, it can directly serve the user according to that content. If the specific content was not identified from the collected voice data, the first robot can perform further human-computer interaction with the user to obtain it and then serve the user accordingly.
For the specific operation of the first robot responding to the user's intention, reference may be made to the specific description in fig. 5, which is not described in detail herein.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts, as some steps may, in accordance with the present application, occur in other orders.
Fig. 6 is a schematic structural diagram of a robot scheduling apparatus 600 according to an embodiment of the present disclosure, where the robot scheduling apparatus 600 is disposed on a mobile robot. The robot scheduling apparatus 600 may include: a recognition unit 601, a determination unit 602, a judgment unit 603, and a scheduling unit 604.
The recognition unit 601 is used for collecting voice data of a user based on a first robot and recognizing the user intention indicated by the voice data;
a determining unit 602, configured to determine, according to the user intention, a task type corresponding to the user intention;
a judging unit 603, configured to judge whether the task type of the associated task that the first robot can process is the same as the task type corresponding to the user intention;
a scheduling unit 604, configured to, if the task type of the associated task that the first robot can process is different from the task type corresponding to the user intention, obtain the real-time location of the user and schedule a second robot to travel to the real-time location, so that the second robot responds to the user intention of the user.
In some embodiments of the present application, the scheduling unit 604 is further specifically configured to: screen out one or more second robots in the application scene where the first robot is located, wherein the task type of the associated task that the second robots can process is the same as the task type corresponding to the user intention; and schedule, among the one or more second robots, the second robot that has the shortest path to the first robot and whose use state is an idle state to travel to the real-time location, and cause that second robot to perform human-computer interaction with the user to respond to the user intention, wherein the use state of the second robot comprises the idle state and a scheduled-occupied state.
In some embodiments of the present application, the scheduling unit 604 is further specifically configured to: establish a communication connection with the one or more second robots based on a preset channel, and broadcast a request reply instruction through the communication connection; receive feedback information fed back by a second robot in response to the request reply instruction, and determine, according to the feedback information, the second robots whose use state is an idle state; and schedule the idle second robot with the shortest path to the first robot to travel to the real-time location.
In some embodiments of the application, the request reply instruction is configured to instruct a second robot to respond to the instruction and feed back the feedback information only if its use state is an idle state, and the scheduling unit 604 is further specifically configured to: receive the feedback information, and determine that any second robot that fed back the feedback information is in an idle state.
In some embodiments of the present application, the robot scheduling apparatus 600 further includes a task execution unit configured to: and if the task type of the associated task which can be processed by the first robot is the same as the task type corresponding to the response of the user intention, responding the user intention of the user based on the first robot.
In some embodiments of the present application, the robot scheduling apparatus 600 further includes a monitoring unit, configured to receive, by the second robot, a control instruction triggered by the user; executing the control instruction; and acquiring the execution condition of executing the control command, and sending the execution condition to a user terminal associated with the user.
In some embodiments of the present application, the judging unit 603 is further configured to: acquire the task type of the associated task that the first robot can process; compute the relevant matching value between that task type and the task type corresponding to the user intention; if the relevant matching value is greater than or equal to the preset matching value, determine that the task type of the associated task that the first robot can process is the same as the task type corresponding to the user intention; otherwise, determine that the two task types are different.
It should be noted that, for convenience and simplicity of description, the specific working process of the robot scheduling apparatus 600 may refer to the corresponding process of the method described in fig. 1 to fig. 5, and is not described herein again.
Fig. 7 is a schematic view of a robot according to an embodiment of the present application. The robot 7 may include: a processor 70, a memory 71, and a computer program 72, such as a robot scheduling program, stored in the memory 71 and executable on the processor 70. The processor 70, when executing the computer program 72, implements the steps in the robot scheduling method embodiments described above, such as steps S101 to S104 shown in fig. 1. Alternatively, the processor 70, when executing the computer program 72, implements the functions of each module/unit in the device embodiments described above, for example the functions of the units 601 to 604 shown in fig. 6.
The computer program may be divided into one or more modules/units, which are stored in the memory 71 and executed by the processor 70 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program in the robot.
For example, the computer program may be divided into an identification unit, a determining unit, a judging unit, and a scheduling unit, whose specific functions are as follows: the recognition unit is used for collecting voice data of a user based on the first robot and recognizing the user intention indicated by the voice data; the determining unit is used for determining, according to the user intention, the task type corresponding to the user intention; the judging unit is used for judging whether the task type of the associated task that the first robot can process is the same as the task type corresponding to the user intention; and the scheduling unit is used for acquiring the real-time position of the user if the task type of the associated task that the first robot can process is different from the task type corresponding to the user intention, and scheduling a second robot to travel to the real-time position so that the second robot responds to the user intention of the user.
The robot may include, but is not limited to, a processor 70, a memory 71. Those skilled in the art will appreciate that fig. 7 is merely an example of a robot and is not intended to be limiting and may include more or fewer components than those shown, or some components in combination, or different components, for example the robot may also include input output devices, network access devices, buses, etc.
The Processor 70 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may be an internal storage unit of the robot, such as a hard disk or a memory of the robot. The memory 71 may also be an external storage device of the robot, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the robot. Further, the memory 71 may also include both an internal storage unit and an external storage device of the robot. The memory 71 is used for storing the computer program and other programs and data required by the robot. The memory 71 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division of the functional units and modules described above is illustrated. In practical applications, the functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for ease of distinguishing them from one another and do not limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, each embodiment is described with its own emphasis; for parts not described or detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/robot and method may be implemented in other ways. For example, the apparatus/robot embodiments described above are merely illustrative: the division into modules or units is only one logical division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or of another form.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated modules/units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods in the above embodiments may be implemented by a computer program instructing the related hardware; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application, and are intended to fall within the protection scope of the present application.

Claims (10)

1. A robot scheduling method, comprising:
collecting voice data of a user based on a first robot, and identifying the user intention pointed to by the voice data;
determining a task type corresponding to the user intention according to the user intention;
judging whether the task type of the associated task which can be processed by the first robot is the same as the task type corresponding to the user intention;
and if the task type of the associated task which can be processed by the first robot is different from the task type corresponding to the user intention, acquiring the real-time position of the user, and scheduling a second robot to travel to the real-time position according to the real-time position, so that the second robot responds to the user intention of the user.
2. The robot scheduling method of claim 1, wherein the scheduling a second robot to travel to the real-time position according to the real-time position, so that the second robot responds to the user intention of the user, comprises:
screening out one or more second robots in the application scene where the first robot is located, wherein the task type of the associated task which can be processed by the second robots is the same as the task type corresponding to the user intention;
and scheduling, from the one or more second robots, the second robot which has the shortest path to the first robot and whose use state is the idle state to travel to the real-time position, and causing that second robot to perform human-computer interaction with the user so as to respond to the user intention of the user, wherein the use state of a second robot comprises an idle state and a scheduled-occupied state.
3. The robot scheduling method of claim 2, wherein the scheduling, from the one or more second robots, the second robot which has the shortest path to the first robot and whose use state is the idle state to travel to the real-time position comprises:
establishing a communication connection with the one or more second robots based on a preset channel, and broadcasting a request reply instruction through the communication connection;
receiving feedback information fed back by a second robot in response to the request reply instruction, and determining, according to the feedback information, the second robot whose use state is the idle state;
and scheduling the second robot which has the shortest path to the first robot and is in the idle state to travel to the real-time position.
4. The robot scheduling method according to claim 3, wherein the request reply instruction is used to instruct a second robot to respond to the request reply instruction and feed back the feedback information if its own use state is the idle state;
the receiving feedback information fed back by the second robot in response to the request reply instruction and determining, according to the feedback information, the second robot whose use state is the idle state comprises:
receiving the feedback information, and determining that the second robot which feeds back the feedback information is a second robot whose use state is the idle state.
5. The robot scheduling method of any one of claims 1 to 4, wherein the method further comprises:
and if the task type of the associated task which can be processed by the first robot is the same as the task type corresponding to the user intention, responding to the user intention of the user based on the first robot.
6. The robot scheduling method of any one of claims 1 to 4, wherein the second robot responding to the user intention of the user comprises:
the second robot receives a control instruction triggered by the user;
executing the control instruction;
and acquiring the execution status of the control instruction, and sending the execution status to a user terminal associated with the user.
7. The robot scheduling method of any one of claims 1 to 4, wherein the judging whether the task type of the associated task which can be processed by the first robot is the same as the task type corresponding to the user intention comprises:
acquiring the task type of the associated task which can be processed by the first robot;
determining a correlation matching value between the task type of the associated task which can be processed by the first robot and the task type corresponding to the user intention;
if the correlation matching value is greater than or equal to a preset matching value, determining that the task type of the associated task which can be processed by the first robot is the same as the task type corresponding to the user intention;
otherwise, determining that the task type of the associated task which can be processed by the first robot is different from the task type corresponding to the user intention.
8. A robot scheduling apparatus, comprising:
the recognition unit is used for collecting voice data of a user based on the first robot and recognizing the user intention pointed to by the voice data;
the determining unit is used for determining, according to the user intention, the task type corresponding to the user intention;
the judging unit is used for judging whether the task type of the associated task which can be processed by the first robot is the same as the task type corresponding to the user intention;
and the scheduling unit is used for acquiring the real-time position of the user if the task type of the associated task which can be processed by the first robot is different from the task type corresponding to the user intention, and scheduling a second robot to travel to the real-time position according to the real-time position, so that the second robot responds to the user intention of the user.
9. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
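As a non-limiting aid to reading claims 3, 4, 6, and 7, the following Python sketch renders the idle-state query, the execution report, and the matching-value test. The channel API (broadcast, collect_replies), the message format, the similarity() stand-in, and the threshold value are all editorial assumptions; the claims deliberately leave these details open.

    # Hedged sketch of claims 3-4 (broadcast request-reply on a preset channel),
    # claim 6 (execute a user-triggered instruction and report back), and
    # claim 7 (compare a matching value against a preset value).
    # All APIs and constants below are hypothetical illustrations.

    MATCH_THRESHOLD = 0.8  # the "preset matching value" of claim 7 (illustrative)

    def similarity(a: str, b: str) -> float:
        # Trivial stand-in: exact match scores 1.0. A real system might use
        # a task-type taxonomy or embedding distance instead.
        return 1.0 if a == b else 0.0

    def task_types_match(robot_task_type: str, intent_task_type: str) -> bool:
        # Claim 7: the types count as "the same" when the correlation matching
        # value reaches the preset matching value.
        return similarity(robot_task_type, intent_task_type) >= MATCH_THRESHOLD

    def find_idle_second_robots(channel, candidates):
        # Claims 3-4: broadcast a request reply instruction over a preset
        # channel; only robots whose use state is idle feed anything back,
        # so every robot that answers can be taken to be idle.
        channel.broadcast({"type": "request_reply"})
        replies = channel.collect_replies(timeout_s=2.0)  # hypothetical API
        idle_ids = {msg["robot_id"] for msg in replies}
        return [r for r in candidates if r.robot_id in idle_ids]

    def execute_and_report(second_robot, user_terminal):
        # Claim 6: receive a control instruction triggered by the user,
        # execute it, and send the execution status to the user's terminal.
        instruction = second_robot.receive_instruction()
        status = second_robot.execute(instruction)
        user_terminal.send(status)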
CN202011050708.XA 2020-09-29 2020-09-29 Robot scheduling method and device, robot and storage medium Pending CN112247987A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011050708.XA CN112247987A (en) 2020-09-29 2020-09-29 Robot scheduling method and device, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011050708.XA CN112247987A (en) 2020-09-29 2020-09-29 Robot scheduling method and device, robot and storage medium

Publications (1)

Publication Number Publication Date
CN112247987A (en) 2021-01-22

Family

ID=74233987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011050708.XA Pending CN112247987A (en) 2020-09-29 2020-09-29 Robot scheduling method and device, robot and storage medium

Country Status (1)

Country Link
CN (1) CN112247987A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130068603A (en) * 2011-12-15 2013-06-26 한국전자통신연구원 Method for assigning roles to multi robot
KR101668078B1 (en) * 2015-04-23 2016-10-19 국방과학연구소 Autonomous robot collaborative system and method
CN106647763A (en) * 2017-01-06 2017-05-10 深圳优地科技有限公司 Robot scheduling method, apparatus and server
WO2020141635A1 (en) * 2019-01-03 2020-07-09 엘지전자 주식회사 Control method for robot system
CN109986563A (en) * 2019-05-01 2019-07-09 湖南大学 A kind of multiple mobile robot's work compound method and system
US20200023511A1 (en) * 2019-05-30 2020-01-23 Lg Electronics Inc. Master robot for controlling slave robot and driving method thereof
CN111191931A (en) * 2019-12-30 2020-05-22 深圳优地科技有限公司 Method and device for distributing tasks of multiple robots and terminal equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114267356A (en) * 2021-12-30 2022-04-01 重庆特斯联智慧科技股份有限公司 Man-machine interaction logistics robot and control method thereof
CN114267356B (en) * 2021-12-30 2024-04-02 重庆特斯联智慧科技股份有限公司 Man-machine interaction logistics robot and control method thereof
WO2024010217A1 (en) * 2022-07-05 2024-01-11 삼성전자주식회사 Robot for performing specific service and control method thereof

Similar Documents

Publication Title
CN109095297B (en) Ladder taking method, intelligent device and cloud server
CN112247987A (en) Robot scheduling method and device, robot and storage medium
CN106240385B (en) Method and apparatus for charging station monitoring
CN109905545B (en) Message processing method, terminal and computer readable storage medium
CN101996099A (en) Method and system for processing information
EP3157003B1 (en) Terminal control method and device, voice control device and terminal
EP3824615B1 (en) Call management system for a command center
CN111942977B (en) High-rise elevator control method, device, equipment and readable storage medium
CN109704161B (en) Intelligent equipment-based elevator calling method, device, system and storage medium
CN110790097A (en) Generating control signals to a conveyor system
CN111880887B (en) Message interaction method and device, storage medium and electronic equipment
CN115545586B (en) OHT (overhead hoist transport vehicle) scheduling method, device and terminal
CN112561362A (en) Order scheduling method, system, terminal and storage medium for unmanned delivery system
CN113705943B (en) Task management method and system based on voice intercom function and mobile device
US10708429B2 (en) Call management system for a dispatch center
CN103281402B (en) A kind of exhibition long distance service system based on intelligent showcase terminal and method thereof
CN103822433A (en) Information processing method and refrigerator
CN110969384B (en) Method and device for distributing articles, storage medium and electronic equipment
CN112497212A (en) Robot elevator taking method and device, electronic equipment and storage medium
CN114677076A (en) Takeout relay dispatching method, device and system based on robot and storage medium
CN114819921A (en) Mold ex-warehouse method, device, equipment and readable storage medium
CN113935629A (en) Task processing method and device and storage medium
CN112456260B (en) Elevator control method, system, electronic device and storage medium
CN114429679B (en) Service place management method and system
CN109754204B (en) Block synchronization method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210122