CN112894824B - Robot control method and robot - Google Patents

Robot control method and robot

Info

Publication number
CN112894824B
Authority
CN
China
Prior art keywords
robot
prompt
user
controlling
operation instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110172924.XA
Other languages
Chinese (zh)
Other versions
CN112894824A (en)
Inventor
李泽华
张涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Pudu Technology Co Ltd
Original Assignee
Shenzhen Pudu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Pudu Technology Co Ltd filed Critical Shenzhen Pudu Technology Co Ltd
Priority to CN202110172924.XA
Publication of CN112894824A
Application granted
Publication of CN112894824B

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The embodiments of the invention disclose a robot control method and a robot, applied in the technical field of robots. When the robot runs to a preset key step, the waiting duration for a user operation instruction is detected; if the waiting duration exceeds a preset duration, the trigger condition of the operation instruction corresponding to the current key step is detected, and the robot is controlled to issue prompt information prompting the user to send the operation instruction that triggers the next operation.

Description

Robot control method and robot
Technical Field
The invention belongs to the technical field of robots, and particularly relates to a robot control method and a robot.
Background
Robots are a recent development in the field of automatic control and are increasingly used in restaurants, hotels, shopping malls, libraries, homes, and other settings. For users to make good use of a robot, the design of its user interaction interface is very important.
In the prior art, usage instructions are posted on the robot body or at the position where the robot is parked. However, users differ in how familiar they are with the robot's work flow; new users in particular are unfamiliar with it, and many unforeseen situations may arise while the robot completes a task. Usage instructions alone therefore cannot cover all of the control operations needed to keep the robot working normally, which reduces the convenience of operation and control and degrades the robot's operating effect.
Disclosure of Invention
The invention provides a robot control method and a robot, aiming to solve the problems that operating the robot is inconvenient for the user and that the robot's operating effect suffers as a result.
The embodiment of the invention provides a robot control method, which comprises the following steps: when the robot runs to a preset key step, detecting the waiting time for a user to send an operation instruction; if the waiting time length exceeds the preset time length, detecting a trigger condition of an operation instruction corresponding to the current key step; and controlling the robot to send prompt information corresponding to the trigger condition, wherein the prompt information is used for prompting the user to send an operation instruction for triggering the robot to perform the next operation.
An embodiment of the present invention further provides a robot, including: the first detection module is used for detecting the waiting time of waiting for the user to send an operation instruction when the robot runs to a preset key step; the second detection module is used for detecting a trigger condition of an operation instruction corresponding to the current key step if the waiting time length exceeds a preset time length; and the control module is used for controlling the robot to send prompt information corresponding to the trigger condition, and the prompt information is used for prompting the user to send an operation instruction for triggering the robot to perform the next operation.
An embodiment of the present invention further provides a robot, including: a memory and a processor; the memory stores executable program code; the processor, coupled to the memory, invokes the executable program code stored in the memory to perform the robot control method as described above.
From the foregoing embodiments of the invention it can be seen that, when the robot runs to a preset key step, the waiting duration for the user to send an operation instruction is detected. If the waiting duration exceeds the preset duration, the trigger condition of the operation instruction corresponding to the current key step is detected, and the robot is controlled to issue prompt information corresponding to that trigger condition, which prompts the user to send the operation instruction triggering the robot's next operation. By prompting the user at the key steps of the robot's run, the robot can smoothly carry out its next operation, the convenience of operating the robot is improved, the efficiency of the robot's run is increased, and the robot completes its tasks more effectively.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention.
Fig. 1 is a flowchart of a robot control method according to an embodiment of the present invention;
Fig. 2 is a flowchart of a robot control method according to another embodiment of the present invention;
Fig. 3 is a schematic diagram of a robot interaction interface provided by an embodiment of the invention;
Fig. 4 is a flowchart of a robot control method according to another embodiment of the present invention;
Fig. 5 is a schematic diagram of a robot according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a hardware structure of a robot according to an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to Fig. 1, an implementation flowchart of a robot control method according to an embodiment of the present invention is provided. The method is applicable to a robot; the execution subject of the method is a controller of the robot, which may specifically be the robot's Central Processing Unit (CPU). As shown in Fig. 1, the method includes:
s101, when the robot runs to a preset key step, detecting the waiting time for the user to send an operation instruction;
the method comprises the steps of presetting key steps in a robot system, wherein the key steps are steps which require a user to input an operation instruction and can be continuously operated by the robot. Key steps may include, but are not limited to: the method comprises the steps of starting a task of the robot, inputting a next operation instruction, confirming the next operation instruction, finishing the task of the robot and the like.
The robot can determine the current step of the task it is executing through sensors such as touch, visual, force, proximity, ultrasonic, and auditory sensors. When the current step is a preset key step, the waiting duration is measured, i.e. the time spent waiting for the user to send an operation instruction.
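As a rough illustration of step S101, the following Python sketch (all names are hypothetical; the real step-recognition logic is robot- and sensor-specific) shows how a controller might check whether the current step is a preset key step and start timing the wait:

```python
import time

# Hypothetical set of preset key-step names stored in the robot system.
KEY_STEPS = {"start_task", "enter_next_instruction", "confirm_next_instruction", "finish_task"}

def current_step_from_sensors(read_sensors):
    """Infer the current step of the task from fused sensor readings (placeholder logic)."""
    readings = read_sensors()            # touch, vision, force, proximity, ultrasonic, hearing
    return readings.get("current_step")  # the real mapping is robot-specific

def start_waiting_if_key_step(read_sensors):
    """If the current step is a preset key step, record when waiting began."""
    if current_step_from_sensors(read_sensors) in KEY_STEPS:
        return time.monotonic()          # timestamp used to measure the waiting duration
    return None
```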
S102, if the waiting time exceeds a preset time, detecting a trigger condition of an operation instruction corresponding to the current key step;
the preset time is a threshold value of the waiting time, the preset time is preset in the robot system, and the triggering condition for detecting the operation instruction corresponding to the current key step is triggered when the waiting time exceeds the preset time.
Different key steps may use different preset durations or the same one. For example, waiting for the user to tap the "go" button may be limited to 10 seconds, while waiting for the user to enter the meal delivery seat number may be limited to 15 seconds.
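A minimal sketch of this timing logic in Python, assuming a hypothetical per-step timeout table and an `instruction_received` callback supplied by the interaction layer (both are illustrative, not part of the patented implementation):

```python
import time

# Hypothetical per-step preset durations (seconds), following the examples above.
KEY_STEP_TIMEOUTS = {
    "confirm_go": 10,        # waiting for the user to tap the "go" button
    "enter_seat_number": 15  # waiting for the user to enter the delivery seat number
}

def wait_for_instruction(step_name, instruction_received, poll_interval=0.5):
    """Wait for a user instruction at a key step; return True if it arrives in time."""
    timeout = KEY_STEP_TIMEOUTS.get(step_name, 10)
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if instruction_received():       # callback supplied by the interaction layer
            return True
        time.sleep(poll_interval)
    return False  # timeout exceeded: caller should detect the trigger condition and prompt
```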
And S103, controlling the robot to play a prompt voice corresponding to the trigger condition, wherein the prompt voice is used for prompting the user to send an operation instruction for triggering the robot to perform the next operation.
A voice broadcast device of the robot is controlled to play a preset prompt voice corresponding to the trigger condition. Specifically, several prompt voices are stored in the robot's memory, and the content of each corresponds to a different trigger condition. For example, if the trigger condition is entering the meal delivery seat number, the corresponding prompt voice is "please enter the meal delivery seat number"; if the trigger condition is taking the item away and confirming that the current task is finished, the corresponding prompt voice is "please take the item away and confirm completion".
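The trigger-condition-to-prompt lookup described above could be sketched as follows; the condition names, prompt texts and the `tts_play` callable are assumptions made purely for illustration:

```python
# Hypothetical mapping from trigger conditions to stored prompt-voice texts,
# mirroring the two examples given in the description.
PROMPT_VOICES = {
    "enter_delivery_seat_number": "Please enter the meal delivery seat number",
    "take_item_and_confirm": "Please take the item away and confirm completion",
}

def play_prompt(trigger_condition, tts_play):
    """Look up and play the prompt voice for a detected trigger condition."""
    text = PROMPT_VOICES.get(trigger_condition)
    if text is not None:
        tts_play(text)  # tts_play stands in for the robot's voice-broadcast device
```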
In step S102, if the trigger condition of the operation instruction corresponding to the current key step is detected, the robot is controlled to send out a prompt message corresponding to the trigger condition, so as to prompt the user to send out an operation instruction capable of triggering the robot to perform the next operation.
The prompt information can include a prompt voice and a prompt image-text, where the prompt image-text includes prompt words and/or prompt pictures.
In this embodiment of the invention, when the robot runs to a preset key step, the waiting duration for the user's operation instruction is detected. If the waiting duration exceeds the preset duration, the trigger condition of the operation instruction corresponding to the current key step is detected, and the robot is controlled to issue prompt information corresponding to that trigger condition, prompting the user to send the operation instruction that triggers the robot's next operation. By prompting the user at the key steps of the robot's run, the robot can smoothly execute its next operation, the convenience of operating the robot is improved, the robot's operating efficiency is increased, and the robot completes its tasks more effectively.
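Putting S101 to S103 together, one possible (purely illustrative) control-flow sketch for a single key step is:

```python
def run_key_step(step, wait_for_instruction, detect_trigger_condition, issue_prompt):
    """One pass of the S101-S103 flow for a single preset key step.

    All three callables are placeholders for robot-specific implementations:
    - wait_for_instruction(step) -> bool     S101: wait up to the preset duration
    - detect_trigger_condition(step) -> str  S102: which instruction is still missing
    - issue_prompt(condition) -> None        S103: voice and/or on-screen prompt
    """
    if wait_for_instruction(step):
        return                                    # instruction arrived in time; continue normally
    condition = detect_trigger_condition(step)    # waiting duration exceeded
    issue_prompt(condition)                       # prompt the user for the next instruction
```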
Referring to Fig. 2, a flowchart of a robot control method according to another embodiment of the present invention is shown. The method can be applied to a robot, with the robot's controller as the execution subject. As shown in Fig. 2, the method includes:
s201, confirming a task scene of the task, and acquiring information of a preset key step corresponding to the task scene;
the task scenario refers to the external environment where the robot is required to run to complete the task. The key steps of different task scenes are different, for example, the task of delivering food in a restaurant is different from the task of delivering daily supplies on different floors of a hotel, and the key steps of delivering the daily supplies in the hotel are related to getting on and off the elevator.
The task scene of the task is confirmed from the task name, the task target point, or other keywords related to the task scene of the task being executed, and the preset key-step information corresponding to that task scene is obtained from the robot system. The key-step information includes the number, names, contents, and execution order of the key steps.
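A hypothetical sketch of this scene-to-key-step lookup; the scene names and step lists are invented for illustration, loosely following the restaurant and hotel examples above:

```python
# Hypothetical mapping from task scenes to their preset key steps (S201).
SCENE_KEY_STEPS = {
    "restaurant_delivery": ["load_items", "enter_seat_number", "confirm_go", "take_items_confirm"],
    "hotel_floor_delivery": ["load_items", "enter_room_number", "confirm_go",
                             "enter_elevator", "exit_elevator", "take_items_confirm"],
}

def key_steps_for_task(task_keywords):
    """Infer the task scene from keywords (task name, target point, etc.) and
    return its preset key-step information."""
    for scene, steps in SCENE_KEY_STEPS.items():
        if any(word in scene for word in task_keywords):
            return scene, steps
    return None, []
```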
S202, when the robot runs to a preset key step, detecting the waiting time for the user to send an operation instruction;
s203, if the waiting time exceeds the preset time, detecting a trigger condition of an operation instruction corresponding to the current key step;
and S204, if the triggering condition is that the operation instruction is set on the interactive interface, detecting the information of the current interactive interface, controlling the robot to play a prompt voice corresponding to the operation instruction set on the interactive interface, and/or controlling the robot to display a prompt image and text.
Specifically, if the current interactive interface includes an operation information input interface and either no operation information or incorrect operation information that does not match the operation instruction is detected on it, this indicates that the user has not yet started to issue the operation instruction, or has issued one that foreseeably cannot be executed. The robot is then controlled to play a first prompt voice and/or display a first prompt image-text, both of which prompt the user to input the operation information, i.e. the operation information for the robot's next operation, so that the next operation instruction is issued. For example, if the current interactive interface includes an input interface for the target point of the current task and, after waiting 15 seconds, it is detected that the user has entered no target point information or an incorrect target point, the first prompt voice corresponding to the target point of the current task is played, and/or the corresponding first prompt image-text is displayed on the display interface (or both at once). The content of the first prompt voice and image-text may be "please enter the seat number" or "please enter the correct seat number".
Further, if the current interactive interface includes an operation information input interface and operation information matching the operation instruction is detected on it, this indicates that the user has entered the correct operation information but has not yet confirmed sending the operation instruction. The robot is then controlled to play a second prompt voice and/or display a second prompt image-text, which prompt the user to trigger the confirmation instruction corresponding to the entered operation information, so that the operation instruction for the next operation is sent. To guide the user and make the operation convenient, the operation area corresponding to the confirmation instruction, which may be a virtual key, is highlighted.
For example, as shown in Fig. 3, the current interactive interface 10 includes an input interface 11 for the target point of the task. When it is detected that the user has entered the correct target point on the target point input interface 11 but has not tapped the operation area 12 corresponding to the confirmation instruction to confirm execution of the next operation, the robot is controlled to play a second prompt voice whose content is "please tap start", while the operation area 12 displays "start" and is lit up or flashed to highlight it.
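The two interactive-interface branches described above (first prompt when input is missing or invalid, second prompt plus highlighting when input is present but unconfirmed) might be sketched as follows; `ui`, `speak` and `show` are placeholders for the robot's interface state and output channels:

```python
def prompt_for_interface_state(ui, speak, show):
    """S204 branch logic for an interactive-interface trigger condition (a sketch;
    ui, speak and show stand in for the robot's UI state and output facilities)."""
    entered = ui.get("entered_target")      # e.g. the seat number typed so far
    valid = ui.get("is_valid", False)
    confirmed = ui.get("confirm_pressed", False)

    if not entered or not valid:
        # First prompt: nothing entered, or an unusable target was entered.
        speak("Please enter the correct seat number")
        show("Please enter the correct seat number")
    elif not confirmed:
        # Second prompt: correct input present, confirmation still missing.
        speak("Please tap Start")
        show("Please tap Start")
        ui["highlight"] = "start_button"    # light up or flash the confirm area (Fig. 3, area 12)
```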
In this embodiment of the invention, the task scene of the task is confirmed and the preset key-step information corresponding to that scene is acquired, so that different key steps are prompted in different task scenes, making the prompting more intelligent. When the robot runs to a preset key step and the waiting duration for the user's operation instruction exceeds the preset duration, the trigger condition of the operation instruction corresponding to the current key step is detected. If the trigger condition is that the operation instruction is set on an interactive interface, the robot is controlled to play the prompt voice and/or display the prompt image-text corresponding to that operation instruction, prompting the user to set the operation instruction on the interactive interface and thereby trigger the robot's next operation. This improves the convenience of operating the robot, increases the robot's operating efficiency, and improves how effectively the robot completes its tasks.
Referring to Fig. 4, a flowchart of a robot control method according to another embodiment of the present invention is shown. The method can be applied to a robot, with the robot's controller as the execution subject. As shown in Fig. 4, the method includes:
s301, confirming a task scene of the task, and acquiring information of a preset key step corresponding to the task scene;
s302, when the robot runs to a preset key step, detecting the waiting time for the user to send an operation instruction;
s303, if the waiting time exceeds a preset time, detecting a trigger condition of an operation instruction corresponding to the current key step;
and S304, if the triggering condition comprises the change of the robot state, controlling the robot to repeatedly play a prompting voice and/or a prompting image-text corresponding to the prompt of the user to change the robot state according to a preset prompting period.
Specifically, a robot state change includes a load change or a pose change. A load change means the load becomes heavier or lighter, or the number of loaded items increases or decreases; a pose change includes a change of position or a change of direction.
If the trigger condition includes a change in the robot state, the robot is controlled to repeatedly play a third prompt voice and/or repeatedly display a third prompt image-text according to a preset prompt period. The third prompt voice and third prompt image-text prompt the user to change the robot's load so as to trigger the robot's next operation. When the user also needs to input a confirmation instruction, the operation area corresponding to that instruction is lit up or flashed.
When the trigger condition is a change in the robot's load, in one specific implementation the user must place the task's load on the robot when it starts from the starting point, and the increase in the robot's load is the trigger condition for the next step of entering the task's target point. If the waiting duration for the robot's load to increase exceeds the preset duration, the third prompt voice is repeatedly played and/or the third prompt image-text is repeatedly displayed according to a preset prompt period, which may be 5 seconds. The content of the third prompt voice and image-text may be "please place the items on the robot's tray"; that is, while no load increase is detected, the third prompt voice is played and/or the third prompt image-text is displayed once every 5 seconds.
In another specific implementation, when the robot reaches the target point, the user must take the task's load off the robot, and the decrease in the robot's load is the trigger condition for the next step of ending the task. If the waiting duration for the robot's load to decrease exceeds the preset duration, the third prompt voice is repeatedly played and/or the third prompt image-text is repeatedly displayed according to a preset prompt period, which may be 10 seconds. The content of the third prompt voice and image-text may be "please take the items from the robot's tray and tap complete"; that is, while no load decrease is detected, the third prompt voice is played and/or the third prompt image-text is displayed once every 10 seconds. At the same time, the "complete" button on the interactive interface is lit up or flashed.
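The periodic prompting used in both load-change cases can be sketched as a simple loop; the function and parameter names are assumptions, and `condition_met` stands in for the robot's load-change detection:

```python
import time

def repeat_prompt_until(condition_met, speak, text, period_s=10, max_repeats=30):
    """Repeat a prompt every `period_s` seconds until the state change is detected
    (a sketch of the periodic prompting described above; all names are placeholders)."""
    for _ in range(max_repeats):
        if condition_met():          # e.g. tray load decreased / increased
            return True
        speak(text)                  # and/or refresh the on-screen prompt image-text
        time.sleep(period_s)
    return False
```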
When the robot's pose changes, in one specific implementation the robot encounters an obstacle it cannot pass, or has a fault that can be cleared on site. The user is then required to change the robot's direction of travel or move it away from its current position so that it can continue around the obstacle, or to move it and then restart it to clear the fault. In this case, the change of the robot's direction or position is the trigger condition for the next step of re-planning the path and travelling along the planned path. If the waiting duration for the robot's direction or position to change exceeds the preset duration, a fourth prompt voice is repeatedly played and/or a fourth prompt image-text is repeatedly displayed according to a preset prompt period, which may be 15 seconds. The content of the fourth prompt voice and image-text may be "please help the robot change direction and go around the obstacle", "please move the robot to the working area", or "please move the robot to the working area and restart it"; that is, while no change in the robot's direction or position is detected, the fourth prompt voice is played and/or the fourth prompt image-text is displayed once per prompt period, and when the user also needs to tap a restart or confirmation control, the corresponding operation area is highlighted.
In this embodiment of the invention, the task scene of the task is confirmed and the preset key-step information corresponding to that scene is acquired, so that different key steps are prompted in different task scenes, making the prompting more intelligent. When the robot runs to a preset key step and the waiting duration for the user's operation instruction exceeds the preset duration, the trigger condition of the operation instruction corresponding to the current key step is detected. If the trigger condition is a change in the robot state, the robot is controlled to repeatedly play a prompt voice and/or display a prompt image-text, according to the preset prompt period, prompting the user to change the robot state and thereby trigger the robot's next operation. This improves the convenience of operating the robot, increases the robot's operating efficiency, and improves how effectively the robot completes its tasks.
Referring to fig. 5, a schematic structural diagram of a robot according to an embodiment of the present invention is provided. For convenience of explanation, only portions related to the embodiments of the present invention are shown. The robot includes:
the first detection module 401 is used for detecting the waiting duration for the user to send an operation instruction when the robot runs to a preset key step;
a second detecting module 402, configured to detect a trigger condition of an operation instruction corresponding to the current key step if the waiting duration exceeds a preset duration;
and a control module 403, configured to control the robot to send prompt information corresponding to the trigger condition, where the prompt information is used to prompt the user to send an operation instruction for triggering the robot to perform the next operation.
Further, if the triggering condition is that the operation instruction is set on the interactive interface, the control module 403 is further configured to detect information of the current interactive interface, control the robot to play a prompt voice corresponding to the operation instruction set on the interactive interface, and/or control the robot to display the prompt text.
The control module 403 is further configured to, if the current interactive interface includes an operation information input interface, and no operation information is detected or operation information that is not matched with the operation instruction is detected on the operation information input interface, control the robot to play a first prompt voice prompting the user to input the operation information, and/or control the robot to display a first prompt image-text prompting the user to input the operation information.
The control module 403 is further configured to, if the current interaction interface includes an operation information input interface, and operation information matched with the operation instruction is detected on the operation information input interface, control the robot to play a second prompt voice, and/or control the robot to display a second prompt image-text, where the second prompt voice and the second prompt image-text are used to prompt a user to trigger a confirmation instruction corresponding to the operation information, so as to complete setting of the operation instruction.
Further, the control module 403 is further configured to highlight or flash the operation area corresponding to the confirmation instruction.
Further, if the triggering condition includes that the robot load changes, the control module 403 is further configured to control the robot to repeatedly play a third prompt voice prompting the user to change the robot load according to a preset prompt period, and/or control the robot to repeatedly display a third prompt text prompting the user to change the robot load.
Further, if the triggering condition includes that the pose of the robot changes, the control module 403 is further configured to control the robot to play a fourth prompt voice prompting the user to change the current direction or the current position of the robot, and/or control the robot to display a fourth prompt image-text prompting the user to change the current direction or the current position of the robot.
Further, the control module 403 is further configured to confirm a task scene of the task, and acquire information of the preset key step corresponding to the task scene.
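Read together with Fig. 5, the module structure could be sketched as a small composition of the three modules; the class and method names below are illustrative only, not the patented implementation:

```python
class Robot:
    """Sketch of the module structure in Fig. 5: the three modules are composed
    by the robot and invoked in sequence for each preset key step."""

    def __init__(self, first_detection, second_detection, control):
        self.first_detection = first_detection    # measures the waiting duration (module 401)
        self.second_detection = second_detection  # finds the trigger condition (module 402)
        self.control = control                    # issues the prompt information (module 403)

    def handle_key_step(self, step, preset_duration):
        waited = self.first_detection.wait_duration(step)
        if waited > preset_duration:
            condition = self.second_detection.trigger_condition(step)
            self.control.issue_prompt(condition)
```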
For details of this embodiment, reference is made to the description of the embodiment shown in fig. 1 to 4.
In this embodiment of the invention, when the robot runs to a preset key step, the waiting duration for the user's operation instruction is detected. If the waiting duration exceeds the preset duration, the trigger condition of the operation instruction corresponding to the current key step is detected, and the robot is controlled to issue prompt information corresponding to that trigger condition, prompting the user to send the operation instruction that triggers the robot's next operation. By prompting the user at the key steps of the robot's run, the robot can smoothly execute its next operation, the convenience of operating the robot is improved, the robot's operating efficiency is increased, and the robot completes its tasks more effectively.
Further, an embodiment of the present invention also provides a robot, including: a memory 100 and a processor 200, wherein the memory 100 stores executable program code, and the processor 200, coupled to the memory 100, calls the executable program code stored in the memory 100 to execute the robot control method provided by the embodiments shown in Fig. 1, Fig. 2 and Fig. 4. The processor 200 may be the controller of the robot that serves as the execution subject of the robot control method.
Wherein the executable program code comprises modules in the robot as described in the embodiment shown in fig. 5 above, such as: a first detection module 401, a second detection module 402 and a control module 403.
Further, an embodiment of the present invention also provides a computer-readable storage medium, which may be the memory provided in the robot of the embodiment shown in Fig. 6. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the robot control method described in the embodiments shown in Fig. 1, Fig. 2 and Fig. 4. The computer-readable storage medium may be any medium that can store program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In view of the above description of the robot control method and the robot provided by the present invention, those skilled in the art will recognize that changes may be made in the embodiments and applications of the invention, and the description of the invention should not be construed as limiting the invention.

Claims (9)

1. A robot control method, comprising:
confirming a task scene of the task, and acquiring information of preset key steps corresponding to the task scene, wherein the task scene is the external environment in which the robot is required to run to complete the task, and each task scene has corresponding preset key steps;
when the robot runs to the preset key step, detecting the waiting time for the user to send an operation instruction; the key step is a step at which the robot can continue to operate only after the user inputs an operation instruction; the operation instruction is input by the user on an interactive interface of the robot and is used for instructing the next operation of the robot;
if the waiting time length exceeds the preset time length, detecting a trigger condition of an operation instruction corresponding to the current key step;
and controlling the robot to send prompt information corresponding to the trigger condition, wherein the prompt information is used for prompting the user to send the operation instruction.
2. The method of claim 1, wherein the prompt information includes a prompt voice and a prompt image-text, and if the trigger condition is that the operation instruction is set on an interactive interface, the controlling the robot to issue the prompt information corresponding to the trigger condition includes:
and detecting the information of the current interactive interface, controlling the robot to play a prompt voice corresponding to the operation instruction set on the interactive interface, and/or controlling the robot to display the prompt image-text.
3. The method according to claim 2, wherein the controlling the robot to play a prompt voice corresponding to the operation instruction set on the interactive interface, and/or the controlling the robot to display the prompt image-text comprises:
and if the current interactive interface comprises an operation information input interface, and no operation information is detected or operation information which is not matched with the operation instruction is detected on the operation information input interface, controlling the robot to play a first prompt voice for prompting a user to input the operation information, and/or controlling the robot to display a first prompt image-text for prompting the user to input the operation information.
4. The method according to claim 2, wherein the controlling the robot to play a prompt voice corresponding to the operation instruction set on the interactive interface, and/or the controlling the robot to display the prompt image-text comprises:
and if the current interactive interface comprises an operation information input interface, and operation information matched with the operation instruction is detected on the operation information input interface, controlling the robot to play a second prompt voice and/or controlling the robot to display a second prompt image-text, wherein the second prompt voice and the second prompt image-text are used for prompting a user to trigger a confirmation instruction corresponding to the operation information so as to complete the setting of the operation instruction.
6. The method of claim 4, further comprising: highlighting with light or flashing the operation area corresponding to the confirmation instruction.
6. The method of claim 1, wherein if the trigger condition comprises a change in the robot load, controlling the robot to issue a prompt corresponding to the trigger condition comprises:
and controlling the robot to repeatedly play a third prompt voice for prompting a user to change the robot load and/or controlling the robot to repeatedly display a third prompt image-text for prompting the user to change the robot load according to a preset prompt period.
7. The method of claim 1, wherein if the trigger condition comprises a change in the pose of the robot, controlling the robot to issue a prompt corresponding to the trigger condition comprises:
and controlling the robot to play a fourth prompt voice for prompting a user to change the current direction or the current position of the robot, and/or controlling the robot to display a fourth prompt image-text for prompting the user to change the current direction or the current position of the robot.
8. A robot, comprising:
the first detection module is used for detecting the waiting time for the user to send an operation instruction when the robot runs to a preset key step; the key step is a step at which the robot can continue to operate only after the user inputs an operation instruction; the operation instruction is input by the user on an interactive interface of the robot and is used for instructing the next operation of the robot;
the second detection module is used for detecting a trigger condition of an operation instruction corresponding to the current key step if the waiting time length exceeds a preset time length;
the control module is used for controlling the robot to send prompt information corresponding to the trigger condition, and the prompt information is used for prompting the user to send the operation instruction;
the control module is further configured to confirm a task scene of the task, and acquire information of preset key steps corresponding to the task scene, wherein the task scene is the external environment in which the robot is required to run to complete the task, and each task scene has corresponding preset key steps.
9. A robot, comprising:
a memory and a processor;
the memory stores executable program code;
the processor, coupled to the memory, invokes the executable program code stored in the memory to perform the robot control method of any of claims 1-7.
CN202110172924.XA 2021-02-08 2021-02-08 Robot control method and robot Active CN112894824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110172924.XA CN112894824B (en) 2021-02-08 2021-02-08 Robot control method and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110172924.XA CN112894824B (en) 2021-02-08 2021-02-08 Robot control method and robot

Publications (2)

Publication Number Publication Date
CN112894824A CN112894824A (en) 2021-06-04
CN112894824B true CN112894824B (en) 2022-11-29

Family

ID=76123977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110172924.XA Active CN112894824B (en) 2021-02-08 2021-02-08 Robot control method and robot

Country Status (1)

Country Link
CN (1) CN112894824B (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679366A (en) * 2013-12-11 2014-03-26 腾讯科技(深圳)有限公司 Method and device for quitting user group
CN106327681B (en) * 2015-06-19 2019-03-01 昆达电脑科技(昆山)有限公司 Point of sale device and point of sales system
US10169006B2 (en) * 2015-09-02 2019-01-01 International Business Machines Corporation Computer-vision based execution of graphical user interface (GUI) application actions
CN207256248U (en) * 2017-04-20 2018-04-20 长荣玩具(东莞)有限公司 The controling circuit structure of dining room meal delivery robot
CN108542227A (en) * 2018-04-28 2018-09-18 四川化工职业技术学院 Automatic food delivery and meal method is received in a kind of intelligent shop towards little Wei food and beverage enterprises
CN108897579A (en) * 2018-06-29 2018-11-27 联想(北京)有限公司 A kind of information processing method, electronic equipment and system
CN108890647A (en) * 2018-08-07 2018-11-27 北京云迹科技有限公司 A kind of automatic food delivery method, apparatus and robot
CN110554696B (en) * 2019-08-14 2023-01-17 深圳银星智能集团股份有限公司 Robot system, robot and robot navigation method based on laser radar

Also Published As

Publication number Publication date
CN112894824A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
US20170242578A1 (en) Method and a device for controlling a moving object, and a mobile apparatus
Correa et al. Multimodal interaction with an autonomous forklift
TWI571792B (en) Touch control method and device for multi - touch terminal
US9182838B2 (en) Depth camera-based relative gesture detection
CN109491562B (en) Interface display method of voice assistant application program and terminal equipment
US11354009B2 (en) Method and apparatus for using gestures across multiple devices
CN105940385B (en) Controlling primary and secondary displays from a single touch screen
EP3194316B1 (en) System and method of initiating elevator service by entering an elevator call
US20100079677A1 (en) Input Apparatus
US20210081029A1 (en) Gesture control systems
KR20150002786A (en) Interacting with a device using gestures
Bischoff et al. Dependable multimodal communication and interaction with robotic assistants
US20170355556A1 (en) System and method of initiating elevator service by entering an elevator call
JP5776544B2 (en) Robot control method, robot control device, and robot
US20170341903A1 (en) System and method of initiating elevator service by entering an elevator call
CN112463000B (en) Interaction method, device, system, electronic equipment and vehicle
EP3371088B1 (en) System and method for initiating elevator service by entering an elevator call
CN109260713A (en) Virtual objects remote assistance operating method and device, storage medium, electronic equipment
CN104184890A (en) Information processing method and electronic device
CN114180428B (en) Method and device for recovering tasks of robot
Pourmehr et al. A robust integrated system for selecting and commanding multiple mobile robots
WO2019087638A1 (en) Information processing device and information processing method
CN112894824B (en) Robot control method and robot
JP4717098B2 (en) Display operation device
WO2024207600A1 (en) Control method for underwater robot, underwater robot, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant