CN117260688A - Robot, control method and device thereof, and storage medium - Google Patents

Robot, control method and device thereof, and storage medium

Info

Publication number
CN117260688A
Authority
CN
China
Prior art keywords
task
robot
detection
triggered
level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311380045.1A
Other languages
Chinese (zh)
Inventor
尚子涵
杜坤
刘凯
文林风
易鹏
丁松
白忠星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Robot Technology Co ltd
Original Assignee
Beijing Xiaomi Robot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Robot Technology Co ltd filed Critical Beijing Xiaomi Robot Technology Co ltd
Priority to CN202311380045.1A priority Critical patent/CN117260688A/en
Publication of CN117260688A publication Critical patent/CN117260688A/en
Pending legal-status Critical Current

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/08: Programme-controlled manipulators characterised by modular constructions
    • B25J 9/16: Programme controls
    • B25J 9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J 9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1661: Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • B25J 9/1679: Programme controls characterised by the tasks executed

Abstract

The control method comprises: in response to detecting, while the robot is executing a first task, that a second task located at a level above the first task is triggered, stopping all task processes below the level of the second task, and executing the task process corresponding to the second task. In the embodiments of the disclosure, the multiple tasks of the robot are divided into a multi-level structure and multi-task control is implemented according to the priority of each level, which improves the multi-task capability of the bionic robot, meets the automation requirements of the bionic robot in daily-life scenarios, and enhances the bionic and family attributes of the robot.

Description

Robot, control method and device thereof, and storage medium
Technical Field
The disclosure relates to the technical field of bionic robots, in particular to a robot, a control method and device thereof, and a storage medium.
Background
Currently, commercial deployments of robots are still mainly concentrated in industrial scenarios, which require only a single capability of the robot in a fixed setting; for example, a service robot only needs to provide the capability of performing a preset service response in a preset area, and a delivery robot only needs to provide the capability of performing a cargo delivery task along a preset route.
With the technical development of bionic robots, it has become possible for bionic robots to enter people's home life, and more and more manufacturers are researching deployment schemes for bionic robots in home scenarios. Unlike traditional industrial scenarios, the multi-task integration capability required of the robot in the home scenario becomes a control difficulty of the robot.
Disclosure of Invention
In order to improve the multi-task capability of a bionic robot, the embodiment of the disclosure provides a robot, a control method, a control device and a storage medium thereof.
In a first aspect, embodiments of the present disclosure provide a robot control method, where an execution task of a robot includes a plurality of levels, the method including:
responding to the fact that when the robot executes a first task, a second task located on the upper layer of the first task is detected to be triggered, and stopping all task processes below the level where the second task is located;
and executing the task process corresponding to the second task.
In some embodiments, where the second task is a top-level task, the process by which the second task is triggered includes:
acquiring current residual electric quantity information of the robot and estimated power consumption of all tasks currently executed;
And determining that the second task is triggered in response to the remaining power information being less than the estimated power consumption.
In some embodiments, the executing the task process corresponding to the second task includes:
and controlling the robot to navigate to a preset charging position for charging.
In some embodiments, in a case where the second task is a detection task located below a top-level task, the process in which the second task is triggered includes:
traversing all target detection subtasks included in the second task in the current scene to obtain a detection result of each target detection subtask;
and responding to the detection result of any one target detection subtask as a preset result, and determining that the second task is triggered.
In some embodiments, the executing the task process corresponding to the second task includes:
controlling the robot to generate alarm information; or,
and controlling the robot to generate alarm information and sending the alarm information to terminal equipment matched in advance.
In some embodiments, in a case that the second task is an inspection task located below the detection task, executing the task process corresponding to the second task includes:
Determining a next target position of the robot according to the pre-constructed map information and the current position of the robot;
controlling the robot to move to the target position, and executing preset actions at the target position;
and controlling the robot to move to a preset initial position until all target positions are traversed.
In some embodiments, in a case where the second task is an interactive task located at an upper layer of the inspection task, the process in which the second task is triggered includes:
in response to detecting a user trigger instruction, determining that the second task is triggered, the user trigger instruction including at least one of a voice instruction, a touch instruction, and a terminal control instruction;
the task process corresponding to the second task is executed, including:
and controlling the robot to enter an interactive interaction mode, wherein the interactive interaction mode represents the interaction mode of the robot and a user.
In a second aspect, embodiments of the present disclosure provide a robot control device, an execution task of a robot including a plurality of tiers, the device including:
the task detection module is configured to respond to the fact that when the robot executes a first task, a second task located on the upper layer of the first task is detected to be triggered, and all task processes below the level where the second task is located are stopped;
And the task execution module is configured to execute the task process corresponding to the second task.
In some embodiments, the task detection module is configured to:
acquiring current residual electric quantity information of the robot and estimated power consumption of all tasks currently executed;
and determining that the top-level task is triggered in response to the remaining power information being less than the estimated power consumption.
In some embodiments, the task execution module is configured to:
and controlling the robot to navigate to a preset charging position for charging.
In some embodiments, the task detection module is configured to:
traversing all target detection subtasks included in the detection task in the current scene to obtain a detection result of each target detection subtask;
and responding to the detection result of any one target detection subtask as a preset result, and determining that the detection task is triggered.
In some embodiments, the task execution module is configured to:
controlling the robot to generate alarm information; or,
and controlling the robot to generate alarm information and sending the alarm information to terminal equipment matched in advance.
In some embodiments, the task execution module is configured to:
Determining a next target position of the robot according to the pre-constructed map information and the current position of the robot;
controlling the robot to move to the target position, and executing preset actions at the target position;
and controlling the robot to move to a preset initial position until all target positions are traversed.
In some embodiments, the task detection module is configured to:
and in response to detecting a user trigger instruction, determining that the interactive task is triggered, wherein the user trigger instruction comprises at least one of a voice instruction, a touch instruction and a terminal control instruction.
In some embodiments, the task execution module is configured to:
and controlling the robot to enter an interactive interaction mode, wherein the interactive interaction mode represents the interaction mode of the robot and a user.
In a third aspect, embodiments of the present disclosure provide a robot comprising:
a processor; and
a memory storing computer instructions for causing a processor to perform the method of any embodiment of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a storage medium storing computer instructions for causing a processor of a robot to perform the method according to any embodiment of the first aspect.
According to the robot control method of the embodiments of the disclosure, the tasks executed by the robot are divided into a plurality of levels, and the method comprises: in response to detecting, while the robot is executing a first task, that a second task located at a level above the first task is triggered, stopping all task processes below the level of the second task, and executing the task process corresponding to the second task. In the embodiments of the disclosure, the multiple tasks of the robot are divided into a multi-level structure and multi-task control is implemented according to the priority of each level, which improves the multi-task capability of the bionic robot, meets the automation requirements of the bionic robot in daily-life scenarios, and enhances the bionic and family attributes of the robot.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the prior art, the drawings that are required in the detailed description or the prior art will be briefly described, it will be apparent that the drawings in the following description are some embodiments of the present disclosure, and other drawings may be obtained according to the drawings without inventive effort for a person of ordinary skill in the art.
Fig. 1 is a schematic view of an application scenario of a robot in some embodiments according to the present disclosure.
Fig. 2 is a block diagram of a robot in accordance with some embodiments of the present disclosure.
Fig. 3 is a flow chart of a robot control method in accordance with some embodiments of the present disclosure.
Fig. 4 is a flow chart of a robot control method in accordance with some embodiments of the present disclosure.
Fig. 5 is a flow chart of a robot control method in accordance with some embodiments of the present disclosure.
Fig. 6 is a schematic diagram of a robot control method according to some embodiments of the present disclosure.
Fig. 7 is a flow chart of a robot control method in accordance with some embodiments of the present disclosure.
Fig. 8 is a flow chart of a robot control method in accordance with some embodiments of the present disclosure.
Fig. 9 is a block diagram of a robot control device according to some embodiments of the present disclosure.
Fig. 10 is a block diagram of a robot in accordance with some embodiments of the present disclosure.
Detailed Description
The following description of the embodiments of the present disclosure is made clearly and fully with reference to the accompanying drawings; it is evident that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure, are intended to be within the scope of this disclosure. In addition, technical features of the different embodiments of the present disclosure described below may be combined with each other as long as they do not conflict with each other.
Currently, commercial deployments of robots are still mainly concentrated in industrial scenarios, which require only a single capability of the robot in a fixed setting; for example, a service robot only needs to provide the capability of performing a preset service response task in a preset area, and a delivery robot only needs to provide the capability of performing a delivery task along a preset route.
With the technical development of bionic robots, it has become possible for bionic robots to enter people's home life, and more and more manufacturers are researching deployment schemes for bionic robots in home scenarios.
In order to improve the practicality and family attributes of the bionic robot in a home scenario, the bionic robot often needs to provide many capabilities. For example, a bionic robot dog may have user interaction, voice recognition, face recognition, navigation, limb movement and other capabilities. When the robot needs to perform a task, one or more of these capabilities may be invoked and orchestrated for the task.
Unlike the single-task mode of traditional industrial scenarios, a bionic robot in a home scenario often needs to execute a plurality of tasks: some are generated from user instructions, while others are generated by the robot itself to maintain its survivability. The multi-task control capability of the robot in the home scenario therefore becomes a control difficulty of the robot.
Based on the above, the embodiments of the disclosure provide a robot, a control method and device thereof, and a storage medium, which aim to improve the multi-task capability of the bionic robot, so as to meet the automation requirements of the bionic robot in daily-life scenarios and enhance the bionic and family attributes of the robot.
Fig. 1 illustrates a schematic view of a robot scenario in some embodiments of the present disclosure, and as illustrated in fig. 1, a bionic robot according to an example of the present disclosure is a quadruped robot dog 100 imitating a dog animal shape, and the robot dog 100 may be connected to a terminal device 200 by means of wireless communication. The terminal device 200 may be, for example, a smart phone, a tablet computer, a wearable device, an infrared remote control, etc., to which the present disclosure is not limited.
In the examples of the present disclosure, the user may interact with the robot dog 100 directly through voice, gesture, touch and other operations, or may interact with the robot dog 100 indirectly through the terminal device 200, which is not limited in this disclosure.
Fig. 2 illustrates a functional module architecture of a robot in some embodiments of the present disclosure, and the robot of the present disclosure is described below with reference to fig. 2.
As shown in fig. 2, in the example of the present disclosure, the robot includes several functional modules in total:
1) A power management module (BMS) for providing battery information and status of the robot;
2) The automatic charging module (Automatic Charging) is used for providing an automatic charging function for the robot and completing the requirement of low-power automatic charging;
3) A Touch pad module (Touch) for providing and collecting Touch pad data through which a user can interact with the robot;
4) A depth sensor module (ToF) for providing robot head depth data, activating a user-stroking interaction function according to the depth data;
5) A Navigation module (Navigation) for providing autonomous Navigation capability of the robot;
6) A Motion module (Motion) for providing a robot Motion capability;
7) A Face recognition module (Face) for providing Face recognition capabilities for identifying family members;
8) A voiceprint recognition module (Voice) for providing Voice and voiceprint recognition and Voice broadcasting capabilities;
9) A patrol module (Security Monitoring) for providing a robot patrol capability;
10) A behavior recognition module (Action Recognition) for providing behavior recognition capabilities, recognizing abnormal behavior in the scene.
The foregoing is only a simple description of the structure and the respective functional modules of the robot, and those skilled in the art will certainly understand and fully implement the foregoing with reference to the related art, so that the disclosure is not repeated herein.
On the basis of the bionic robot, the embodiment of the disclosure provides a control method of the bionic robot, and the method can be executed and processed by a processor of the bionic robot.
As shown in fig. 3, in some embodiments, the robot control method of the examples of the present disclosure includes:
and S310, responding to the fact that when the robot executes a first task, detecting that a second task located on the upper layer of the first task is triggered, and stopping all task processes below the level where the second task is located.
S320, executing a task process corresponding to the second task.
In the embodiment of the disclosure, in the process of controlling the robot task, the robot task is divided into a multi-level structure, tasks at different levels correspond to different task priorities, and the higher the task level is, the higher the priority corresponding to the task level is, and the lower the task level is, the lower the priority corresponding to the task level is.
It will be appreciated that the robot may need to perform a plurality of tasks. For example, the tasks to be performed by the robot dog 100 shown in fig. 1 may include an inspection task, a survival task, a detection task, an interaction task, and the like.
In the embodiments of the disclosure, rather than executing these tasks as loosely coupled independent tasks, the tasks are divided into a multi-level structure in advance, so that different tasks correspond to different levels, i.e., different priorities.
For example, in one example, the control task hierarchy of a robot may be as shown in table one below:
list one
Top-level tasks Task a
Secondary tasks Task b; task c
Three-level task Task d; task e; task f
…… ……
In the example shown, the hierarchy includes a top-level task at the highest level; the top-level task has the highest priority, e.g., task a in the example. The layer below the top-level task holds the secondary tasks, which may include one or more tasks, such as task b and task c in the example shown. The layer below the secondary tasks holds the tertiary tasks, which may likewise include one or more tasks, such as task d, task e, and task f in the example shown.
In the embodiment of the disclosure, the higher the task level, the higher the task priority, and conversely, the lower the task level, the lower the task priority. Of course, the task level described in the present disclosure is not limited to the table one example, which is merely an exemplary illustration of the present disclosure, and those skilled in the art will understand that the present disclosure is not repeated here.
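To make the level-to-priority mapping concrete, the following is a minimal sketch in Python (illustrative only; the TaskLevels class name, the list layout and the task identifiers are assumptions for the example, not part of the disclosure). Levels are stored top first, so a smaller index means a higher priority:

    from dataclasses import dataclass, field

    @dataclass
    class TaskLevels:
        # levels[0] is the top level; later entries are lower levels with lower priority
        levels: list = field(default_factory=list)

        def priority_of(self, task: str) -> int:
            """Return the level index of a task; a smaller index means a higher priority."""
            for index, tasks in enumerate(self.levels):
                if task in tasks:
                    return index
            raise ValueError(f"unknown task: {task}")

    # Hierarchy of table one: task a on top, tasks b/c on the second level, d/e/f on the third
    hierarchy = TaskLevels(levels=[["task a"], ["task b", "task c"], ["task d", "task e", "task f"]])
    assert hierarchy.priority_of("task a") < hierarchy.priority_of("task e")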
In some embodiments, taking the robot dog 100 shown in fig. 1 as an example, the multi-task hierarchy executed by the robot dog 100 may be as shown in table two below:
Table two
Top-level task: Survival task
Secondary tasks: Detection task; Interaction task
Three-level task: Inspection task
In the table two example, the top-level tasks of the robot dog 100 include only the "survival task", the second-level tasks include the "detection task" and the "interaction task", and the third-level tasks include the "inspection task".
Survival tasks refer to tasks that affect the normal operation of the robot; such tasks are typically generated automatically by the robot. For example, in one example, the survival task may be: ensuring that the current remaining capacity of the robot meets the task execution requirement. For another example, the survival task may be: ensuring that each functional module of the robot passes its self-check.
Detection tasks refer to target detection tasks performed by the robot; such tasks can generally be selected or freely configured by the user. For example, the detection tasks may include stranger detection, flame/smoke detection, gas leak detection, abnormal behavior detection, and so forth.
An interactive task refers to a task initiated by a user that interacts with a robot, such tasks typically being triggered by a user instruction. For example, in one example, a user may initiate an interaction instruction by voice, gesture, touch, terminal control, etc., the robot enters an interaction mode, and corresponding behavior is generated during the interaction mode according to the user instruction.
The inspection task refers to a navigation task that the robot executes periodically or aperiodically along a preset path; the task may be triggered by the user or initiated autonomously by the robot, and the inspection route or range may be selected automatically.
Of course, it will be understood by those skilled in the art that the multi-level task structure of the embodiment of the present disclosure is not limited to the table two examples, and the level task architecture may be freely set according to the scene requirement or the user requirement, which is not described in detail in the present disclosure.
In the disclosed examples, when the robot is executing a first task, in response to detecting that a second task located at an upper layer of the first task is triggered, all task processes below the level where the second task is located are stopped, and the task process corresponding to the second task is executed.
Taking table two as an example, in one example scenario the robot is executing the second-level "detection task" and the third-level "inspection task" simultaneously, i.e., the first task includes the "detection task" and the "inspection task". In response to detecting that the top-level "survival task" is triggered, i.e., the second task is the survival task located at the top level, all task processes below the top level where the survival task is located are stopped; that is, the task processes of the "detection task" and the "inspection task" are stopped, and then the task process corresponding to the survival task is executed.
In another example scenario, the robot dog is executing the three-level "inspection task", i.e., the first task is the "inspection task". In response to detecting that the "interactive task" on the upper layer is triggered, i.e., the second task is the "interactive task", the task processes below the "interactive task" are stopped; that is, the task process of the "inspection task" is stopped, and then the task process corresponding to the interactive task is executed.
For example, in a corresponding daily-life scenario, when the robot dog 100 is executing the inspection task and the user calls the robot dog 100, the robot dog 100 can determine that the interactive task is triggered upon detecting an interaction instruction such as the user's voice, a gesture or a terminal instruction, so it pauses the inspection task and enters the interaction mode. For another example, when the robot dog 100 is executing the inspection task, it may determine that the detection task is triggered when a stranger is detected or abnormal behavior is recognized, so it pauses the inspection task and executes the detection task's process of generating alarm information.
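The preemption rule in the two scenarios above can be written down in a few lines. The sketch below is illustrative only: the TASK_LEVELS mapping, the StubProcess handle and the start_process helper are hypothetical stand-ins for the robot's real task processes, not an interface defined by the disclosure.

    class StubProcess:
        """Hypothetical stand-in for a real task process handle."""
        def __init__(self, name):
            self.name = name

        def stop(self):
            print(f"stopping task process: {self.name}")

    def start_process(task):
        print(f"starting task process: {task}")
        return StubProcess(task)

    # Level of each task from table two: 0 is the top level (highest priority)
    TASK_LEVELS = {"survival task": 0, "detection task": 1, "interaction task": 1, "inspection task": 2}

    def on_task_triggered(running, triggered):
        """Stop every running task process below the triggered task's level, then run it."""
        triggered_level = TASK_LEVELS[triggered]
        for task, process in list(running.items()):
            if TASK_LEVELS[task] > triggered_level:  # strictly lower level than the triggered task
                process.stop()
                del running[task]
        running[triggered] = start_process(triggered)

    # Scenario above: detection and inspection are running when the survival task is triggered
    running = {"detection task": start_process("detection task"),
               "inspection task": start_process("inspection task")}
    on_task_triggered(running, "survival task")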
As can be seen from the foregoing, in the embodiments of the present disclosure, the multiple tasks of the robot are divided into a multi-level structure and multi-task control is implemented according to the priority of each level, which improves the multi-task capability of the bionic robot, meets the automation requirements of the bionic robot in daily-life scenarios, and enhances the bionic and family attributes of the robot.
Taking table two as an example, in some embodiments of the disclosure, the multi-level tasks include a survival task at the top layer, a detection task and an interaction task at the second layer, and an inspection task at the bottom layer.
As shown in fig. 4, in some embodiments, the process by which the top-level task is triggered includes:
s410, obtaining current residual capacity information of the robot and estimated power consumption of all tasks currently executed.
And S420, determining that the top-level task is triggered in response to the residual power information being smaller than the estimated power consumption.
S430, controlling the robot to navigate to a preset charging position for charging.
In the embodiments of the disclosure, the tasks currently executed by the robot refer to all tasks included in the robot's current task process. For instance, in one example, the tasks currently performed by the robot include a "detection task" and an "inspection task", so all currently executed tasks comprise the "detection task" and the "inspection task".
It will be appreciated that, when performing these tasks, the robot can estimate the electrical power required to complete them.
In some embodiments, a power consumption table corresponding to different tasks may be built in the robot in advance, so that the robot may determine power consumption corresponding to each task through table lookup, and add power consumption of all tasks to calculate estimated power consumption for executing all tasks.
In other embodiments, a power consumption model may be built into the robot in advance, so that the robot can calculate in real time, based on the model, the power required to execute all tasks, thereby obtaining the estimated power consumption.
Of course, those skilled in the art will understand that the manner of calculating the estimated power consumption is not limited to the above example, and this disclosure will not be repeated.
Referring to fig. 2, the robot may obtain its current remaining power information through the power management module. If the current remaining power is less than the estimated power consumption, the robot's remaining charge is insufficient to complete all tasks and the robot's survivability must be guaranteed first, so the top-level survival task can be determined to be triggered.
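Expressed as code, the comparison may look like the sketch below (a simplified illustration; the per-task consumption figures, given here as a percentage of battery capacity, and the function names are assumptions for the example rather than values given by the disclosure):

    # Hypothetical per-task power consumption table (percent of battery capacity)
    POWER_TABLE = {"inspection task": 15.0, "detection task": 8.0, "interaction task": 5.0}

    def estimated_power_consumption(current_tasks):
        """Estimated power needed to finish all currently executed tasks (table lookup)."""
        return sum(POWER_TABLE[task] for task in current_tasks)

    def survival_task_triggered(remaining_percent, current_tasks):
        # Triggered when the remaining charge cannot cover the estimated consumption
        return remaining_percent < estimated_power_consumption(current_tasks)

    if survival_task_triggered(remaining_percent=18.0, current_tasks=["inspection task", "detection task"]):
        print("survival task triggered: navigate to the preset charging position")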
Under the condition that the survival task is triggered, the robot can stop all task processes and then execute the survival task process. For example, in some embodiments, the robot may stop the progress of the inspection and detection tasks and then autonomously navigate to a preset charging location for charging.
The preset charging position refers to a position for charging the robot, for example, in one example, the robot generally has an adaptive charging pile, and the position of the charging pile can be fixed, so that the robot can autonomously navigate to the charging pile position to charge, and the top-layer survival task is completed.
In some embodiments, the robot may interact with the charging pile in a wireless manner, so as to perform path planning based on the current position and the charging pile position, thereby autonomously navigating to the charging pile position, which may be understood by those skilled in the art, and the disclosure is not repeated.
In some embodiments, when the current remaining capacity information of the robot is greater than or equal to the estimated power consumption, it indicates that the current remaining capacity of the robot can support completing all tasks, and at this time, all the tasks can be started and executed.
In addition, considering that the robot may be charging through a wired connection, in order to avoid damage to the charger caused by the robot moving, in some embodiments, when the current remaining capacity of the robot is greater than or equal to the estimated power consumption, it may further be determined whether the robot is currently in a wired charging state. If the robot is currently in a wired charging state, no task is performed; the wired charging state is maintained until the wired charging is disconnected. Otherwise, if the robot is not currently in a wired charging state, all tasks may be started and executed. Those skilled in the art will appreciate that this disclosure is not repeated.
In the above embodiment, whether the top-level survival task is triggered is determined by comparing the current remaining capacity information of the robot with the estimated power consumption required for executing the task. In other embodiments, it may also be determined in other manners whether the top-level survival task is triggered, for example, the current remaining capacity information may be compared with a preset capacity threshold, for example, the preset capacity threshold may be 20%, and in the case that the current remaining capacity information is less than the preset capacity threshold, it is determined that the top-level survival task is triggered. Those skilled in the art will appreciate that this disclosure is not repeated.
In one example scenario, when the robot performs the inspection task, the survival task is determined to be triggered in response to detecting that the current remaining power information is smaller than the estimated power consumption for performing the inspection task, so that the task process of the inspection task is stopped according to the method process shown in fig. 4, and the survival task with automatic charging is performed.
In another example scenario, when the robot executes the interactive task, it may be determined that the survival task is triggered in response to detecting that the current remaining power information is less than the estimated power consumption for executing the interactive task, thereby stopping the task process of the interactive task and executing the survival task with automatic charging according to the method process shown in fig. 4.
According to the method, in the embodiment of the disclosure, the multitasking of the robot is divided into the multi-level structure, and the survival task of the robot is used as the top-level task, so that the survivability of the robot is preferentially ensured in the task execution process of the robot, the downtime risk caused by electric quantity exhaustion is avoided, and the control effect of the robot is improved.
In some implementations, the detection tasks shown in table two may include one or more target detection sub-tasks that may be used to detect corresponding target objects.
For example, in one example, the detection task may include the following target detection subtasks:
family member detection, namely acquiring a scene image through an image acquisition device arranged on the robot, and, when a face is recognized in the scene image, comparing the recognized face with the face templates of family members to determine whether the person is a family member, wherein the face templates of the family members are face images acquired in advance with user authorization and stored in the robot;
flame/smoke detection, namely acquiring a scene image through an image acquisition device arranged on the robot, detecting whether flame or smoke exists on the scene image through an image recognition technology, and sending alarm information when the flame/smoke is recognized;
Abnormal behavior detection, namely acquiring a scene image through an image acquisition device arranged on the robot, and detecting through image recognition technology whether the scene image includes abnormal behavior, such as falling or fighting, and sending out alarm information when abnormal behavior is recognized;
the gas leakage detection can detect the gas content in the current air through a sensor arranged on the robot, and alarm information is sent out when the gas content exceeds a preset value.
Of course, those skilled in the art will appreciate that the object detection subtask is not limited to the above examples, but may include any other type of detection subtask, and this disclosure is not enumerated herein.
As shown in fig. 5, in some embodiments, the process of detecting that a task is triggered includes:
s510, traversing and executing all target detection subtasks in the current scene to obtain a detection result of each target detection subtask.
And S520, responding to the detection result of any target detection subtask as a preset result, and determining that the detection task is triggered.
S530, controlling the robot to generate alarm information.
In the embodiment of the disclosure, the detection task includes a plurality of target detection subtasks, as in the previous example, and the robot may sequentially traverse and execute all the target detection subtasks.
For example, in one example, the robot may acquire a current scene image, perform stranger detection, flame/smoke detection, and abnormal behavior detection in order based on the current scene image, and obtain a corresponding detection result. Meanwhile, the robot can collect the gas content of the air in the current scene and determine a corresponding detection result based on the gas content.
The preset result indicates that the detection result of the target detection subtask hits, for example, taking family member detection as an example, the corresponding preset result is "detecting a non-family member", and taking abnormal behavior detection as an example, the corresponding preset result is "detecting abnormal behavior". After the detection results of all the target detection subtasks are obtained, responding to any one detection result of the target detection subtasks to be a preset result, and determining that the detection task is triggered.
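A minimal sketch of this traversal is given below (the individual detector functions are placeholders; a real implementation would call the image recognition and gas sensor capabilities of the modules in fig. 2, and the gas threshold used here is an assumption for illustration):

    def detect_non_family_member(scene):
        return scene.get("stranger_present", False)

    def detect_flame_or_smoke(scene):
        return scene.get("flame_or_smoke", False)

    def detect_abnormal_behavior(scene):
        return scene.get("abnormal_behavior", False)

    def detect_gas_leak(scene):
        return scene.get("gas_ppm", 0) > 500  # assumed threshold for illustration

    TARGET_DETECTION_SUBTASKS = [detect_non_family_member, detect_flame_or_smoke,
                                 detect_abnormal_behavior, detect_gas_leak]

    def detection_task_triggered(scene):
        """Traverse every target detection subtask; any preset (hit) result triggers the task."""
        results = [subtask(scene) for subtask in TARGET_DETECTION_SUBTASKS]
        return any(results)

    if detection_task_triggered({"stranger_present": True, "gas_ppm": 120}):
        print("detection task triggered: generate alarm information")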
In some embodiments, the detection task being triggered indicates that an abnormal condition has been detected, at which point the robot may be controlled to generate alarm information. The alarm information may include audible and visual alarms, for example emitted by a speaker, an LED (Light Emitting Diode) lamp, etc. on the robot; the audible alarm may be, for example, a simulated dog bark, so as to improve the bionic quality of the robot dog 100.
In other embodiments, as shown in connection with fig. 1, the robot may send the alarm information to the terminal device 200 after generating it, thereby reminding the user of the terminal device 200 that an abnormal situation has been found, so that the user can be alerted even when no longer within the audible or visible range of the robot's alarm.
As can be seen from the above, in the embodiment of the present disclosure, by setting a plurality of target detection subtasks for the detection task, the detection capability of the robot in the home scene is improved, and the security effect is improved.
The inspection task refers to a navigation task of a robot at regular or irregular intervals, for example, fig. 6 shows a schematic diagram of the inspection task of the robot in a home scenario.
As shown in fig. 6, the initial position of the robot is a patrol point a, and the routes of a single patrol are in turn: moving from patrol point a to patrol point B, moving from patrol point B to patrol point C, moving from patrol point C to patrol point D, moving from patrol point D to patrol point E, moving from patrol point E to patrol point B, returning from patrol point B to initial patrol point a. Based on the example of fig. 6, a method procedure of an embodiment of the present disclosure will be described with reference to fig. 7.
As shown in fig. 7, in some embodiments, the process of performing the inspection task includes:
S710, determining the next target position of the robot according to the pre-constructed map information and the current position of the robot.
S720, controlling the robot to move to the target position, and executing a preset action at the target position.
And S730, controlling the robot to move to a preset initial position until all target positions are traversed.
In this embodiment of the present disclosure, map information of a scene where a robot is located may be pre-constructed based on a SLAM (Simultaneous Localization and Mapping, synchronous positioning and mapping) technology, and a person skilled in the art can understand and fully implement a process of SLAM mapping by referring to a related technology, which is not described in detail in this disclosure.
In the robot inspection process, the next target position for the robot to move to can be determined from the current position information and the pre-constructed map information. For example, as shown in fig. 6, one or more movement positions may be preset on the map information for the robot's inspection route; these positions are the patrol points A to E shown in fig. 6. For example, when the current position of the robot is patrol point C, the next target position can be determined from the map information to be patrol point D.
In this example, the robot may call the navigation module to move to patrol point D, then repeat the above process and move to the next target position in turn until all patrol points have been traversed; the robot can then be controlled to move to the preset initial position, namely patrol point A where the robot started, completing the inspection task.
It should be noted that, in the embodiments of the disclosure, in order to avoid spots being missed during the robot's inspection, when the robot moves to a target position the robot may be controlled to execute a preset action at that position. The preset action may be a simple limb action, a single task, or a combination of several tasks.
For example, in one example, taking the robot dog 100 shown in fig. 1 as an example, each time the robot dog 100 moves to a target location it may be controlled to sit down and bark once, or to spin around several times, and such action behavior can be used to record the robot dog's motion path. For another example, the detection task may be performed once each time the robot dog 100 moves to a target location, traversing all target detection subtasks at that location. Those skilled in the art will appreciate that this disclosure is not repeated.
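Taking the patrol points of fig. 6 as an example, a single inspection round could be sketched as below (illustrative only; navigate_to and perform_preset_action are assumed names standing in for calls to the navigation and motion modules):

    PATROL_POINTS = ["B", "C", "D", "E", "B"]  # route of a single patrol, as in fig. 6
    INITIAL_POSITION = "A"

    def navigate_to(position):
        print(f"navigating to patrol point {position}")

    def perform_preset_action(position):
        print(f"performing preset action at {position} (e.g. sit down and bark once)")

    def run_inspection_task():
        """Visit every patrol point in order, act at each one, then return to the start."""
        for point in PATROL_POINTS:
            navigate_to(point)            # next target position, from the pre-built map
            perform_preset_action(point)
        navigate_to(INITIAL_POSITION)     # all target positions traversed: return to start

    run_inspection_task()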
In the embodiment of the disclosure, the robot may perform the above-mentioned inspection task periodically or aperiodically, and the time interval for performing the inspection task may be set by a user or may be determined autonomously by the robot, which is not limited in the disclosure.
According to the embodiment of the disclosure, the security capability of the robot in a home scene is improved by executing the inspection task, so that the robot has home properties and bionic performance, and the control effect of the robot is improved.
In the embodiment of the disclosure, the interaction task refers to a control task for interaction between the robot and the user, in some embodiments, an interaction mode may be configured for the robot in advance, and in the interaction mode, the robot may provide various interaction capabilities with the user, so as to improve the interestingness of the robot.
As shown in fig. 8, in some embodiments, in the control method of the examples of the disclosure, the triggering process of the interaction task includes:
and S810, responding to detection of a user trigger instruction, and determining that the interaction task is triggered.
S820, controlling the robot to enter an interactive interaction mode.
In the embodiment of the disclosure, the manner of triggering the robot interaction task includes, but is not limited to, a voice command, a touch command and a terminal control command.
Taking the robot dog 100 shown in fig. 1 as an example, a user may set a corresponding wake word for the robot dog 100. When the user wishes to interact with the robot dog 100, the user only needs to speak the preset wake word; the robot acquires the corresponding voice information and determines, through voiceprint recognition, voice recognition and similar technologies, that a family member is calling it, at which point the interaction task can be determined to be triggered.
For example, in another example, a user may touch a touch pad on the robot dog 100, and when the robot dog 100 detects a touch signal from the user touching the touch pad, it may be determined that an interaction task is triggered.
For example, in one example, a user may touch the chin of the robot dog 100, and when a depth sensor provided to the chin of the robot dog 100 detects a user touch, it may be determined that the user desires to interact with the robot dog 100, and an interaction task is triggered.
For example, in still another example, when a user desires to interact with the robot dog 100, the terminal device 200 may be operated to transmit a control command to the robot dog 100, and when the robot dog 100 receives the terminal control command, it may be determined that an interaction task is triggered.
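The trigger channels above can be folded into one check, sketched below (the event fields are invented for illustration; real signals would come from the voice, touch pad, depth sensor and communication modules of fig. 2):

    def interaction_task_triggered(event):
        """Any voice command, touch signal or terminal control command triggers the task."""
        return bool(
            event.get("voice_command")        # e.g. the preset wake word was recognised
            or event.get("touch_signal")      # touch pad pressed or chin depth sensor touched
            or event.get("terminal_command")  # control command received from terminal device 200
        )

    print(interaction_task_triggered({"touch_signal": True}))   # True
    print(interaction_task_triggered({}))                       # False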
Of course, those skilled in the art will appreciate that other ways of triggering the robot interaction task may be included, and this disclosure will not enumerate.
When the interaction task is triggered, it indicates that the user expects to interact with the robot. Combined with the multi-level task framework shown in table two, if the robot is currently executing the inspection task, then since the interaction task located at the upper layer of the inspection task has been triggered, the task process of the inspection task can be stopped according to the method process described above, the task process of the interaction task is executed, and the robot enters the interactive interaction mode.
In the interactive interaction mode, the robot can execute corresponding actions or functions according to user instructions. For example, the user can control actions of the robot such as "lying down" and "turning around", can ask the robot for network information such as "weather" or "stock trends", and can set to-do reminders, among other things, which the disclosure does not enumerate.
According to the embodiments of the disclosure, executing the interaction task improves the interaction capability between the robot and the user and adds entertainment and interest, so that the robot takes on family-member attributes and family functions, facilitating practical deployment in home scenarios.
In the embodiment of the disclosure, when the robot enters the interactive interaction mode, the interactive interaction mode can be exited based on the user instruction, so that the execution of the previously suspended subordinate task is continued.
For example, in one example, the robot detects a voice command from the user while performing the inspection task, and is therefore controlled to suspend the inspection task and enter the interactive interaction mode. When the user wants the robot to continue the inspection task, the user can make the robot exit the interactive interaction mode by means of, for example, a voice command, gesture command, touch command or terminal control command; after exiting the interactive interaction mode, the robot continues to perform the inspection task.
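The suspend-and-resume behaviour in this example can be captured with a small amount of bookkeeping, sketched below (illustrative only; the class and its methods are assumptions, not an interface defined by the disclosure):

    class InteractionModeController:
        """Minimal suspend/resume bookkeeping for the scenario above (illustrative only)."""

        def __init__(self):
            self.suspended_tasks = []

        def enter_interactive_mode(self, current_task):
            self.suspended_tasks.append(current_task)  # pause the lower-level task
            print(f"suspending {current_task}; entering interactive interaction mode")

        def exit_interactive_mode(self):
            print("exiting interactive interaction mode")
            if self.suspended_tasks:
                resumed = self.suspended_tasks.pop()
                print(f"resuming suspended task: {resumed}")

    controller = InteractionModeController()
    controller.enter_interactive_mode("inspection task")
    controller.exit_interactive_mode()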
As can be seen from the foregoing, in the embodiments of the present disclosure, the multiple tasks of the robot are divided into a multi-level structure and multi-task control is implemented according to the priority of each level, which improves the multi-task capability of the bionic robot, meets the automation requirements of the bionic robot in daily-life scenarios, and enhances the bionic and family attributes of the robot.
In some embodiments, the present disclosure provides a robot control device, which may be the robot of any of the foregoing embodiments.
As shown in fig. 9, in some embodiments, a robot control device of the examples of the present disclosure includes:
the task detection module 10 is configured to respond to the fact that when the robot executes a first task, a second task located on the upper layer of the first task is detected to be triggered, and all task processes below the level where the second task is located are stopped;
the task execution module 20 is configured to execute a task process corresponding to the second task.
In some embodiments, the task detection module 10 is configured to:
acquiring current residual electric quantity information of the robot and estimated power consumption of all tasks currently executed;
And determining that the top-level task is triggered in response to the remaining power information being less than the estimated power consumption.
In some embodiments, the task execution module 20 is configured to:
and controlling the robot to navigate to a preset charging position for charging.
In some embodiments, the task detection module 10 is configured to:
traversing all target detection subtasks included in the detection task in the current scene to obtain a detection result of each target detection subtask;
and responding to the detection result of any one target detection subtask as a preset result, and determining that the detection task is triggered.
In some embodiments, the task execution module 20 is configured to:
controlling the robot to generate alarm information; or,
and controlling the robot to generate alarm information and sending the alarm information to terminal equipment matched in advance.
In some embodiments, the task execution module 20 is configured to:
determining a next target position of the robot according to the pre-constructed map information and the current position of the robot;
controlling the robot to move to the target position, and executing preset actions at the target position;
And controlling the robot to move to a preset initial position until all target positions are traversed.
In some embodiments, the task detection module 10 is configured to:
and in response to detecting a user trigger instruction, determining that the interactive task is triggered, wherein the user trigger instruction comprises at least one of a voice instruction, a touch instruction and a terminal control instruction.
In some embodiments, the task execution module 20 is configured to:
and controlling the robot to enter an interactive interaction mode, wherein the interactive interaction mode represents the interaction mode of the robot and a user.
As can be seen from the foregoing, in the embodiments of the present disclosure, the multiple tasks of the robot are divided into a multi-level structure and multi-task control is implemented according to the priority of each level, which improves the multi-task capability of the bionic robot, meets the automation requirements of the bionic robot in daily-life scenarios, and enhances the bionic and family attributes of the robot.
In some embodiments, the present disclosure provides a robot comprising:
a processor; and
a memory storing computer instructions for causing a processor to perform the method of any of the embodiments described above.
In some embodiments, the present disclosure provides a storage medium storing computer instructions for causing a processor of a robot to perform the method of any of the embodiments described above.
Specifically, fig. 10 shows a schematic structural diagram of a robot 600 suitable for implementing the method of the present disclosure, and by means of the robot shown in fig. 10, the above-described corresponding functions of the processor and the storage medium may be implemented.
As shown in fig. 10, the robot 600 includes a processor 601, which can perform various appropriate actions and processes according to a program stored in a memory 602 or a program loaded into the memory 602 from a storage section 608. In the memory 602, various programs and data required for the operation of the robot 600 are also stored. The processor 601 and the memory 602 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, mouse, etc.; an output portion 607 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The drive 610 is also connected to the I/O interface 605 as needed. Removable media 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed as needed on drive 610 so that a computer program read therefrom is installed as needed into storage section 608.
In particular, according to embodiments of the present disclosure, the above method processes may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method described above. In such an embodiment, the computer program can be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be apparent that the above embodiments are merely examples given for clarity of illustration and are not limiting of the embodiments. Other variations or modifications based on the above teachings will be apparent to those of ordinary skill in the art; it is neither necessary nor possible to list all embodiments exhaustively here. Obvious variations or modifications derived therefrom by those skilled in the art remain within the scope of the present disclosure.

Claims (10)

1. A method of controlling a robot, wherein an execution task of the robot includes a plurality of levels, the method comprising:
responding to the fact that when the robot executes a first task, a second task located on the upper layer of the first task is detected to be triggered, and stopping all task processes below the level where the second task is located;
and executing the task process corresponding to the second task.
2. The method of claim 1, wherein in the case where the second task is a top-level task, the process in which the second task is triggered comprises:
acquiring current residual electric quantity information of the robot and estimated power consumption of all tasks currently executed;
and determining that the top-level task is triggered in response to the remaining power information being less than the estimated power consumption.
3. The method of claim 2, wherein executing the task process corresponding to the second task comprises:
and controlling the robot to navigate to a preset charging position for charging.
4. A method according to any one of claims 1 to 3, characterized in that in case the second task is a detection task located below a top-level task, the process by which the second task is triggered comprises:
traversing all target detection subtasks included in the detection task in the current scene to obtain a detection result of each target detection subtask;
and responding to the detection result of any one target detection subtask as a preset result, and determining that the detection task is triggered.
5. The method of claim 4, wherein executing the task process corresponding to the second task comprises:
controlling the robot to generate alarm information; or,
and controlling the robot to generate alarm information and sending the alarm information to terminal equipment matched in advance.
6. The method according to claim 4, wherein, in the case where the second task is an inspection task located below the detection task, the executing the task process corresponding to the second task includes:
Determining a next target position of the robot according to the pre-constructed map information and the current position of the robot;
controlling the robot to move to the target position, and executing preset actions at the target position;
and controlling the robot to move to a preset initial position until all target positions are traversed.
7. The method of claim 6, wherein, in the case where the second task is an interactive task located at an upper layer of the inspection task, the process in which the second task is triggered includes:
in response to detecting a user trigger instruction, determining that the interactive task is triggered, wherein the user trigger instruction comprises at least one of a voice instruction, a touch instruction and a terminal control instruction;
the task process corresponding to the second task is executed, including:
and controlling the robot to enter an interactive interaction mode, wherein the interactive interaction mode represents the interaction mode of the robot and a user.
8. A robot control device, wherein an execution task of a robot includes a plurality of levels, the device comprising:
the task detection module is configured to respond to the fact that when the robot executes a first task, a second task located on the upper layer of the first task is detected to be triggered, and all task processes below the level where the second task is located are stopped;
And the task execution module is configured to execute the task process corresponding to the second task.
9. A robot, comprising:
a processor; and
memory storing computer instructions for causing a processor to perform the method according to any one of claims 1 to 7.
10. A storage medium, characterized in that computer instructions are stored for causing a processor of a robot to perform the method according to any one of claims 1 to 7.
CN202311380045.1A 2023-10-23 2023-10-23 Robot, control method and device thereof, and storage medium Pending CN117260688A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311380045.1A CN117260688A (en) 2023-10-23 2023-10-23 Robot, control method and device thereof, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311380045.1A CN117260688A (en) 2023-10-23 2023-10-23 Robot, control method and device thereof, and storage medium

Publications (1)

Publication Number Publication Date
CN117260688A (en) 2023-12-22

Family

ID=89201005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311380045.1A Pending CN117260688A (en) 2023-10-23 2023-10-23 Robot, control method and device thereof, and storage medium

Country Status (1)

Country Link
CN (1) CN117260688A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150367513A1 (en) * 2013-03-06 2015-12-24 Robotex Inc. System and method for collecting and processing data and for utilizing robotic and/or human resources
CN108897282A (en) * 2018-05-03 2018-11-27 顺丰科技有限公司 Multitask modularization robot and schedule management method, device and its storage medium
CN110196594A (en) * 2019-05-24 2019-09-03 北京海益同展信息科技有限公司 Computer room inspection control method, device, equipment and storage medium
CN111618854A (en) * 2020-05-26 2020-09-04 中国人民解放军国防科技大学 Task segmentation and collaboration method for security robot
CN112318484A (en) * 2020-12-15 2021-02-05 苏州光格设备有限公司 Task scheduling method for track inspection robot
CN114595042A (en) * 2022-02-10 2022-06-07 北京旷视机器人技术有限公司 Task execution method, robot, storage medium, and computer program product
CN115723142A (en) * 2022-12-09 2023-03-03 深圳市优必行科技有限公司 Robot control method, device, computer readable storage medium and robot

Similar Documents

Publication Publication Date Title
Gross et al. Progress in developing a socially assistive mobile home robot companion for the elderly with mild cognitive impairment
US20200319640A1 (en) Method for navigation of a robot
CN108231069B (en) Voice control method of cleaning robot, cloud server, cleaning robot and storage medium thereof
JP2022173244A (en) Mobile cleaning robot artificial intelligence for situational awareness
US11393317B2 (en) Enhanced audiovisual analytics
US20190332119A1 (en) Mobile robot and method of controlling the same
Gross et al. I'll keep an eye on you: Home robot companion for elderly people with cognitive impairment
Del Duchetto et al. Lindsey the tour guide robot-usage patterns in a museum long-term deployment
CN106406119A (en) Service robot based on voice interaction, cloud technology and integrated intelligent home monitoring
US10936880B2 (en) Surveillance
US11876925B2 (en) Electronic device and method for controlling the electronic device to provide output information of event based on context
CN102609089A (en) Multi-state model for robot and user interaction
CN105446332B (en) Automatic cleaning control method and device and electronic equipment
KR102008367B1 (en) System and method for autonomous mobile robot using a.i. planning and smart indoor work management system using the robot
CN110088704A (en) The method for controlling cleaning equipment
Del Duchetto et al. Are you still with me? Continuous engagement assessment from a robot's point of view
US11654554B2 (en) Artificial intelligence cleaning robot and method thereof
KR20200027072A (en) Controlling method for Moving robot
JPWO2018087971A1 (en) MOBILE BODY CONTROL DEVICE AND MOBILE BODY CONTROL PROGRAM
CN113199472A (en) Robot control method, device, storage medium, electronic device, and robot
CN117500642A (en) System, apparatus and method for exploiting robot autonomy
WO2019211932A1 (en) Information processing device, information processing method, program, and autonomous robot control system
CN117260688A (en) Robot, control method and device thereof, and storage medium
CN111343696A (en) Communication method of self-moving equipment, self-moving equipment and storage medium
CN112907803B (en) Automatic AI (Artificial Intelligence) adjustment intelligent access control system and access control detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination