CN114393582B - Robot, control method and system thereof and storage device - Google Patents


Info

Publication number
CN114393582B
CN114393582B (application CN202210068585.5A)
Authority
CN
China
Prior art keywords
information
state information
intelligent
expected
self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210068585.5A
Other languages
Chinese (zh)
Other versions
CN114393582A (en)
Inventor
杜晓雨
王冲
肖阳
谭斌
刘旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Energy Injection Technology Co ltd
Original Assignee
Shenzhen Energy Injection Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Energy Injection Technology Co ltd filed Critical Shenzhen Energy Injection Technology Co ltd
Priority to CN202210068585.5A
Publication of CN114393582A
Application granted
Publication of CN114393582B
Legal status: Active
Anticipated expiration


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1682 Dual arm manipulator; Coordination of several manipulators

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a robot, a control method and system thereof, and a storage device. The robot control method is applied to an intelligent robot group comprising at least two intelligent robots, and comprises the following steps: the target intelligent robot sends its own state information and expected state information to the other intelligent robots in the group, so that the other intelligent robots adjust their current state information according to the received self state information and expected state information; the target intelligent robot then receives state return information sent by the other intelligent robots, which includes their adjusted current state information. The invention enables an entire group of intelligent robots to be controlled through a single target intelligent robot, which makes robot control more engaging, improves control efficiency, and improves the user experience.

Description

Robot, control method and system thereof and storage device
Technical Field
The invention relates to the field of robots, and in particular to a robot, a control method and system thereof, and a storage device.
Background
With the development of intelligent robot technology, robots are widely used in people's work and daily life. In use, an intelligent robot is expected not only to produce fixed sounds and actions but also to display changes in emotion and behavior. Existing intelligent robots, however, can only respond to direct user operations (such as patting and stroking). When a robot group consists of several intelligent robots, having every robot in the group provide some action or emotion feedback requires the user to operate and configure each robot individually, which is cumbersome and error-prone.
Disclosure of Invention
The technical problem the invention aims to solve is that requiring the user to operate and configure each intelligent robot individually is cumbersome and error-prone. To address this defect in the prior art, the invention provides a robot, a control method and system thereof, and a storage device, which allow an entire group of intelligent robots to be controlled through a single target intelligent robot, making robot control more engaging and improving control efficiency and the user experience.
The technical scheme adopted to solve the technical problem is as follows: a robot control method is provided, applied to an intelligent robot group comprising at least two intelligent robots, in which any two adjacent intelligent robots are connected with each other.
The robot control method comprises the following steps:
the target intelligent robot sends self state information and expected state information to other intelligent robots in the intelligent robot group, wherein the self state information comprises self emotion information and/or self action information, the expected state information comprises expected emotion information and/or expected action information, so that the other intelligent robots adjust current state information according to the self state information and the expected state information, and the current state information comprises current emotion information and/or current action information;
the target intelligent robot receives state return information sent by the other intelligent robots, wherein the state return information comprises the adjusted current state information of the other intelligent robots.
The technical scheme adopted for solving the technical problems is as follows: there is provided a robot control system including the following modules:
a sending module and a receiving module, wherein the sending module is configured to send self state information and expected state information to the other intelligent robots in the intelligent robot group, the self state information comprising self emotion information and self action information and the expected state information comprising expected emotion information and expected action information, so that the other intelligent robots adjust their current state information according to the self state information and the expected state information, the current state information comprising current emotion information and/or current action information;
the receiving module is used for receiving state return information sent by the other intelligent robots, and the state return information comprises the current state information adjusted by the other intelligent robots.
The technical scheme adopted for solving the technical problems is as follows: there is provided a robot comprising a memory, a processor, the memory storing a computer program which, when executed, causes the processor to perform the steps of the method as described above.
The technical scheme adopted to solve the technical problem is as follows: a storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method described above.
Compared with the prior art, the intelligent robot group of the invention comprises at least two intelligent robots. The target intelligent robot sends its own state information and expected state information to the other intelligent robots in the group, so that the other robots adjust their current emotion information and current action information accordingly, and the target intelligent robot receives the state return information they send back. Emotional contagion and action interaction between the intelligent robots can thus be achieved automatically, without the user operating each robot separately, which makes intelligent robot control more engaging and improves control efficiency and the user experience.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Wherein:
fig. 1 is a schematic diagram of an embodiment of an application scenario of a robot control method provided by the present invention;
Fig. 2 is a schematic flow chart of a first embodiment of a robot control method provided by the present invention;
FIG. 3 is a flow chart of a second embodiment of a robot control method provided by the present invention;
FIG. 4 is a schematic diagram of a robot control system according to an embodiment of the present invention;
FIG. 5 is a schematic view of a robot according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an embodiment of a storage medium according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1 and 2, fig. 1 is a schematic diagram of an embodiment of an application scenario of the robot control method provided by the present invention, and fig. 2 is a schematic flow chart of a first embodiment of the method. As shown in fig. 1, the intelligent robot group includes a plurality of intelligent robots A, B, C, D, and so on; the model and appearance of each intelligent robot may be the same or different, as set by the user. In this implementation scenario, each intelligent robot is provided with a communication interface through which it can be connected with other intelligent robots, and every two adjacent intelligent robots are connected to each other through their communication interfaces. In other implementations, every two adjacent intelligent robots are connected by wireless (e.g., Bluetooth, Wi-Fi, NFC, infrared) or wired means. For example, A is connected to B, B is connected to C, C is connected to D, and so on.
As shown in fig. 2, the robot control method provided by the invention comprises the following steps:
S101: the target intelligent robot sends the state information and the expected state information of the target intelligent robot to other intelligent robots in the intelligent robot group connected with the target intelligent robot, so that the other intelligent robots adjust the current state information according to the state information and the expected state information of the target intelligent robot.
In a specific implementation scenario, the intelligent robots A and B, connected to each other, are taken as an example: A is the target intelligent robot, and B is another intelligent robot connected with it. A sends B its self state information, comprising self emotion information (at least one of happy, angry, lost, and normal) and/or self action information (e.g., clapping, patting shoulders, high-fiving, crying), and its expected state information, comprising expected emotion information (at least one of happy, angry, lost, and normal) and/or expected action information (e.g., responding to a high-five, comforting). In this implementation scenario, the expected action information is generated by the target intelligent robot from its own action information and is the same as or matched to that action. In other implementation scenarios, the expected action information corresponding to each item of self action information may be preset by the user. For example, A's self emotion information is happy and its self action information is a high-five; the expected emotion information is happy, and the expected action information is returning the high-five.
After receiving the self state information and expected state information sent by A, B obtains its own current state information, which comprises at least one of current emotion information and current action information. In this implementation scenario, an emotion change table is preset; it defines how, after receiving A's self state information, B combines A's self emotion information with its own current emotion information to obtain the adjusted current emotion information. Please refer to Table 1, the emotion change table of this embodiment. In Table 1, the Robot A column is the self emotion information of the target intelligent robot, the Robot B column is the current emotion information of the other intelligent robot, and the status result is the adjusted current emotion information of the other intelligent robot. When the status result lists several candidate emotions, the final result is selected from them at random.
Robot A (self emotion)   Robot B (current emotion)   Status result (selected at random when several are listed)
happy                    happy                       happy
happy                    angry                       happy / angry / normal / lost
happy                    lost                        happy / lost / normal
happy                    normal                      happy / normal
angry                    happy                       angry / happy / normal / lost
angry                    angry                       angry
angry                    lost                        angry / lost / normal
angry                    normal                      angry / normal
lost                     happy                       lost / happy / normal / angry
lost                     angry                       lost / angry / normal
lost                     lost                        lost
lost                     normal                      lost / normal / angry / happy
normal                   happy                       normal / happy
normal                   angry                       normal / angry / lost
normal                   lost                        normal / lost / angry
normal                   normal                      normal

TABLE 1
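The lookup-and-randomize behavior of Table 1 can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the table contents follow Table 1, while the function and dictionary names are assumptions.

```python
import random

# Candidate adjusted emotions for the receiving robot, keyed by
# (sender's self emotion, receiver's current emotion), following Table 1.
EMOTION_TABLE = {
    ("happy", "happy"): ["happy"],
    ("happy", "angry"): ["happy", "angry", "normal", "lost"],
    ("happy", "lost"): ["happy", "lost", "normal"],
    ("happy", "normal"): ["happy", "normal"],
    ("angry", "happy"): ["angry", "happy", "normal", "lost"],
    ("angry", "angry"): ["angry"],
    ("angry", "lost"): ["angry", "lost", "normal"],
    ("angry", "normal"): ["angry", "normal"],
    ("lost", "happy"): ["lost", "happy", "normal", "angry"],
    ("lost", "angry"): ["lost", "angry", "normal"],
    ("lost", "lost"): ["lost"],
    ("lost", "normal"): ["lost", "normal", "angry", "happy"],
    ("normal", "happy"): ["normal", "happy"],
    ("normal", "angry"): ["normal", "angry", "lost"],
    ("normal", "lost"): ["normal", "lost", "angry"],
    ("normal", "normal"): ["normal"],
}

def adjust_emotion(sender_emotion: str, receiver_emotion: str) -> str:
    """Pick the receiver's adjusted emotion at random from the candidates."""
    return random.choice(EMOTION_TABLE[(sender_emotion, receiver_emotion)])
```

For instance, `adjust_emotion("happy", "angry")` returns one of happy, angry, normal, or lost, matching the second row of the table.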
For example, when B's current emotion information is happy and its current action information is no action, B's current emotion information remains happy after receiving A's state information. Since this current emotion information matches the expected emotion information (a happy state), B executes the expected action, takes it as its current action information, and high-fives A.
For example, if B's current emotion information is angry, its final current emotion information after receiving A's state information may be lost. Since this current emotion information does not match the expected emotion information, B may decline to execute the expected action: it may retain its current action information or perform some other action, and does not high-five A.
In another implementation scenario, the target intelligent robot (intelligent robot A) obtains a touch event type input by the user, adjusts its own state information according to the event type, and generates the expected state information. The user may input the touch event type through a mobile terminal connected to the target intelligent robot, or the type may be derived from the strength and location of the user's touching or patting of a certain part of the target intelligent robot. On receiving the touch event type, A adjusts its self state information and generates the expected state information. For example, a touch on A's shoulder triggers a photographing scene: A's self action information becomes striking a pose, and expected state information for photographing is generated and sent to B; B receives the expected state information and photographs A. In this way the user does not need to control A and B separately, and a group of intelligent robots automatically performs the actions the user has preset.
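A minimal sketch of this touch-driven flow is given below, assuming hypothetical event names and state fields; the mappings shown are illustrative, not defined by the patent.

```python
# Hypothetical mapping from touch position to touch event type.
TOUCH_EVENTS = {"shoulder": "photo_scene"}

# Hypothetical mapping from event type to (the target robot's own action,
# the expected action for the other robots).
EVENT_STATES = {"photo_scene": ("strike_pose", "take_photo")}

def handle_touch(position: str):
    """Adjust A's own state and generate expected state for a touch event."""
    event = TOUCH_EVENTS.get(position)
    if event is None:
        return None  # unrecognized touch position: no state change
    self_action, expected_action = EVENT_STATES[event]
    return {"self_state": {"action": self_action},
            "expected_state": {"action": expected_action}}
```

With these mappings, a shoulder touch yields A striking a pose and B being expected to take a photo; any other position leaves the state unchanged.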
S102: the target intelligent robot receives state return information sent by other intelligent robots, wherein the state return information comprises adjusted current state information of the other intelligent robots.
In a specific implementation scenario, B generates the state return information according to its current emotion information and/or current action information, and A receives and parses it. For example, A learns that B is currently happy and has high-fived it, or that B is currently lost and has not high-fived it.
Further, A can adjust its own state information and generate new expected state information according to B's state return information together with its own state information; see Table 1. For example, when the state return information indicates that B is currently happy and has high-fived A, A remains happy, and its action may be a hug or a smile. When the state return information indicates that B is lost and has not high-fived A, A may change from happy to normal and go to comfort B.
Still further, the state return information may also include specified state information generated by the other intelligent robot according to its current emotion information and current action information, where the specified state information comprises specified emotion information and specified action information. For example, if B is lost and has not high-fived A, the generated specified state information includes specified emotion information (normal) and specified action information (comfort B). After receiving the specified state information, A adjusts its emotion information to normal and its action information to comforting B.
Further, after A adjusts its own state information according to the specified state information, it generates new expected state information from the adjusted state and sends both to B, thereby achieving emotional contagion and action interaction between A and B.
In another implementation scenario, the intelligent robots A, B, C, and D are taken as an example. The expected action information comprises an execution duration and/or an execution delay duration, action forwarding information, and a forwarding target robot identification. For example, the expected action is raising a hand, the execution duration is 3 s, the execution delay duration is 2 s, and the action forwarding information includes action forwarding directions, such as B to C and C to D, or alternatively C to B, B to A, and C to D. In other implementations a target robot identification, such as B, C, or D, is also included: the identifications may be added to the expected action information, and each intelligent robot that receives the expected action information determines whether it corresponds to one of the target robot identifications. If so, it performs the action according to the expected action information and forwards the information; if not, it only forwards the information. For example, B receives the expected action information, performs the expected action, and forwards the information, while A receives the expected action information and only forwards it.
In one implementation scenario, the invention may be used to produce a human-wave effect. For example, the expected action is raising a hand, the execution duration is 3 s, the execution delay duration is 2 s, and the action forwarding information specifies the directions A to B, B to C, and C to D. A receives a user instruction containing the expected action (raising a hand), the 3 s execution duration, the 2 s execution delay duration, and the action forwarding information. A performs the hand-raising action according to the instruction and generates expected action information carrying the execution duration, execution delay duration, and action forwarding information from the instruction, then forwards it to B. B receives the expected action information, performs the hand-raising action, and forwards the information to C, and so on, with C and D performing the corresponding actions, until the last robot of the group is reached. A human-wave effect across a group of intelligent robots is thus achieved. Similarly, the user can input the instruction to both A and D to produce waves running from both ends, or input it to C, or to whichever robot sits in the middle of the group, so that the wave spreads from the middle outward to both sides.
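The staggered timing that produces the wave can be sketched as a schedule. The function and parameter names below are illustrative assumptions; the 3 s duration and 2 s delay come from the example above.

```python
def schedule_wave(chain, delay_s, duration_s):
    """Return (robot, start, end) times for a wave spreading along `chain`.

    Each robot begins `delay_s` seconds after its predecessor and holds
    the action for `duration_s` seconds.
    """
    return [(name, i * delay_s, i * delay_s + duration_s)
            for i, name in enumerate(chain)]

# Hand-raising wave: duration 3 s, delay 2 s, direction A -> B -> C -> D.
wave = schedule_wave(["A", "B", "C", "D"], delay_s=2.0, duration_s=3.0)
# A raises its hand at t=0 and lowers it at t=3; B starts at t=2, and so on.
```

A wave from both ends or from the middle outward would simply schedule two chains (e.g., from C toward A and from C toward D) with the same delays.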
In other implementation scenarios, the expected action may also be a dance action set by the user according to the effect to be displayed, in which case the expected action information carries an expected execution time. The user instruction sends the expected action information, including the expected action and the expected execution time, to A; A forwards it to B, B forwards it to C, and so on across the whole group. All the intelligent robots of the group then execute the expected action at the expected execution time, producing a scene in which the whole group dances together.
In other implementation scenarios, the user instruction may first set a separate action for A, after which A, B, C, and D perform the same dance action together. For example, A first performs a solo passage, and when the expected execution time arrives the others join in, producing the effect of A leading the dance.
As can be seen from the above, in this embodiment the intelligent robot group includes at least two intelligent robots. The target intelligent robot sends its own state information and expected state information to the other intelligent robots in the group, so that they adjust their current emotion information and current action information accordingly, and the target intelligent robot receives the state return information they send back. Emotional contagion and action interaction between the intelligent robots can thus be achieved automatically, without separate operations by the user, making intelligent robot control more engaging and improving control efficiency and the user experience.
Referring to fig. 3, fig. 3 is a flowchart of a second embodiment of a robot control method according to the present invention. The robot control method provided by the invention comprises the following steps:
S201: the target intelligent robot sends its own state information and expected state information to other intelligent robots in the intelligent robot group.
In a specific implementation scenario, step S201 is substantially identical to step S101 in the first embodiment of the robot control method provided by the present invention, and will not be described herein.
S202: judging whether this is the first time the target intelligent robot transmits its own state information after establishing a connection with the other intelligent robots; if so, executing step S203, and if not, executing step S205.
In a specific implementation scenario, it is determined whether the target intelligent robot is transmitting its own state information for the first time after establishing a connection with another intelligent robot; here, whether this is the first transmission since A and B established their connection. Every information interaction after the connection between A and B is established can be recorded, including transmissions from A to B and from B to A; if no interaction record currently exists, it can be concluded that this is the first transmission of self state information since the connection was established.
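One way to implement this first-transmission check is a per-connection interaction log, as sketched below; the class and method names are assumptions, not from the patent.

```python
class ConnectionLog:
    """Records every information interaction on one A-B connection."""

    def __init__(self):
        # (sender, receiver, payload) tuples, covering both directions.
        self.records = []

    def is_first_transmission(self) -> bool:
        # No recorded interaction means the next send is the first
        # since the connection was established.
        return not self.records

    def record(self, sender: str, receiver: str, payload: dict) -> None:
        self.records.append((sender, receiver, payload))
```

Before sending, A checks `is_first_transmission()`; if it returns True, A first waits for B's planning state information (step S203) rather than sending directly.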
S203: receiving planning state information sent by other intelligent robots, wherein the planning state information comprises current emotion information of the other intelligent robots; and modifying at least one of the self emotion information and the self action information according to the planning state information.
In a specific implementation scenario, the current transmission is the first since the connection between A and B was established, so an initial exchange is required between them: A first receives the planning state information sent by B, which comprises B's current emotion information, and modifies at least one of its self emotion information and self action information according to Table 1, combining B's current emotion information with its own self emotion information.
For example, A's initial self emotion information is happy and B's current emotion information is normal; after obtaining B's planning state information, A adjusts its self emotion information to normal. Further, A's initial self action information is a high-five, and when the self emotion information is adjusted, the self action information is also modified to waving.
S204: and generating expected state information according to the modified self emotion information and/or the self action information, and sending the modified self state information and the expected state information.
In a specific implementation scenario, the expected state information is generated from the modified self emotion information and/or self action information. For example, A's modified self emotion information is normal and its self action information is waving, so the generated expected state information is that B should be in a normal state and perform a waving action. A then sends the modified self state information and the expected state information to B, i.e., that A's self emotion information is normal, its self action information is waving, and B is expected to be in a normal state and wave.
S205: the target intelligent robot receives state return information sent by other intelligent robots, wherein the state return information comprises current state information adjusted by the other intelligent robots.
In a specific implementation scenario, step S205 is substantially identical to step S102 in the first embodiment of the robot control method provided by the present invention and will not be described again here.
As described above, in this embodiment, when the target intelligent robot transmits its own state information for the first time after establishing a connection with another intelligent robot, it receives the planning state information sent by that robot and modifies at least one of its self emotion information and self action information accordingly. A real emotional-contagion scenario can thus be better simulated, improving the engagement and realism of the control.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of a robot control system according to the present invention. The robot control system 10 provided by the invention is applied to the intelligent robot group shown in fig. 1, and comprises a sending module 11 and a receiving module 12. The sending module 11 is configured to send self-state information and expected state information to other intelligent robots in the intelligent robot group, where the self-state information includes self-emotion information and self-action information, and the expected state information includes expected emotion information and expected action information, so that the other intelligent robots adjust current state information according to the self-state information and the expected state information, and the current state information includes current emotion information and/or current action information. The receiving module 12 is configured to receive status return information sent by other intelligent robots, where the status return information includes adjusted current status information of the other intelligent robots.
The receiving module 12 is further configured to receive planning state information sent by other intelligent robots when the target intelligent robot first sends its own state information after establishing a connection with the other intelligent robots, where the planning state information includes current emotion information of the other intelligent robots. The sending module 11 is further configured to modify at least one of self-emotion information and self-action information according to the planning state information, generate expected state information according to the modified self-emotion information and/or self-action information, and send the modified self-state information and the expected state information.
The expected action information includes at least one of an execution duration, an execution delay duration, and an expected execution time, as well as action forwarding information and/or a target robot identification. After receiving the expected action information, the other intelligent robots forward it to the intelligent robots corresponding to the target robot identification, or forward it according to the indication content of the action forwarding information.
The action forwarding information includes an action forwarding direction, and the other intelligent robots send the expected action information to the intelligent robots located in that direction.
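The two forwarding paths above can be sketched as a small routing function. This is an illustrative sketch, not patent text: the key names (`target_robot_id`, `forward_direction`) and the neighbor-lookup dictionaries are assumptions.

```python
def pick_forward_target(expected_action: dict,
                        neighbors_by_id: dict,
                        neighbors_by_direction: dict):
    """Return the neighbor that should receive the forwarded expected
    action information, or None if the message carries no forwarding hint."""
    # Path 1: a target robot identification names the recipient directly.
    if "target_robot_id" in expected_action:
        return neighbors_by_id.get(expected_action["target_robot_id"])
    # Path 2: action forwarding information gives a direction, e.g.
    # "left" or "right", and the message is relayed to the neighbor
    # lying in that direction.
    if "forward_direction" in expected_action:
        return neighbors_by_direction.get(expected_action["forward_direction"])
    return None
```

Chained forwarding along a direction (each robot relaying to its next neighbor) would propagate an action through the whole group, which is what makes group behaviors such as a dance wave possible.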
The sending module 11 is further configured to obtain a touch event type input by a user, adjust self state information according to the event type, and generate expected state information.
The sending module 11 is further configured to obtain a touch position of the user on the target intelligent robot, and obtain a touch event type corresponding to the touch position.
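The touch-driven flow — map a touch position to a touch event type, then adjust the self state accordingly — can be sketched as follows. The regions, event names, and emotion rules here are assumptions for illustration, not taken from the patent.

```python
# Hypothetical mapping from touch position to touch event type.
TOUCH_REGIONS = {
    "head": "pat",
    "back": "stroke",
    "belly": "tickle",
}

def touch_event_type(touch_position: str) -> str:
    """Look up the touch event type corresponding to a touch position."""
    return TOUCH_REGIONS.get(touch_position, "unknown")

def on_touch(state: dict, touch_position: str):
    """Adjust the self state information according to the event type."""
    event = touch_event_type(touch_position)
    if event == "pat":        # illustrative rule
        state["emotion"] = "happy"
    elif event == "tickle":   # illustrative rule
        state["emotion"] = "excited"
    return state, event
```

The adjusted state would then seed the expected state information that the sending module broadcasts to the rest of the group.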
The receiving module 12 is further configured to adjust the self emotion information and/or the self action information according to the state return information and the self state information.
The state return information further includes specified state information generated by the other intelligent robots according to the current emotion information and the current action information, where the specified state information includes specified emotion information and specified action information. The receiving module 12 is further configured to adjust the self state information according to the specified state information.
The sending module 11 is further configured to generate the expected action information according to the self action information, where the expected action information is the same action as, or an action matching, the self action information.
After receiving the expected emotion information, if the current emotion information of another intelligent robot does not match the expected emotion information, that robot keeps its current action information or executes another action, and does not execute the expected action information.
The expected action information includes dance actions.
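The emotion gate described above can be sketched as a single decision function: the expected action (e.g. a dance action) is performed only when the receiver's current emotion matches the expected emotion; on a mismatch the robot keeps its current action. Function and parameter names are illustrative assumptions.

```python
def decide_action(current_emotion, current_action,
                  expected_emotion, expected_action):
    """Return the action the receiving robot should perform."""
    if expected_emotion is None or current_emotion == expected_emotion:
        return expected_action   # emotions match: follow the expectation
    return current_action        # mismatch: keep the current action
```

This gate is what makes the group behavior emotion-dependent: only robots whose emotion state already matches the expectation join a propagated dance.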
As can be seen from the above description, in this embodiment the intelligent robot group includes at least two intelligent robots. The target intelligent robot sends its own state information and expected state information to the other intelligent robots in the group, so that they adjust their current emotion information and current action information accordingly, and the target intelligent robot then receives the state return information they send back. Emotion infection and action interaction between the intelligent robots can thus be achieved automatically, without separate operations by the user, which makes intelligent robot control more engaging and improves control efficiency and user experience.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of a robot according to the present invention. The robot 20 includes a processor 21 and a memory 22, the processor 21 being coupled to the memory 22. The memory 22 stores a computer program which is executed by the processor 21 in operation to implement the methods shown in figs. 2 and 3. The detailed methods are described above and are not repeated here.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a storage medium according to an embodiment of the invention. The storage medium 30 stores at least one computer program 31, and the computer program 31 is executed by a processor to implement the methods shown in figs. 2 and 3; the detailed methods are described above and are not repeated here. In one embodiment, the computer-readable storage medium 30 may be a memory chip, a hard disk, or a removable hard disk in a terminal, or another readable and writable storage device such as a flash drive or an optical disc, and may also be a server, etc.
Those skilled in the art will appreciate that all or part of the processes of the above method embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the method embodiments above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The foregoing examples illustrate only a few embodiments of the application; their description is relatively detailed but is not to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, and these all fall within the scope of protection of the application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (10)

1. A robot control method, characterized in that the method is applied to an intelligent robot group, the intelligent robot group comprises at least two intelligent robots, and any two adjacent intelligent robots are connected with each other;
The robot control method comprises the following steps:
the target intelligent robot sends self state information and expected state information to other intelligent robots in the intelligent robot group, wherein the self state information comprises self emotion information and/or self action information, the expected state information comprises expected emotion information and/or expected action information, so that the other intelligent robots adjust current state information according to the self state information and the expected state information, and the current state information comprises current emotion information and/or current action information;
The target intelligent robot receives state return information sent by the other intelligent robots, and adjusts self state information and generates expected state information according to the state return information and the self state information; the state return information comprises the adjusted current state information of the other intelligent robots.
2. The robot control method according to claim 1, wherein before the step of the target intelligent robot sending the self state information and the expected state information to the other intelligent robots, the method comprises:
when the target intelligent robot sends the self state information for the first time after establishing a connection with the other intelligent robots, receiving planning state information sent by the other intelligent robots, wherein the planning state information comprises current emotion information of the other intelligent robots;
Modifying at least one of the self emotion information and the self action information according to the planning state information;
the step of sending the self state information and the expected state information to the other intelligent robots by the target intelligent robot comprises the following steps:
generating expected state information according to the modified self emotion information and/or the self action information, and sending the modified self state information and the expected state information.
3. The robot control method according to claim 1, wherein the expected action information includes at least one of an execution duration, an execution delay duration, and an expected execution time, and action forwarding information including an action forwarding direction and/or a target robot identification;
and after receiving the expected action information, the other intelligent robots forward the expected action information to the other intelligent robots corresponding to the target robot identification, or send the expected action information to the other intelligent robots located in the action forwarding direction.
4. The robot control method according to claim 1, wherein before the step of the target intelligent robot sending the self state information and the expected state information to the other intelligent robots in the intelligent robot group, the method comprises:
acquiring a touch position of a user on the target intelligent robot, acquiring a touch event type corresponding to the touch position, adjusting the self state information according to the touch event type, and generating the expected state information.
5. The robot control method according to claim 1, wherein after the step of the target intelligent robot receiving the state return information sent by the other intelligent robots, the method comprises:
adjusting the self emotion information and/or the self action information according to the state return information and the self state information;
The step of sending the self state information and the expected state information to other intelligent robots in the intelligent robot group by the target intelligent robot comprises the following steps:
and generating the expected action information according to the self action information, wherein the expected action information is the same action as, or an action matching, the self action information.
6. The robot control method according to claim 5, wherein the state return information further includes specified state information generated by the other intelligent robot according to the current emotion information and the current action information, the specified state information including specified emotion information and specified action information;
the step of the target intelligent robot receiving the state return information sent by the other intelligent robots further comprises:
and adjusting the self state information according to the specified state information.
7. The robot control method according to claim 1, wherein each intelligent robot in the intelligent robot group is provided with a communication interface, and any two adjacent intelligent robots are connected through the communication interfaces.
8. A robotic control system, the robotic control system comprising the following modules:
a sending module and a receiving module, wherein the sending module is configured to send self state information and expected state information to other intelligent robots in an intelligent robot group, the self state information comprises self emotion information and self action information, and the expected state information comprises expected emotion information and expected action information, so that the other intelligent robots adjust current state information according to the self state information and the expected state information, and the current state information comprises current emotion information and/or current action information;
the receiving module is configured to receive state return information sent by the other intelligent robots, and the target intelligent robot adjusts its self state information and generates expected state information according to the state return information and the self state information; wherein the state return information comprises the adjusted current state information of the other intelligent robots.
9. A robot, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, performs the steps of the method according to any one of claims 1 to 7.
10. A storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method according to any one of claims 1 to 7.
CN202210068585.5A 2022-01-20 2022-01-20 Robot, control method and system thereof and storage device Active CN114393582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210068585.5A CN114393582B (en) 2022-01-20 2022-01-20 Robot, control method and system thereof and storage device


Publications (2)

Publication Number Publication Date
CN114393582A CN114393582A (en) 2022-04-26
CN114393582B true CN114393582B (en) 2024-06-25

Family

ID=81233433

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210068585.5A Active CN114393582B (en) 2022-01-20 2022-01-20 Robot, control method and system thereof and storage device

Country Status (1)

Country Link
CN (1) CN114393582B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113759942A (en) * 2021-09-23 2021-12-07 哈尔滨工程大学 Multi-intelligent-robot underwater cooperative capture control system and method

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN109635616B (en) * 2017-10-09 2022-12-27 阿里巴巴集团控股有限公司 Interaction method and device
CN212586735U (en) * 2020-08-25 2021-02-23 广州中国科学院先进技术研究所 Multi-robot cooperative control system for carrying and boxing



Similar Documents

Publication Publication Date Title
KR102239589B1 (en) Blockchain data processing method and device
KR102141771B1 (en) Multi-blockchain network data processing method, device and server
US10714090B2 (en) Virtual reality speech control method and apparatus
CN107392783B (en) Social contact method and device based on virtual reality
US11452941B2 (en) Emoji-based communications derived from facial features during game play
CN109521927B (en) Robot interaction method and equipment
CN109582463A (en) Resource allocation method, device, terminal and storage medium
US20200285306A1 (en) Information interaction method and device, storage medium and electronic device
US11442500B2 (en) Display apparatus and method for controlling the display apparatus
KR20080022810A (en) Software robot apparatus
CN106325228B (en) Method and device for generating control data of robot
US11267121B2 (en) Conversation output system, conversation output method, and non-transitory recording medium
WO2017071385A1 (en) Method and device for controlling target object in virtual reality scenario
JPWO2015155977A1 (en) Cooperation system, apparatus, method, and recording medium
JP2010538849A (en) Robots with interchangeable behavior coding computer programs
US20220247973A1 (en) Method for enabling synthetic autopilot video functions and for publishing a synthetic video feed as a virtual camera during a video call
CN114393582B (en) Robot, control method and system thereof and storage device
CN110505526A (en) A kind of method of controlling operation, system and the storage medium of virtual spectators
CN108055655A (en) A kind of method, apparatus, equipment and the storage medium of speech ciphering equipment plusing good friend
KR102120936B1 (en) System for providing customized character doll including smart phone
CN106649752A (en) Answer acquisition method and device
US10911823B2 (en) Media information processing method, apparatus and system
US20220410368A1 (en) Robot and method for operating the same
KR102405227B1 (en) Online-offline fusion service providing method and service providing apparatus
CN114237402B (en) Virtual reality space movement control system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant