CN114578823A - Robot motion bionic planning method, electronic equipment and storage medium - Google Patents

Robot motion bionic planning method, electronic equipment and storage medium

Info

Publication number
CN114578823A
CN114578823A (application CN202210210958.8A)
Authority
CN
China
Prior art keywords
motor
motion
sub-action
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210210958.8A
Other languages
Chinese (zh)
Inventor
傅欢欢
肖志光
陈盛军
王俊宝
Current Assignee
Shenzhen Pengxing Intelligent Research Co Ltd
Original Assignee
Shenzhen Pengxing Intelligent Research Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Pengxing Intelligent Research Co Ltd filed Critical Shenzhen Pengxing Intelligent Research Co Ltd
Priority to CN202210210958.8A
Publication of CN114578823A
Legal status: Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Toys (AREA)

Abstract

The application provides a robot motion bionic planning method, electronic equipment and a storage medium. The method comprises the following steps: decomposing the action to be executed by the robot into at least two sub-actions; acquiring motion parameters of a preset part of the robot in a motion direction; controlling a motor to drive the preset part to move in the motion direction based on the motion parameters, so as to control the preset part to execute the sub-action; and detecting the progress of the preset part in executing the sub-action, and controlling the preset part to execute the next sub-action based on that progress, until the bionic action to be executed is finished. By decomposing a complicated bionic action into simple sub-actions and controlling the robot part to execute each sub-action one by one, the robot is controlled accurately and efficiently to execute the bionic action, making it convenient for the robot to interact with a user through bionic actions.

Description

Robot motion bionic planning method, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of robotics, and in particular, to a method for biomimetic planning of robot actions, an electronic device, and a storage medium.
Background
With the development of Artificial Intelligence (AI) technology, robots are increasingly widely used in daily production and life; common robots include mechanical arms, wheeled unmanned vehicles, unmanned forklifts, and the like. However, these robots are generally used only for processing or transportation; they cannot perform bionic motions with robot parts such as head mechanisms or foot mechanisms, which makes it inconvenient for the robot to interact with a user.
Disclosure of Invention
In view of the above, it is desirable to provide a method, an electronic device and a storage medium for biomimetic planning of robot actions to solve the technical problem that a robot cannot execute biomimetic actions.
The application provides a bionic planning method for robot actions, which comprises the following steps:
decomposing the motion to be executed by the robot into at least two sub-motions, wherein each sub-motion has a single motion direction;
acquiring motion parameters of a preset part of the robot in the motion direction;
controlling a motor to drive the preset part to move in the motion direction based on the motion parameters, so as to control the preset part to execute the sub-action;
and detecting the progress of the preset part in executing the sub-action, and controlling the preset part to execute the next sub-action based on that progress, until the action to be executed is completed.
Optionally, if the motion direction of the sub-action comprises both a pitch direction and a rotation direction, the desired angular velocity in the pitch direction is

ω_pitchset = ω_yawset · Δθ_pitch / Δθ_yaw

where ω_yawset is the desired angular velocity in the rotation direction, Δθ_pitch is the angle of movement in the pitch direction, and Δθ_yaw is the angle of movement in the rotation direction.
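As a hedged illustration of this coupling, the following Python sketch (function and variable names are illustrative, not from the patent) scales the pitch axis's desired angular velocity by the ratio of the two travel angles, so that both axes of a two-degree-of-freedom sub-action start and finish together:

```python
def desired_pitch_velocity(omega_yaw_set: float,
                           delta_theta_pitch: float,
                           delta_theta_yaw: float) -> float:
    """Desired pitch angular velocity for a two-degree-of-freedom
    sub-action: scale the yaw (rotation) velocity by the ratio of the
    pitch travel angle to the yaw travel angle, so that both axes
    complete their travel in the same time."""
    if delta_theta_yaw == 0:
        raise ValueError("yaw travel must be non-zero for this coupling")
    return omega_yaw_set * delta_theta_pitch / delta_theta_yaw

# Example: yaw turns 30 deg at 10 deg/s (3 s); pitch must cover 15 deg,
# so it should run at 5 deg/s to finish at the same moment.
```

With the example numbers, `desired_pitch_velocity(10.0, 15.0, 30.0)` yields 5.0 deg/s.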
Optionally, the detecting the progress of the preset part in executing the sub-action, and controlling the preset part to execute the next sub-action based on that progress, comprises:
judging, every communication period, whether the predicted motion angle of the motor is greater than or equal to the target motion angle;
and if the predicted motion angle is greater than or equal to the target motion angle, determining that the preset part has completed the sub-action, and sending a reversing instruction to control the motor to drive the preset part to execute the next sub-action.
Optionally, the obtaining of the predicted motion angle of the motor comprises:
acquiring the current motion angle and the current angular velocity of the motor;
acquiring a prediction step length according to the communication period, wherein the prediction step length is the number of communication periods needed for the motor to decelerate from the current angular velocity to zero at a preset acceleration;
and calculating the predicted motion angle of the motor according to the current motion angle and the prediction step length.
Optionally, the predicted motion angle of the motor is calculated according to the following formulas:

θ_predict = θ_current + Σ_{n=1}^{i} ω_n · T

ω_n = sign(ω_{n-1}) · max(fabs(ω_{n-1}) - α·T, 0), n = 1, 2, 3, …, i

where θ_current is the current motion angle of the motor, i is the prediction step length, ω_n is the angular velocity of the motor in the nth communication period after the current prediction time, and ω_{n-1} is the angular velocity of the motor in the communication period preceding the nth one; when n = 1, ω_{n-1} = ω_0, the angular velocity fed back by the motor in the last communication period before the current prediction time. T is the communication period, ω_set is the desired angular velocity of the motor, α is the angular acceleration of the motor, and fabs(ω_n) is the absolute value of the angular velocity of the motor in the nth communication period after the current time. The time required for the motor to decelerate from the current angular velocity to zero at the preset angular acceleration is

t = fabs(ω_0) / α

and the prediction step length i is the smallest integer greater than or equal to t/T.
Optionally, the detecting the progress of the preset part in executing the sub-action, and controlling the preset part to execute the next sub-action based on that progress, comprises:
judging, every estimation period, whether the preset part has completed the sub-action;
and if the difference between the predicted motion angle and the target motion angle is smaller than a threshold, or the predicted motion angle is greater than or equal to the target motion angle, determining that the preset part has completed the sub-action, and sending a reversing instruction to control the motor to drive the preset part to execute the next sub-action.
Optionally, the obtaining of the predicted motion angle of the motor comprises:
acquiring the current motion angle and the current angular velocity of the motor;
acquiring a prediction step length according to the estimation period, wherein the prediction step length is the number of estimation periods needed for the motor to decelerate from the current angular velocity to zero at a preset acceleration;
and calculating the predicted motion angle of the motor according to the current motion angle and the prediction step length.
Optionally, the predicted motion angle of the motor is calculated according to the following formulas:

θ_i = θ_{i-1} + ω_i · T1

ω_i = sign(ω_{i-1}) · max(fabs(ω_{i-1}) - α·T1, 0)

where θ_i is the predicted motion angle in the current estimation period, θ_{i-1} is the predicted motion angle of the previous estimation period, and ω_i is the angular velocity of the motor in the current estimation period; when i = 1, ω_{i-1} = ω_0, the angular velocity fed back by the motor in the last estimation period before the current estimation time. T1 is the estimation period and ω_set is the desired angular velocity of the motor. The time required for the motor to decelerate from the current angular velocity to zero at the preset angular acceleration is

t1 = fabs(ω_0) / α

and the prediction step length i is the smallest integer greater than or equal to t1/T1, where α is the angular acceleration of the motor.
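For intuition, the per-period recursion above approximates the continuous-time braking angle ω_0²/(2α). A hedged closed-form sketch (not part of the patent) makes this explicit:

```python
import math

def braking_angle_closed_form(theta_current: float, omega_0: float,
                              alpha: float) -> float:
    """Continuous-time counterpart of the per-period recursion: the angle
    swept while decelerating uniformly from omega_0 to zero is
    omega_0**2 / (2 * alpha), signed by the direction of motion."""
    if omega_0 == 0.0:
        return theta_current
    return theta_current + math.copysign(omega_0 * omega_0 / (2.0 * alpha),
                                         omega_0)
```

The discrete simulation gives a slightly smaller magnitude because it reduces the velocity at the start of each period; the difference shrinks as the estimation period T1 shrinks.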
The present application further provides an electronic device, the electronic device including:
a processor; and
a memory, in which a plurality of program modules are stored, the program modules being loaded by the processor to execute the robot motion bionic planning method described above.
The application also provides a computer-readable storage medium having at least one computer instruction stored thereon, the instruction being loaded by a processor to execute the robot motion bionic planning method described above.
According to the robot motion bionic planning method, the electronic equipment and the storage medium, a complex bionic action to be executed by the robot can be decomposed into simple sub-actions, and the motor is controlled to drive the robot part to execute each sub-action one by one in a preset motion direction. In this way, the robot is controlled accurately and efficiently to execute the bionic action, so that it can conveniently interact with a user through bionic actions, improving the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a schematic diagram of an application environment architecture of a robot motion bionic planning method according to a preferred embodiment of the present application.
Fig. 2 is a block diagram of a robot according to a preferred embodiment of the present invention.
Fig. 3 is a perspective view of a robot according to a preferred embodiment of the present invention.
Fig. 4 is a flowchart of a bionic planning method for robot actions according to a preferred embodiment of the present application.
Fig. 5A-5D are schematic diagrams of sub-motions of a bionic robot motion according to a preferred embodiment of the present application.
Fig. 6 is a flowchart for acquiring a motion parameter of a preset portion of a robot in a motion direction according to a preferred embodiment of the present application.
FIG. 7 is a flowchart illustrating a control of the default portion to perform sub-actions according to a preferred embodiment of the present application.
Fig. 8 is a flowchart illustrating a method for controlling a predetermined portion to execute a next sub-action according to an embodiment of the present application.
Fig. 9 is a flowchart for obtaining a predicted movement angle of a motor according to an embodiment of the present application.
Fig. 10 is a flowchart illustrating a preset portion being controlled to execute a next sub-action according to another embodiment of the present application.
Fig. 11 is a flowchart for obtaining a predicted movement angle of a motor according to another embodiment of the present application.
Fig. 12 is a schematic structural diagram of an electronic device according to a preferred embodiment of the present application.
Description of the main elements
Electronic equipment 1
Processor 10
Memory 20
Computer program 30
Robot 100
Mechanical unit 101
Driving plate 1011
Motor 1012
Mechanical structure 1013
Fuselage main body 1014
Leg 1015
Foot 1016
Head structure 1017
Tail structure 1018
Carrying structure 1019
Saddle structure 1020
Camera structure 1021
Communication unit 102
Sensing unit 103
Interface unit 104
Storage unit 105
Display unit 106
Display panel 1061
Input unit 107
Touch panel 1071
Input device 1072
Touch detection device 1073
Touch controller 1074
Control module 110
Power supply 111
The following specific examples will further illustrate the application in conjunction with the above figures.
Detailed Description of Embodiments of the Invention
In order that the above objects, features and advantages of the present application can be more clearly understood, a detailed description of the present application will be given below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present application, and the described embodiments are merely a subset of the embodiments of the present application and are not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. The terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
In the following description, suffixes such as "module", "component", or "unit" are used only to facilitate the description of the present invention and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
Fig. 1 is a schematic diagram of an application environment architecture of the robot motion bionic planning method according to a preferred embodiment of the present application.
The robot motion bionic planning method is applied to the electronic device 1, and the electronic device 1 may establish a communication connection with at least one robot 100 through a network. The network may be a wired network or a wireless network, such as radio, Wireless Fidelity (WiFi), cellular, satellite, or broadcast. The cellular network may be a 4G network or a 5G network.
The electronic device 1 may be an electronic device installed with a motion planning program, such as a smart phone, a personal computer, or a server, where the server may be a single server, a cloud server, a server cluster, or the like.
Fig. 2 is a schematic diagram of a hardware structure of a robot 100 according to an embodiment of the present invention. The robot 100 may be a multi-legged robot; in the embodiment shown in fig. 2, the multi-legged robot 100 includes a mechanical unit 101, a communication unit 102, a sensing unit 103, an interface unit 104, a storage unit 105, a display unit 106, an input unit 107, a control module 110, and a power supply 111. The various components of the multi-legged robot 100 can be connected in any manner, including wired or wireless connections. Those skilled in the art will appreciate that the specific structure of the multi-legged robot 100 shown in fig. 2 does not constitute a limitation: the multi-legged robot 100 may include more or fewer components than those shown, some components do not belong to the essential constitution of the multi-legged robot 100, and some components may be omitted or combined as necessary without changing the essence of the invention.
The following describes the components of the multi-legged robot 100 in detail with reference to fig. 2 and 3:
the mechanical unit 101 is the hardware of the multi-legged robot 100. As shown in fig. 2, the machine unit 101 may include a drive plate 1011, a motor 1012, a machine structure 1013, as shown in fig. 3, the machine structure 1013 may include a body 1014, extendable legs 1015, feet 1016, and in other embodiments, the machine structure 1013 may further include extendable robotic arms (not shown), a rotatable head structure 1017, a swingable tail structure 1018, a load structure 1019, a saddle structure 1020, a camera structure 1021, and the like. It should be noted that each component module of the mechanical unit 101 may be one or multiple, and may be configured according to specific situations, for example, the number of the legs 1015 may be 4, each leg 1015 may be configured with 3 motors 1012, and the number of the corresponding motors 1012 is 12.
The communication unit 102 can be used for receiving and transmitting signals, and can also communicate with other devices through a network, for example, receive command information sent by a remote controller or other multi-legged robots 100 to move in a specific direction at a specific speed according to a specific gait, and transmit the command information to the control module 110 for processing. The communication unit 102 includes, for example, a WiFi module, a 4G module, a 5G module, a bluetooth module, an infrared module, etc.
The sensing unit 103 is used for acquiring information about the environment around the multi-legged robot 100 and monitoring parameter data of the components inside it, and sends this information to the control module 110. The sensing unit 103 includes various sensors, such as sensors for acquiring surrounding environment information: laser radar (for long-range object detection, distance determination, and/or velocity determination), millimeter-wave radar (for short-range object detection, distance determination, and/or velocity determination), a camera, an infrared camera, a Global Navigation Satellite System (GNSS), and the like; and sensors for monitoring the components inside the multi-legged robot 100: an Inertial Measurement Unit (IMU) (for measuring velocity, acceleration and angular velocity), sole sensors (for monitoring sole impact point position, sole attitude, and the magnitude and direction of the ground contact force), and temperature sensors (for detecting component temperature). Other sensors that can be configured in the multi-legged robot 100, such as load sensors, touch sensors, motor angle sensors and torque sensors, are not detailed here.
The interface unit 104 can be used to receive inputs from external devices (e.g., data information, power, etc.) and transmit them to one or more components within the multi-legged robot 100, or to output data or power to external devices. The interface unit 104 may include a power port, a data port (e.g., a USB port), a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, and the like.
The storage unit 105 is used to store software programs and various data. The storage unit 105 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system program, a motion control program, an application program (such as a text editor), and the like; the data storage area may store data generated by the multi-legged robot 100 in use (such as various sensing data acquired by the sensing unit 103 and log file data). In addition, the storage unit 105 may include high-speed random access memory as well as non-volatile memory, such as disk memory, flash memory, or other non-volatile solid-state memory.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The input unit 107 may be used to receive input numeric or character information. Specifically, the input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, may collect a user's touch operations (such as operations on or near the touch panel 1071 using a palm, a finger, or a suitable accessory) and drive a corresponding connection device according to a preset program. The touch panel 1071 may include a touch detection device 1073 and a touch controller 1074. The touch detection device 1073 detects the touch position of the user, detects the signal caused by the touch operation, and transmits the signal to the touch controller 1074; the touch controller 1074 receives the touch information from the touch detection device 1073, converts it into touch point coordinates, transmits the coordinates to the control module 110, and can receive and execute commands from the control module 110. In addition to the touch panel 1071, the input unit 107 may include other input devices 1072, which may include, but are not limited to, a remote control joystick and the like.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the control module 110 to determine the type of the touch event, and then the control module 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 2, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions, respectively, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions, which is not limited herein.
The control module 110 is a control center of the multi-legged robot 100, connects the respective components of the entire multi-legged robot 100 using various interfaces and lines, and performs overall control of the multi-legged robot 100 by operating or executing software programs stored in the storage unit 105 and calling up data stored in the storage unit 105.
The power supply 111 is used to supply power to various components, and the power supply 111 may include a battery and a power supply control board for controlling functions such as battery charging, discharging, and power consumption management. In the embodiment shown in fig. 2, the power source 111 is electrically connected to the control module 110, and in other embodiments, the power source 111 may be electrically connected to the sensing unit 103 (e.g., a camera, a radar, a sound box, etc.) and the motor 1012 respectively. It should be noted that each component may be connected to a different power source 111 or powered by the same power source 111.
On the basis of the above embodiments, in some embodiments a terminal device can establish a communication connection with the multi-legged robot 100. The terminal device can transmit instruction information to the multi-legged robot 100; the multi-legged robot 100 receives the instruction information through the communication unit 102 and transmits it to the control module 110, so that the control module 110 can process a target velocity value according to the instruction information. Terminal devices include, but are not limited to: mobile phones, tablet computers, servers, personal computers, wearable smart devices, and other electrical equipment with an image capture function.
The instruction information may be determined according to preset conditions. In one embodiment, the multi-legged robot 100 can include the sensing unit 103, which can generate instruction information according to the current environment in which the multi-legged robot 100 is located. The control module 110 can determine, according to the instruction information, whether the current velocity value of the multi-legged robot 100 satisfies the corresponding preset condition. If it does, the current velocity value and current gait of the multi-legged robot 100 are maintained; if it does not, a target velocity value and a corresponding target gait are determined according to the preset conditions, and the multi-legged robot 100 is controlled to move at the target velocity value with the target gait. The environmental sensors may include temperature sensors, air pressure sensors, visual sensors, and sound sensors, and the instruction information may correspondingly include temperature information, air pressure information, image information, and sound information. Communication between the environmental sensors and the control module 110 may be wired or wireless; wireless modes include, but are not limited to: wireless network, mobile communication network (3G, 4G, 5G, etc.), Bluetooth, and infrared.
Fig. 4 is a flowchart of a bionic robot motion planning method according to a preferred embodiment of the present application. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs.
In an embodiment of the present application, the robot motion bionic planning method is applied to the body of the robot 100 and executed by the main control unit of the robot 100.
S401, deciding the action to be executed by the robot.
In an embodiment, the action to be performed by the robot is an action performed by a preset part of the robot, optionally a head, a foot, or another movable part. Deciding the action to be executed by the robot includes: deciding the action according to the current state of the robot or according to an interaction result. The action to be performed by the robot may be a bionic action, i.e. a motion that mimics a human.
For example, when the current state of the robot is a low-battery state, the action to be executed is decided to be a low-battery prompt action. Referring to fig. 5A, the low-battery prompt action is a head motion of the robot whose motion trajectory is a triangle.
For example, when the current state of the robot is a happy state, the action to be executed is decided to be a happy interaction action. Referring to fig. 5B, the happy interaction action is a head motion of the robot whose motion trajectory is V-shaped.
For example, when the interaction result of the robot is affirmative, the action to be executed is decided to be an affirmative interaction action. Referring to fig. 5C, the affirmative interaction action is a head motion of the robot whose motion trajectory is a vertical line.
For example, when the interaction result of the robot is negative, the action to be executed is decided to be a negative interaction action. Referring to fig. 5D, the negative interaction action is a head motion of the robot whose motion trajectory is a horizontal line.
S402, decomposing the action to be executed by the robot into at least two sub-actions.
In an embodiment, each sub-action has a single movement direction, and the at least two sub-actions are arranged in sequence according to execution time. The decomposition of the action to be performed by the robot into at least two sub-actions comprises: the action to be performed by the robot is decomposed into at least two sub-actions in a single direction of motion.
For example, if the action to be performed is a low-power prompt action, the action is decomposed into a sub-action a moving to the lower right, a sub-action b translating to the left, and a sub-action c moving to the upper right.
For example, if the action to be performed is the happy interaction action, it is decomposed into a sub-action a of moving downward, a sub-action b of returning to the original position, a sub-action c of moving upward, and a sub-action d of returning to the original position a second time.
For example, if the action to be performed is an affirmative interactive action, the action is decomposed into a sub-action a of moving downward and a sub-action b of returning to the original position.
For example, if the action to be executed is a negative interactive action, the action is decomposed into a sub-action a of translating to the right and a sub-action b of returning to the original position.
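The four decompositions above can be written down as ordered lists of single-direction sub-actions. The following Python sketch uses illustrative direction labels; the patent does not prescribe a data format:

```python
# Each bionic action maps to its ordered single-direction sub-actions.
ACTIONS = {
    "low_battery_prompt":      ["down_right", "left", "up_right"],  # triangle
    "happy_interaction":       ["down", "home", "up", "home"],      # V shape
    "affirmative_interaction": ["down", "home"],                    # nod
    "negative_interaction":    ["right", "home"],                   # shake
}

def decompose(action_name: str) -> list[str]:
    """Return the sub-actions of a named bionic action in execution order."""
    return ACTIONS[action_name]
```

The controller would then feed these sub-actions one by one into the parameter-acquisition and motor-control steps that follow.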
And S403, acquiring the motion parameters of the preset part of the robot in the motion direction.
Referring to fig. 6, in an embodiment, the moving direction includes a pitch direction and/or a rotation direction. The method for acquiring the motion parameters of the preset part of the robot in the motion direction comprises the following steps:
s4031, determine whether the sub-motion is a single-degree-of-freedom motion or a two-degree-of-freedom motion.
In an embodiment, the motion parameter of the sub-action comprises a motion angle. Judging whether the sub-action is a single-degree-of-freedom action or a double-degree-of-freedom action comprises: if the sub-motion only comprises the motion in the pitching direction or only comprises the motion in the rotating direction, determining that the sub-motion is the single-degree-of-freedom motion; and if the sub-motion comprises the motion in the pitching direction and the motion in the rotating direction, determining that the sub-motion is a two-degree-of-freedom motion.
For example, as shown in fig. 5A, sub-motions a and c are two-degree-of-freedom motions, and sub-motion b is a single-degree-of-freedom motion.
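The S4031 check can be sketched as follows, assuming each sub-action carries target angles in the pitch and rotation directions (a representation not specified in the text):

```python
def degrees_of_freedom(pitch_angle: float, yaw_angle: float) -> int:
    """Classify a sub-action (step S4031): 1 degree of freedom if it moves
    in only the pitch or only the rotation direction, 2 if it moves in both."""
    moves_pitch = pitch_angle != 0.0
    moves_yaw = yaw_angle != 0.0
    return 2 if (moves_pitch and moves_yaw) else 1
```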
And S4032, if the sub-motion is a single-degree-of-freedom motion, acquiring the motion parameters of the preset part in the pitching direction or the rotating direction.
In one embodiment, acquiring the motion parameter of the preset portion in the pitch direction or the rotation direction includes: determining the motor motion angle in the pitch direction or the rotation direction according to the target motion angle of the sub-action.
S4033, if the sub-motion is a two-degree-of-freedom motion, the motion parameters of the preset part in the pitching direction and the rotating direction are obtained.
In an embodiment, the obtaining of the motion parameters of the predetermined portion in the pitch direction and the rotation direction includes: and determining the motor motion angle in the pitching direction and the motor motion angle in the rotating direction according to the target motion angle of the sub-action.
S404, driving the preset part to execute the movement in the movement direction by controlling a motor based on the movement parameters so as to control the preset part to execute the sub-action.
In one embodiment, the motor 1012 includes a first motor and/or a second motor. Driving the preset part to execute the motion in the motion direction based on the motion parameter by controlling a motor so as to control the preset part to execute the sub-action, wherein the control step comprises the following steps: and driving a preset part of the robot to execute pitching motion by controlling a first motor based on the motion parameter of the pitching direction, and/or controlling a second motor to drive the preset part of the robot to execute rotating motion based on the motion parameter of the rotating direction, so as to control the preset part to execute the sub-action.
In one embodiment, the motion parameters further include a motion time, a desired angular velocity of the motor and an angular acceleration of the motor, and the motion time of each sub-action is the same. The motion time of a sub-action is determined based on the execution time of the action and the number of sub-actions into which the action is decomposed. For example, if the execution time of the low-power prompt action is 1.5 seconds and the action is decomposed into three sub-actions, the motion time of each sub-action is 0.5 seconds. The desired angular velocity is a preset value. The angular acceleration is determined by the desired angular velocity and the motion time of the sub-action, i.e., the angular acceleration α = ω_set / t, where t is the acceleration time or the deceleration time of the motor.
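The timing relations in the preceding paragraph can be sketched as follows; the function name and argument order are illustrative:

```python
def sub_action_timing(total_time_s: float, num_sub_actions: int,
                      omega_set: float, accel_time_s: float):
    """Motion time per sub-action and angular acceleration alpha = omega_set / t,
    where accel_time_s is the motor's acceleration (or deceleration) time."""
    motion_time = total_time_s / num_sub_actions   # e.g. 1.5 s / 3 = 0.5 s
    alpha = omega_set / accel_time_s
    return motion_time, alpha
```

For the low-power prompt example, a 1.5 s action split into three sub-actions yields a 0.5 s motion time per sub-action.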
In an embodiment, the first motor and the second motor have a preset communication period, optionally, the preset communication period is 0.1 second. Referring to fig. 7, controlling the preset portion to execute the sub-action includes:
s4041, sending an acceleration instruction to control the first motor to drive the preset portion to perform an acceleration motion along the pitch direction based on the target motion angle, the motion time, the expected angular velocity, and the angular acceleration in the pitch direction, and/or sending an acceleration instruction to control the second motor to drive the preset portion to perform an acceleration motion along the rotation direction based on the target motion angle, the motion time, the expected angular velocity, and the angular acceleration in the rotation direction.
Specifically, if the sub-motion is a single-degree-of-freedom motion in the pitch direction, an acceleration instruction is sent to control the first motor to drive the preset part to perform accelerated motion in the pitch direction based on the target motion angle, the motion time, the expected angular velocity and the angular acceleration in the pitch direction. And if the sub-motion is a single-degree-of-freedom motion in the rotation direction, sending an acceleration instruction to control the second motor to drive the preset part to perform accelerated motion along the rotation direction based on the target motion angle, the motion time, the expected angular velocity and the angular acceleration in the rotation direction. If the sub-motion is a two-degree-of-freedom motion, sending an acceleration instruction to control the first motor to drive the preset part to perform accelerated motion along the pitching direction based on the target motion angle, the motion time, the expected angular velocity and the angular acceleration in the pitching direction, and sending an acceleration instruction to control the second motor to drive the preset part to perform accelerated motion along the rotating direction based on the target motion angle, the motion time, the expected angular velocity and the angular acceleration in the rotating direction.
In one embodiment, if the sub-motion is a two-degree-of-freedom motion, the desired angular velocity in the pitch direction is

ω_pitchset = ω_yawset · Δθ_pitch / Δθ_yaw,

wherein ω_yawset is the desired angular velocity in the rotation direction, Δθ_pitch is the motion angle in the pitch direction, and Δθ_yaw is the motion angle in the rotation direction. Through this formula, cooperative control of the first motor and the second motor can be realized, so that the first motor and the second motor reach their target positions simultaneously, i.e., both complete the target motion angle of the sub-action at the same time.
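The cooperative-velocity relation can be checked numerically: under it, both motors take the same travel time Δθ/ω to reach their targets. A small sketch:

```python
def pitch_desired_velocity(omega_yawset: float,
                           d_theta_pitch: float,
                           d_theta_yaw: float) -> float:
    """omega_pitchset = omega_yawset * d_theta_pitch / d_theta_yaw, so that
    the first (pitch) and second (yaw) motors arrive simultaneously."""
    return omega_yawset * d_theta_pitch / d_theta_yaw

# Equal travel times: d_theta_pitch / omega_pitchset == d_theta_yaw / omega_yawset
```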
S4042, at every communication period, determining whether the angular velocity of the first motor and/or the second motor has reached the desired angular velocity.
S4043, if the angular velocity of the first motor and/or the second motor reaches the desired angular velocity, sending a deceleration command to control the first motor and/or the second motor to decelerate.
It should be noted that, when the angular velocity of the first motor and/or the second motor reaches the desired angular velocity, and the first motor and/or the second motor does not receive the deceleration command yet, the first motor and/or the second motor continues to move at the desired angular velocity until the deceleration command is received to decelerate.
In an embodiment, if the angular velocity of the first motor and/or the second motor does not reach the desired angular velocity, the process continues to S4041.
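Steps S4041 to S4043 amount to a per-communication-period velocity loop. A simplified simulation of the acceleration phase, assuming the acceleration command is re-issued each period until the desired angular velocity is reached (the deceleration phase of S4043 is not modeled here):

```python
def periods_until_omega_set(omega_set: float, alpha: float, T: float) -> int:
    """Simulate S4041/S4042: each communication period T, the acceleration
    command raises the angular velocity by alpha*T until it reaches
    omega_set, at which point S4043 would send the deceleration command.
    Returns the number of communication periods needed."""
    omega = 0.0
    periods = 0
    while omega < omega_set:
        omega = min(omega + alpha * T, omega_set)  # acceleration command
        periods += 1
    return periods
```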
S405, detecting the progress of the preset part for executing the sub-action, and controlling the preset part to execute the next sub-action based on the progress of the preset part for executing the sub-action until the execution of the to-be-executed bionic action is finished.
In one embodiment, the progress of the sub-action executed by the preset portion is measured by the motion angle of the motor (the first motor and/or the second motor). Referring to fig. 8, in one embodiment, detecting the progress of the preset portion executing the sub-action, and controlling the preset portion to execute the next sub-action based on that progress, includes:
S501, at every communication period, judging whether the predicted motion angle of the motor is greater than or equal to the target motion angle.
S502, if the predicted motion angle is larger than or equal to the target motion angle, the preset part is determined to finish the sub-action, and a reversing instruction is sent to control the motor to drive the preset part to execute the next sub-action. If the predicted motion angle is smaller than the target motion angle, the process continues to S501.
Referring to fig. 9, in the embodiment, obtaining the predicted movement angle of the motor includes:
and S5011, acquiring the current movement angle and the current angular speed of the motor.
S5012, obtaining the predicted step size according to the communication period.
In this embodiment, the prediction step size is the number of communication periods required for the motor to decelerate from the current angular velocity to zero at a preset acceleration. Specifically, the time t1 from the start of deceleration of the first motor or the second motor until the angular velocity reaches zero is calculated as t1 = (0 − ω_set) / a, where ω_set is the desired angular velocity of the motor and a is the (negative) angular acceleration. Based on the time t1, the prediction step size i is determined as the smallest integer greater than or equal to t1/T, where T is the communication period. For example, if t1/T = 2, then i = 2; if t1/T = 2.5, then i = 3.
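The prediction-step computation of step S5012 can be sketched directly, using the worked t1/T examples from the text:

```python
import math

def prediction_step(omega_set: float, alpha: float, T: float) -> int:
    """i = smallest integer >= t1 / T, where t1 = (0 - omega_set) / a is the
    time for the motor to decelerate from the desired angular velocity to zero."""
    t1 = (0.0 - omega_set) / alpha   # alpha is negative while decelerating
    return math.ceil(t1 / T)
```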
Alternatively, in other embodiments, the predicted step size may also be preset according to a communication response characteristic and an acceleration characteristic of the motor.
And S5013, calculating the predicted movement angle of the motor according to the current movement angle and the predicted step length.
In this embodiment, the predicted motion angle of the motor is calculated by the following formula:

θ_prediction = θ_current + Σ_{n=1}^{i} ω_n · T,

wherein

ω_n = ω_{n−1} + α · T, with fabs(ω_n) bounded by ω_set.

In the above formula, θ_current is the current motion angle of the motor, i is the prediction step size, ω_n is the angular velocity of the motor in the n-th communication period after the current prediction time, ω_{n−1} is the angular velocity of the motor in the communication period preceding the n-th communication period, n = 1, 2, 3 … i, and when n = 1, ω_{n−1} = ω_0, the angular velocity fed back by the motor in the last communication period before the current prediction time. T is the communication period, ω_set is the desired angular velocity of the motor, and α is the angular acceleration of the motor, which is negative if the motor is decelerating. fabs(ω_n) is the absolute value of the angular velocity of the motor in the n-th communication period after the current time. The time t required for the motor to decelerate from the current angular velocity to zero at the preset angular acceleration is

t = (0 − ω_0) / α,

and the prediction step size i is the smallest integer greater than or equal to t/T, where α is the angular acceleration of the motor.

For example, assume the prediction step size i = 2 and let ω_0 be the current angular velocity of the motor. Then ω_1 = ω_0 + α·T, ω_2 = ω_1 + α·T, and the predicted motion angle of the motor is θ_prediction = θ_current + (ω_0 + α·T)·T + (ω_1 + α·T)·T.
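The per-communication-period prediction can be sketched from this recurrence. Note that clamping the angular velocity so that its absolute value never exceeds ω_set is an interpretation of the fabs(ω_n) condition, not an explicit statement in the text:

```python
import math

def predicted_angle(theta_current: float, omega0: float, omega_set: float,
                    alpha: float, T: float, i: int) -> float:
    """theta_prediction = theta_current + sum_{n=1..i} omega_n * T with
    omega_n = omega_{n-1} + alpha * T, here clamped so |omega_n| <= omega_set
    (an assumed reading of the fabs(omega_n) condition)."""
    theta, omega = theta_current, omega0
    for _ in range(i):
        omega += alpha * T
        if abs(omega) > omega_set:
            omega = math.copysign(omega_set, omega)
        theta += omega * T
    return theta
```

With i = 2 and no clamping triggered, this reproduces the worked example: θ_prediction = θ_current + (ω_0 + α·T)·T + (ω_1 + α·T)·T.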
Referring to fig. 10, in another embodiment, the detecting the progress of the sub-action executed by the predetermined portion, and the controlling the predetermined portion to execute the next sub-action based on the progress of the sub-action executed by the predetermined portion includes:
S601, at every estimation period, judging whether the preset portion has completed the sub-action.
S602, if the difference value between the predicted motion angle of the motor and the target motion angle is smaller than a threshold value or the predicted motion angle is larger than or equal to the target motion angle, determining that the preset part completes the sub-action, and sending a reversing instruction to control the motor to drive the preset part to execute the next sub-action. If the difference between the predicted motion angle and the target motion angle is greater than or equal to the threshold or the predicted motion angle is smaller than the target motion angle, the process continues to S601.
Referring to fig. 11, in the another embodiment, the obtaining the predicted movement angle of the motor includes:
s6011, acquiring a current motion angle and a current angular velocity of the motor.
S6012, obtaining the prediction step length according to the estimation period.
In this other embodiment, the prediction step size is the number of estimation periods required for the motor to decelerate from the current angular velocity to zero at a preset acceleration.
Alternatively, in other embodiments, the predicted step size may also be preset according to a communication response characteristic and an acceleration characteristic of the motor.
S6013, the predicted movement angle of the motor is calculated according to the current movement angle and the predicted step length.
In this other embodiment, the predicted motion angle of the motor is calculated by the following formula:

θ_i = θ_{i−1} + ω_i · T_1,

wherein

ω_i = ω_{i−1} + α · T_1, with fabs(ω_i) bounded by ω_set.

In the above formula, θ_i is the predicted motion angle and θ_{i−1} is the predicted motion angle of the previous estimation period. It should be noted that the motor feeds back its current motion angle (i.e., the real position of the preset portion) once every communication period (e.g., 40 ms); the 40 ms between two consecutive feedbacks is covered by 40 estimation periods, and when the motion angle of the next communication period is received, the predicted position is corrected. θ_0 is the motion angle most recently fed back by the motor in real time. ω_i is the angular velocity of the motor in the current estimation period; when i = 1, ω_{i−1} = ω_0, the angular velocity fed back in the estimation period preceding the current estimation time. α is the angular acceleration of the motor, T_1 is the estimation period, ω_set is the desired angular velocity of the motor, and fabs(ω_i) is the absolute value of the angular velocity of the motor in the current estimation period. If the motor is decelerating, the angular acceleration is negative. i is the prediction step size, the smallest integer greater than or equal to t_1/T_1, where t_1, the time required for the motor to decelerate from the current angular velocity to zero at the preset angular acceleration, is

t_1 = (0 − ω_0) / α.
In this other embodiment, the position and velocity of the motor are estimated between receipt of the current motor feedback and the next motor feedback. Optionally, if the program running frequency, i.e., the position estimation frequency of the motor, is set to 1000 Hz, the estimation period is T_1 = 1 ms.
In this other embodiment, when the difference between the predicted motion angle and the desired motion angle is smaller than the threshold, the current sub-action is determined to be complete and control can switch to the next sub-action: the position and velocity of the next sub-action are issued directly to the corresponding motor, and the position fed back by the motor is updated in real time and assigned as the starting position of the current prediction algorithm.
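This estimation-period variant interleaves fast local prediction with slower real feedback. A sketch of the prediction between two feedbacks; the clamp via ω_set is again an assumed reading, and re-seeding from the next real feedback is indicated but not implemented:

```python
import math

def estimate_until_next_feedback(theta0: float, omega0: float, omega_set: float,
                                 alpha: float, T1: float, steps: int) -> float:
    """Run theta_i = theta_{i-1} + omega_i * T1 for `steps` estimation periods
    (e.g. 40 x 1 ms between 40 ms motor feedbacks). The next real feedback
    would then re-seed theta0 and omega0, correcting any drift."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        omega += alpha * T1
        if abs(omega) > omega_set:
            omega = math.copysign(omega_set, omega)
        theta += omega * T1
    return theta
```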
In the above embodiments, the motor position is estimated during the motion of the motor, and the reversing instruction for switching sub-actions is sent in advance, when the estimated motor motion angle reaches the target motion angle, rather than when the actual motion angle of the motor reaches the target motion angle. This alleviates the motion stagnation caused by motor communication delay and slow response, and significantly improves the bionic effect of the robot's actions.
Fig. 12 is a schematic structural diagram of an electronic device according to a preferred embodiment of the present application.
The electronic device 1 comprises, but is not limited to, a processor 10, a memory 20, a computer program 30 stored in the memory 20 and executable on the processor 10. The computer program 30 is, for example, a route planning program. The processor 10 implements the steps of the robot motion bionic planning method when executing the computer program 30, for example, steps S401 to S405 shown in fig. 4, steps S4031 to S4033 shown in fig. 6, steps S4041 to S4043 shown in fig. 7, steps S501 to S502 shown in fig. 8, steps S5011 to S5013 shown in fig. 9, steps S601 to S602 shown in fig. 10, and steps S6011 to S6013 shown in fig. 11.
If the robot is the electronic device 1, the processor 10 is the control module 110, and the memory 20 is the storage unit 105.
Illustratively, the computer program 30 may be partitioned into one or more modules/units that are stored in the memory 20 and executed by the processor 10 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 30 in the electronic device 1.
It will be appreciated by a person skilled in the art that the schematic diagram is only an example of the electronic device 1 and does not constitute a limitation of the electronic device 1; it may comprise more or fewer components than shown, some components may be combined, or different components may be used. For example, the electronic device 1 may further comprise an input/output device, a network access device, a bus, etc.
The Processor 10 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or the processor 10 may be any conventional processor. The processor 10 is the control center of the electronic device 1, with various interfaces and lines connecting the parts of the whole electronic device 1.
The memory 20 may be used to store the computer program 30 and/or the modules/units, and the processor 10 implements various functions of the electronic device 1 by running or executing the computer program and/or the modules/units stored in the memory 20 and calling data stored in the memory 20. The memory 20 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the electronic apparatus 1, and the like. In addition, the memory 20 may include volatile and non-volatile memory such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other storage devices.
The integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments described above are realized. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
The specific embodiment of the computer-readable storage medium according to the present application has substantially the same expansion content as the embodiments of the robot motion bionic planning method, and is not described herein again.
The robot action bionic planning method, the electronic device and the storage medium can decompose the complex bionic action to be executed by the robot into simple sub-actions, and the motor is controlled to drive the robot part to execute each sub-action one by one in the motion direction, so that the robot is controlled to execute the bionic action accurately and efficiently, the robot can interact with a user through the bionic action conveniently, and the user experience is improved.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. Several units or means recited in the apparatus claims may also be embodied by one and the same item or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Although the present application has been described in detail with reference to preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the spirit and scope of the present application.

Claims (10)

1. A method for biomimetic planning of a robot action, the method comprising:
the method comprises the steps of decomposing a motion to be executed by a robot into at least two sub-motions, wherein each sub-motion has a single motion direction;
acquiring motion parameters of a preset part of the robot in the motion direction;
driving the preset part to perform the movement in the movement direction based on the movement parameters through a control motor so as to control the preset part to perform the sub-action;
and detecting the process of the preset part for executing the sub-action, and controlling the preset part to execute the next sub-action based on the process of the preset part for executing the sub-action until the action to be executed is executed.
2. The method for biomimetic planning of robot actions according to claim 1, wherein if the motion direction of the sub-action includes a pitch direction and a rotation direction, the desired angular velocity in the pitch direction is

ω_pitchset = ω_yawset · Δθ_pitch / Δθ_yaw,

wherein ω_yawset is the desired angular velocity in the rotation direction, Δθ_pitch is the motion angle in the pitch direction, and Δθ_yaw is the motion angle in the rotation direction.
3. The method for biomimetic planning of robot actions according to claim 1, wherein the detecting a progress of the sub-action performed by the predetermined portion and controlling the predetermined portion to perform a next sub-action based on the progress of the sub-action performed by the predetermined portion comprises:
at every communication period, judging whether the predicted motion angle of the motor is greater than or equal to the target motion angle;
and if the predicted motion angle is larger than or equal to the target motion angle, determining that the preset part completes the sub-action, and sending a reversing instruction to control the motor to drive the preset part to execute the next sub-action.
4. The method of biomimetic planning of robotic actions of claim 3, wherein obtaining a predicted angle of motion of the motor comprises:
acquiring a current motion angle and a current angular speed of the motor;
acquiring a prediction step length according to the communication period, wherein the prediction step length is the number of the communication periods when the motor decelerates from the current angular speed to zero at a preset acceleration;
and calculating the predicted movement angle of the motor according to the current movement angle and the predicted step length.
5. The method for biomimetic planning of robot actions according to claim 4, wherein the predicted motion angle of the motor is calculated according to the following formula:

θ_prediction = θ_current + Σ_{n=1}^{i} ω_n · T,

ω_n = ω_{n−1} + α · T, with fabs(ω_n) bounded by ω_set,

wherein θ_current is the current motion angle of the motor, i is the prediction step size, ω_n is the angular velocity of the motor in the n-th communication period after the current prediction time, ω_{n−1} is the angular velocity of the motor in the communication period preceding the n-th communication period, n = 1, 2, 3 … i, and when n = 1, ω_{n−1} = ω_0, the angular velocity fed back by the motor in the last communication period before the current prediction time; T is the communication period, ω_set is the desired angular velocity of the motor, α is the angular acceleration of the motor, and fabs(ω_n) is the absolute value of the angular velocity of the motor in the n-th communication period after the current time; the time t required for the motor to decelerate from the current angular velocity to zero at the preset angular acceleration is

t = (0 − ω_0) / α,

and the prediction step size i is the smallest integer greater than or equal to t/T.
6. The method for biomimetic planning of robot actions according to claim 1, wherein the detecting a progress of the sub-action performed by the predetermined portion and controlling the predetermined portion to perform a next sub-action based on the progress of the sub-action performed by the predetermined portion comprises:
at every estimation period, judging whether the preset portion has completed the sub-action;
and if the difference value between the predicted motion angle of the motor and the target motion angle is smaller than a threshold value or the predicted motion angle is larger than or equal to the target motion angle, determining that the preset part completes the sub-action, and sending a reversing instruction to control the motor to drive the preset part to execute the next sub-action.
7. The method of biomimetic planning of robotic actions of claim 6, wherein obtaining a predicted angle of motion of the motor comprises:
acquiring the current movement angle and the current angular speed of the motor,
acquiring a prediction step size according to the estimation period, wherein the prediction step size is the number of estimation periods required for the motor to decelerate from the current angular velocity to zero at a preset acceleration;
and calculating the predicted movement angle of the motor according to the current movement angle and the predicted step length.
8. The method for biomimetic planning of robot actions according to claim 7, wherein the predicted motion angle of the motor is calculated according to the following formula:

θ_i = θ_{i−1} + ω_i · T_1,

ω_i = ω_{i−1} + α · T_1, with fabs(ω_i) bounded by ω_set,

wherein θ_i is the predicted motion angle, θ_{i−1} is the predicted motion angle of the previous estimation period, and ω_i is the angular velocity of the motor in the current estimation period; when i = 1, ω_{i−1} = ω_0, the angular velocity fed back in the estimation period preceding the current estimation time; T_1 is the estimation period, ω_set is the desired angular velocity of the motor, and fabs(ω_i) is the absolute value of the angular velocity of the motor in the current estimation period; the time t_1 required for the motor to decelerate from the current angular velocity to zero at the preset angular acceleration is

t_1 = (0 − ω_0) / α,

and the prediction step size i is the smallest integer greater than or equal to t_1/T_1, where α is the angular acceleration of the motor.
9. An electronic device, characterized in that the electronic device comprises:
a processor; and
a memory having stored therein a plurality of program modules that are loaded by the processor and execute the method of biomimetic planning of robot actions according to any of claims 1-8.
10. A computer-readable storage medium having stored thereon at least one computer instruction, wherein the instruction is loaded by a processor and performs a method for biomimetic planning of robot actions according to any of claims 1-8.
CN202210210958.8A 2022-03-04 2022-03-04 Robot motion bionic planning method, electronic equipment and storage medium Pending CN114578823A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210210958.8A CN114578823A (en) 2022-03-04 2022-03-04 Robot motion bionic planning method, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210210958.8A CN114578823A (en) 2022-03-04 2022-03-04 Robot motion bionic planning method, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114578823A true CN114578823A (en) 2022-06-03

Family

ID=81779242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210210958.8A Pending CN114578823A (en) 2022-03-04 2022-03-04 Robot motion bionic planning method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114578823A (en)

Similar Documents

Publication Publication Date Title
CN107589752B (en) Method and system for realizing cooperative formation of unmanned aerial vehicle and ground robot
US11613249B2 (en) Automatic navigation using deep reinforcement learning
US9292015B2 (en) Universal construction robotics interface
CN114800535B (en) Robot control method, mechanical arm control method, robot and control terminal
TW202102959A (en) Systems, and methods for merging disjointed map and route data with respect to a single origin for autonomous robots
CN117500642A (en) System, apparatus and method for exploiting robot autonomy
CN114740835A (en) Path planning method, path planning device, robot, and storage medium
US10035264B1 (en) Real time robot implementation of state machine
EP3944049B1 (en) Mobile communication terminal device operation of robot terminal
CN117073662A (en) Map construction method, device, robot and storage medium
JPWO2019171491A1 (en) Mobile control device, mobile, mobile control system, mobile control method and mobile control program
CN114578823A (en) Robot motion bionic planning method, electronic equipment and storage medium
CN114330755B (en) Data set generation method and device, robot and storage medium
CN116700299A (en) AUV cluster control system and method based on digital twin
US11947350B2 (en) Devices, systems, and methods for operating intelligent vehicles using separate devices
CN114454176B (en) Robot control method, control device, robot, and storage medium
US20190111563A1 (en) Custom Motion Trajectories for Robot Animation
CN114571460A (en) Robot control method, device and storage medium
CN114137992A (en) Method and related device for reducing shaking of foot type robot
Surmann et al. Teleoperated visual inspection and surveillance with unmanned ground and aerial vehicles
CN115709471B (en) Robot state value adjusting method and robot
US20240139950A1 (en) Constraint condition learning device, constraint condition learning method, and storage medium
CN116974288B (en) Robot control method and robot
CN115446844B (en) Robot control method, robot and control terminal
US20230384788A1 (en) Information processing device, information processing system, information processing method, and recording medium storing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination