CN113448338B - Robot control method, robot, computer program product, and storage medium - Google Patents


Info

Publication number
CN113448338B
CN113448338B (application CN202110575994.XA)
Authority
CN
China
Prior art keywords
virtual
motion
robot
underwater robot
formation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110575994.XA
Other languages
Chinese (zh)
Other versions
CN113448338A
Inventor
叶心宇
朱华
张巍
李胜全
张爱东
梅涛
陆海博
何哲
李脊森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peng Cheng Laboratory
Original Assignee
Peng Cheng Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peng Cheng Laboratory
Priority to CN202110575994.XA
Publication of CN113448338A
Application granted
Publication of CN113448338B

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/04 — Control of altitude or depth
    • G05D1/06 — Rate of change of altitude or depth
    • G05D1/0692 — Rate of change of altitude or depth specially adapted for under-water vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application discloses a robot control method, a robot, a computer program product, and a storage medium. The robot control method comprises the following steps: acquiring the motion environment parameters of each underwater robot, the number of robots in the formation in which each underwater robot is located, and the motion information of a non-cooperative target, wherein the motion environment parameters comprise the radius of the motion track; determining the motion trail of each underwater robot according to the motion environment parameters, the number of robots in the formation, the motion information of the non-cooperative target, and a preset path prediction model; and controlling each underwater robot to move according to its motion trail. This solves the prior-art problem that the underwater robot formation cannot be adaptively transformed when a non-cooperative target evades tracking by exploiting the external environment, and improves the adaptive capacity of the underwater robot formation tracking process.

Description

Robot control method, robot, computer program product, and storage medium
Technical Field
The present application relates to the field of robot control technology, and in particular, to a robot control method, a robot, a computer program product, and a storage medium.
Background
In recent years, with growing awareness of national territorial sovereignty and of ocean monitoring and protection, the need to locate and track underwater non-cooperative targets has increased. Using multiple underwater robots carrying sensing equipment such as sonar to observe and track non-cooperative targets from all directions is of great significance for safeguarding national maritime rights and interests, observing the ocean environment, and similar purposes. With the development of sensor technology, underwater communication technology, and modern manufacturing technology, underwater robots are becoming smaller, more intelligent, and increasingly deployed in clusters; the maneuverability and sensing capability of miniaturized, low-cost Autonomous Underwater Vehicles (AUVs) are continuously improving, and locating and tracking underwater targets with a formation of multiple AUVs has become practical.
However, existing formation tracking schemes have a technical problem: during tracking, when a non-cooperative target evades tracking by exploiting the external environment, the underwater robot formation cannot be adaptively transformed.
Disclosure of Invention
The main purpose of the application is to provide a robot control method, a robot, a computer program product, and a storage medium, aiming to solve the prior-art problem that the underwater robot formation cannot be adaptively transformed when a non-cooperative target evades tracking by exploiting the external environment.
To achieve the above object, the present application provides a robot control method, which in one embodiment comprises the following steps:
acquiring the motion environment parameters of each underwater robot, the number of robots in the formation in which each underwater robot is located, and the motion information of a non-cooperative target, wherein the motion environment parameters comprise the radius of the motion track;
determining the motion trail of the underwater robot according to the motion environment parameters, the number of robots in formation, the motion information of the non-cooperative targets and a preset path prediction model;
and controlling the underwater robot to move according to the motion trail.
In an embodiment, before the step of acquiring the motion environment parameters of each underwater robot, the number of robots in the formation in which each underwater robot is located, and the motion information of the non-cooperative target, the method further includes:
acquiring preset path parameters of each virtual pilot and the motion information of the non-cooperative target, wherein the path parameters comprise the radius of each virtual pilot's motion track, the identification information of the virtual pilot, and the number of robots in the formation in which the virtual pilot is located, the virtual pilots corresponding to the underwater robots;
obtaining the position of each virtual pilot in the target coordinate system according to each path parameter;
determining the current position of each virtual pilot according to the position of each virtual pilot in the target coordinate system and the motion information of the non-cooperative target, wherein the motion information of the non-cooperative target comprises: the motion speed and angular velocity of the non-cooperative target;
and generating a preset path prediction model according to the current position of each virtual pilot.
In an embodiment, the step of obtaining the position of each virtual pilot in the target coordinate system according to each path parameter includes:
obtaining the position of each virtual pilot in the target coordinate system according to a first calculation formula, wherein the first calculation formula is as follows:
where i is the identification information of the i-th virtual pilot, R_d is the radius of the i-th virtual pilot's motion trajectory, γ_i is the guidance variable, φ_i is the formation phase of the formation in which the i-th virtual pilot is located, with φ_i = (i-1)·2·π/N_Z, and N_Z is the number of robots in the formation in which the i-th virtual pilot is located.
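The first calculation formula appears as an image in the original publication and is not reproduced in this text. The sketch below therefore assumes the standard circular-formation parameterization consistent with the definitions above — pilot i sits on a circle of radius R_d at phase γ_i + φ_i — which may differ in detail from the patent's actual formula; the function name is an assumption.

```python
import math

def virtual_pilot_position(i, gamma_i, R_d, N_Z):
    """Position of the i-th virtual pilot (1-indexed) in the target frame.

    Illustrative assumption: evenly spaced pilots on a circle of radius R_d,
    with formation phase phi_i = (i-1)*2*pi/N_Z as defined in the text.
    """
    phi_i = (i - 1) * 2.0 * math.pi / N_Z   # formation phase
    return (R_d * math.cos(gamma_i + phi_i),
            R_d * math.sin(gamma_i + phi_i))

# Three pilots evenly spaced on a circle of radius 10 m:
positions = [virtual_pilot_position(i, 0.0, 10.0, 3) for i in (1, 2, 3)]
```

Advancing the common guidance variable γ_i rotates the whole formation around the target while the phase offsets keep the pilots evenly spaced.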
In an embodiment, the step of determining the current position of each virtual pilot according to the position of each virtual pilot in the target coordinate system and the motion information of the non-cooperative target includes:
determining the current position of each virtual pilot according to a second calculation formula, wherein the second calculation formula is as follows:
where p_d,i(γ_i, t) is the expected path of the virtual pilot of the i-th underwater robot in the earth coordinate system, p_t is the current position of the non-cooperative target, R_t is the coordinate transformation matrix between the earth coordinate system and the hull coordinate system of the non-cooperative target, S(ω_t) is a second-order antisymmetric matrix, ω_t is the angular velocity of the non-cooperative target, the superscript notation denotes differentiation with respect to γ_i, and v_t is the motion speed of the non-cooperative target.
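The second calculation formula is likewise not reproduced in this text. A common construction consistent with the surrounding definitions places the i-th virtual pilot's expected path point at the target position plus the formation offset rotated into the earth frame, p_d,i = p_t + R_t·p_i; the planar sketch below (function names and the 2-D simplification are assumptions) illustrates that construction together with the second-order antisymmetric matrix S(ω_t) mentioned above.

```python
import math

def rotation_2d(psi):
    """Coordinate transformation (rotation) matrix R_t for heading psi."""
    c, s = math.cos(psi), math.sin(psi)
    return [[c, -s], [s, c]]

def skew_2d(omega):
    """Second-order antisymmetric matrix S(omega)."""
    return [[0.0, -omega], [omega, 0.0]]

def desired_path_point(p_t, psi_t, p_i):
    """Expected path point p_d,i = p_t + R_t(psi_t) @ p_i in the earth frame.

    A sketch under the assumption that the (unreproduced) second formula
    composes the target position with the rotated formation offset.
    """
    R = rotation_2d(psi_t)
    return (p_t[0] + R[0][0] * p_i[0] + R[0][1] * p_i[1],
            p_t[1] + R[1][0] * p_i[0] + R[1][1] * p_i[1])
```

In the corresponding time derivative, S(ω_t) accounts for the rotation of the target's hull frame and v_t for its translation, which is how the target's speed and angular velocity enter the pilots' desired paths.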
In an embodiment, after determining the current position of each virtual pilot, the method further includes:
constructing a communication topological relation among the virtual pilots;
generating a formation coordination control signal for each virtual pilot based on the communication topological relation;
and controlling each virtual pilot to move according to the formation coordination control signal.
In an embodiment, the generating of the formation coordination control signal for each virtual pilot based on the communication topological relation includes:
acquiring and storing an adjacency matrix A = [a_ij] of the connection relations of the edges, where a_ij is the connection relation between two adjacent virtual pilots;
obtaining a degree matrix based on the adjacency matrix, where the degree matrix is D = diag(|N_1|, |N_2|, ..., |N_N|) and |N_i| is the number of robots in the formation in which the i-th virtual pilot is located;
and generating the formation coordination control signal of each virtual pilot according to the degree matrix combined with a gain matrix K_c = diag(k_c,1, k_c,2, ..., k_c,N).
In an embodiment, the controlling the underwater robot to move according to the motion trail includes:
acquiring the path tracking error between the underwater robot and the virtual pilot;
processing the tracking error with a first-order sliding-mode control method to obtain the control law of the underwater robot;
and controlling the underwater robot to move along the motion trail according to the control law.
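The first-order sliding-mode step can be illustrated by a minimal scalar sketch; the sliding surface, gains, and saturated switching term below are illustrative assumptions, not the patent's actual control law.

```python
def sliding_mode_control(error, error_rate, lam=1.0, k=2.0, eps=0.05):
    """First-order sliding-mode control law for a scalar tracking error.

    Sliding surface s = de + lam*e; reaching law u = -k * sat(s/eps).
    The saturation replaces the discontinuous sign() term to attenuate
    chattering inside a boundary layer of width eps (all gains illustrative).
    """
    s = error_rate + lam * error            # sliding surface
    sat = max(-1.0, min(1.0, s / eps))      # saturated switching term
    return -k * sat
```

On the surface s = 0 the error decays as e(t) = e(0)·exp(-lam·t), so driving s to zero drives the path tracking error between robot and virtual pilot to zero.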
To achieve the above object, the present application also provides a computer program product comprising a robot control program which, when executed by a processor, implements the steps of the robot control method described above.
To achieve the above object, the present application also provides an underwater robot including a memory, a processor, and a robot control program stored in the memory and executable on the processor, which, when executed by the processor, implements the steps of the robot control method described above.
To achieve the above object, the present application also provides a storage medium storing a robot control program which, when executed by a processor, implements the steps of the robot control method described above.
The robot control method, the underwater robot, the computer program product, and the storage medium provided by the application have at least the following technical effects:
By acquiring the motion environment parameters of each underwater robot, the number of robots in the formation in which each underwater robot is located, and the motion information of the non-cooperative target, where the motion environment parameters comprise the radius of the motion track; determining the motion trail of each underwater robot according to the motion environment parameters, the number of robots in the formation, the motion information of the non-cooperative target, and the preset path prediction model; and controlling each underwater robot to move according to its motion trail, the application solves the prior-art problem that the underwater robot formation cannot be adaptively transformed when a non-cooperative target evades tracking by exploiting the external environment, and improves the adaptive capacity of the underwater robot formation tracking process.
Drawings
FIG. 1 is a schematic view of an underwater robot architecture according to an embodiment of the present application;
FIG. 2 is a flow chart of a first embodiment of the robot control method of the present application;
FIG. 3 is a flow chart of a second embodiment of the robot control method of the present application;
FIG. 4 is a flow chart of a third embodiment of the robot control method of the present application;
FIG. 5 is a detailed flowchart of step S320 in a fourth embodiment of the robot control method of the present application;
FIG. 6 is a schematic diagram of the refined flow of step S130 in a fifth embodiment of the robot control method of the present application;
the achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The application aims to solve the prior-art problem that the underwater robot formation cannot be adaptively transformed when a non-cooperative target evades tracking by exploiting the external environment. To this end, the method acquires the motion environment parameters of each underwater robot, the number of robots in the formation in which each underwater robot is located, and the motion information of the non-cooperative target, where the motion environment parameters comprise the radius of the motion track; determines the motion trail of each underwater robot according to the motion environment parameters, the number of robots in the formation, the motion information of the non-cooperative target, and a preset path prediction model; and controls each underwater robot to move according to its motion trail, thereby improving the adaptive capacity of the underwater robot formation tracking process.
In order to better understand the above technical solution, exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a hardware running environment according to an embodiment of the present application.
It should be noted that fig. 1 may be a schematic architecture diagram of a hardware running environment of the underwater robot.
As shown in fig. 1, the underwater robot may include: a processor 1001, such as a CPU, memory 1005, user interface 1003, network interface 1004, communication bus 1002. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface such as a WI-FI interface, a radio interface. The memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
Optionally, the underwater robot may further include a camera, sensors, a wireless transmission module, etc., where the sensors include motion sensors or other sensors and the wireless transmission module is mainly used for data communication; of course, the underwater robot may also be configured with other sensors such as a gyroscope and an infrared sensor, which are not described here.
It will be appreciated by those skilled in the art that the configuration of the underwater robot shown in fig. 1 is not limiting of the underwater robot and that the underwater robot may include more or fewer components than shown, or may combine certain components, or may have a different arrangement of components.
As shown in fig. 1, the memory 1005, as one type of storage medium, may include an operating system, a network communication module, a user interface module, and a robot control program. The operating system is a program that manages and controls the underwater robot's hardware and software resources and supports the operation of the robot control program and other software or programs.
In the underwater robot shown in fig. 1, the user interface 1003 is mainly used for connecting a terminal, and data communication is performed with the terminal; the network interface 1004 is mainly used for data transmission; the processor 1001 may be used to invoke a robot control program stored in the memory 1005.
In this embodiment, the underwater robot includes: a memory 1005, a processor 1001, and a robot control program stored on the memory and executable on the processor, wherein:
in this embodiment, the processor 1001 may be configured to call a robot control program stored in the memory 1005, and perform the following operations:
acquiring the motion environment parameters of each underwater robot, the number of robots in the formation in which each underwater robot is located, and the motion information of a non-cooperative target, wherein the motion environment parameters comprise the radius of the motion track;
determining the motion trail of the underwater robot according to the motion environment parameters, the number of robots in formation, the motion information of the non-cooperative targets and a preset path prediction model;
and controlling the underwater robot to move according to the motion trail.
In this embodiment, the processor 1001 may be configured to call a robot control program stored in the memory 1005, and perform the following operations:
acquiring preset path parameters of each virtual pilot and the motion information of the non-cooperative target, wherein the path parameters comprise the radius of each virtual pilot's motion track, the identification information of the virtual pilot, and the number of robots in the formation in which the virtual pilot is located, the virtual pilots corresponding to the underwater robots;
obtaining the position of each virtual pilot in the target coordinate system according to each path parameter;
determining the current position of each virtual pilot according to the position of each virtual pilot in the target coordinate system and the motion information of the non-cooperative target, wherein the motion information of the non-cooperative target comprises: the motion speed and angular velocity of the non-cooperative target;
and generating the preset path prediction model according to the current position of each virtual pilot.
In this embodiment, the processor 1001 may be configured to call a robot control program stored in the memory 1005, and perform the following operations:
constructing a communication topological relation among the virtual pilots;
generating a formation coordination control signal for each virtual pilot based on the communication topological relation;
and controlling each virtual pilot to move according to the formation coordination control signal.
In this embodiment, the processor 1001 may be configured to call a robot control program stored in the memory 1005, and perform the following operations:
adjacent matrix A= [ a ] for acquiring and storing connection relation of edges ij ]Wherein a is ij The connection relation between two adjacent virtual pilots is adopted;
a metric matrix is obtained based on the adjacency matrix, wherein the metric matrix is D=diag (|N) 1 |,|N 2 |,...,|N N I), wherein,the number of robots in formation for the ith virtual pilot;
according to the measurement matrix and combining a gain matrix K c =diag(k c,1 ,k c,2 ,...,k c,N ) A formation coordination control signal of the virtual pilot is generated.
In this embodiment, the processor 1001 may be configured to call a robot control program stored in the memory 1005, and perform the following operations:
acquiring the path tracking error between the underwater robot and the virtual pilot;
processing the tracking error with a first-order sliding-mode control method to obtain the control law of the underwater robot;
and controlling the underwater robot to move along the motion trail according to the control law.
The underwater robot provided by the embodiment of the application is the underwater robot used to implement the method of the embodiment of the application. Based on the method introduced in the embodiments, a person skilled in the art can understand the specific structure and variations of this underwater robot, so details are not repeated here. All underwater robots that adopt the method of the embodiments of the application fall within the claimed scope of protection. The above embodiment numbers are for description only and do not indicate the relative merits of the embodiments.
For a software implementation, the techniques described in embodiments of the present application may be implemented by modules (e.g., procedures, functions, and so on) that perform the functions described in embodiments of the present application. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Based on the above-described structure, an embodiment of the present application is presented.
Referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of the robot control according to the present application, which includes the following steps:
step S110, obtaining the motion environment parameters of each underwater robot, the number of robots in which each underwater robot is located and the motion information of the non-cooperative targets.
In this embodiment, the motion environment parameters include the radius of the motion track on which the underwater robot is located. When this radius decreases, the underwater robot formation reduces its coverage area and passes through a narrow section of the track; when the track widens, the formation enlarges its coverage area to maintain a better tracking effect. Under actual underwater conditions, the motion environment parameters can be acquired by ranging sensors, such as laser or sonar, mounted on the underwater robot, the track radius being measured from the time between sending a signal and receiving its echo; in a simulation, the motion environment parameters can be set according to the actual situation. When the motion environment changes, the number of robots in the formation must be changed and obtained dynamically, and each robot has corresponding identification information. The non-cooperative target is the object the underwater robots are to track; it is an underwater target to be detected, such as an unidentified submersible, a disabled submersible, marine animals, or plumes caused by oil leakage or submarine volcanic eruptions. The motion parameters of the non-cooperative target include its velocity or angular velocity.
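The time-of-flight ranging described above can be sketched as follows; the function name and the nominal sound speed in seawater are assumptions, not values given in the patent.

```python
def ranging_distance(t_round_trip, c=1500.0):
    """Estimate range from a sonar echo's round-trip time.

    c is an assumed nominal speed of sound in seawater (m/s); the one-way
    distance is half the round-trip path c * t.
    """
    return c * t_round_trip / 2.0

# An echo returning after 0.2 s corresponds to roughly 150 m.
```

Ranging measurements like this, taken on either side of the formation, are what let the robots detect that the track radius has narrowed or widened.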
And step S120, determining the motion trail of the underwater robot according to the motion environment parameters, the number of robots in formation, the motion information of the non-cooperative targets and a preset path prediction model.
In this embodiment, the number of robots in the formation may change in real time with the motion environment parameters. According to the motion environment parameters and the number of robots in the formation in which the underwater robots are located, a top-level decision-making planner generates the corresponding path parameters, and the path parameters together with the motion angle, speed, and position of the non-cooperative target are input into the preset path prediction model to obtain the matching motion track of each underwater robot.
And step S130, controlling the underwater robot to move according to the motion trail.
In this embodiment, the virtual pilots correspond one-to-one to the underwater robots, that is, there is a mapping relationship between them. Once the motion parameters of the virtual pilots, the number of robots in the formation, and the motion information of the non-cooperative target are known, these parameters are input into the path prediction model; the motion track of each underwater robot, which changes in real time, is obtained from the mapping relationship, and the bottom-layer motion controller controls each underwater robot to move accordingly.
Referring to fig. 3, fig. 3 is a flow chart of a second embodiment of the robot control according to the present application, in this embodiment, before step S110 in the first embodiment, the method includes:
step S210, obtaining preset path parameters of each virtual navigator and motion information of a non-cooperative target.
In this embodiment, the path parameters are generated by the top-level decision planner according to the motion environment parameters and the task requirements, where the path parameters include a radius of a motion track of each virtual pilot, identification information of the virtual pilot, and the number of robots in which the virtual pilot is located, the identification information of the virtual pilot is a number of the virtual pilot, and the virtual pilot corresponds to the underwater robot.
Step S220, obtaining the position of each virtual pilot in the target coordinate system according to each path parameter.
In this embodiment, the position of each virtual pilot in the target coordinate system is obtained according to the first calculation formula:
where i is the identification information of the i-th virtual pilot, R_d is the radius of the i-th virtual pilot's motion trajectory, γ_i is the guidance variable, φ_i is the formation phase of the formation in which the i-th virtual pilot is located, with φ_i = (i-1)·2·π/N_Z, and N_Z is the number of robots in the formation in which the i-th virtual pilot is located.
Step S230, determining the current position of each virtual pilot according to the position of each virtual pilot in the target coordinate system and the motion information of the non-cooperative target.
In this embodiment, the motion information of the non-cooperative target includes the motion speed and angular velocity of the non-cooperative target, and the current position of each virtual pilot is determined according to the second calculation formula:
where p_d,i(γ_i, t) is the expected path of the virtual pilot of the i-th underwater robot in the earth coordinate system, p_t is the current position of the non-cooperative target, R_t is the coordinate transformation matrix between the earth coordinate system and the hull coordinate system of the non-cooperative target, S(ω_t) is a second-order antisymmetric matrix, ω_t is the angular velocity of the non-cooperative target, the superscript notation denotes differentiation with respect to γ_i, and v_t is the motion speed of the non-cooperative target.
Step S240, generating a preset path prediction model according to the current position of each virtual pilot.
In this embodiment, the path prediction model generates the current position of each virtual pilot, which is determined by two parts: one part is the path parameters generated by the top-level decision-making planner according to the motion environment parameters and the task requirements, from which a nominal position of the virtual pilot is generated; the other part combines the motion information of the current non-cooperative target. The current position of the virtual pilot is finally generated from these two parts of data.
Because the preset path parameters of each virtual pilot and the motion information of the non-cooperative target are acquired, the position of each virtual pilot in the target coordinate system is obtained from the path parameters, the current position of each virtual pilot is determined from that position and the motion information of the non-cooperative target, and the preset path prediction model is generated from the current positions, the path generator parameters can be changed in real time: the coverage area of the multi-robot formation can be changed quickly according to the task requirements, the formation parameters can be changed in real time to pass through a narrow channel or to reduce the distance to the target, and the number of robots in the formation can be flexibly increased or decreased according to the task requirements, thereby achieving adaptive dynamic change of the pilot formation in cooperative tracking tasks with multiple underwater robots.
Referring to fig. 4, fig. 4 is a flowchart illustrating steps of a third embodiment of the robot control according to the present application, in this embodiment, step S230 in the second embodiment includes:
step S310, constructing a communication topological relation among each virtual navigator.
In this embodiment, the communication topology relationship is used to establish a communication relationship between virtual pilots, each of the virtual pilots is represented by a node, communication connection between the nodes is represented by an edge formed by node connection, and the communication relationship of the virtual pilots is represented by an undirected algebraic topology map, which includes connection relationships of nodes, edges, and edges.
Step S320, generating a formation coordination control signal for each virtual pilot based on the communication topological relation.
In this embodiment, the formation coordination control signal is used to control the virtual pilots to move in an orderly manner and avoid collisions. The connection relation of each virtual pilot's edges in the undirected algebraic topology graph is computed, the adjacency matrix is determined from these connection relations, the degree matrix is computed from the adjacency matrix, and the formation coordination control signal of each virtual pilot is generated from the degree matrix and the gain matrix.
And step S330, controlling each virtual pilot to move according to the formation coordination control signal.
In this embodiment, during the formation process, each virtual pilot moves in an orderly way according to the formation coordination control signal generated for it during the motion. This avoids collisions between adjacent virtual pilots and realizes the adaptive dynamic change of the formation in the cooperative tracking task.
In summary, a communication topology relationship among the virtual pilots is constructed; the connection relationship of each virtual pilot's edges in the undirected algebraic topology graph is computed to obtain the adjacency matrix; a metric matrix is obtained from the adjacency matrix; a formation coordination control signal of each virtual pilot is generated from the metric matrix and the gain matrix; and each virtual pilot is controlled to move in an orderly way according to its formation coordination control signal, so that collisions are avoided.
Referring to fig. 5, fig. 5 is a schematic diagram of the refinement flow of step S320 in a fourth embodiment of the robot control method according to the present application. In this embodiment, step S320 in the third embodiment includes:
Step S321, obtaining and storing an adjacency matrix of the connection relationships of the edges.
In this embodiment, the adjacency matrix represents the adjacency relationships between vertices. Given a vertex set V and an edge set E, a one-dimensional array stores all the vertex data of the graph and a two-dimensional array stores the relationships (edges or arcs) between vertices; this two-dimensional array is called the adjacency matrix. The adjacency matrix adopted in the present application is A = [a_ij], where a_ij is the connection relation between two adjacent virtual pilots: 1 means connected and 0 means not connected. The adjacency matrix is determined from the connection relationships of the edges; it is symmetric and its main diagonal is necessarily zero, so it is the adjacency matrix of an undirected graph.
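Building and storing such a symmetric, zero-diagonal adjacency matrix can be sketched as follows. The ring topology used as an example is an assumption; the patent does not specify which pilots are connected.

```python
def adjacency_from_edges(num_nodes, edges):
    """Build the symmetric adjacency matrix A = [a_ij] of an undirected
    graph: a_ij = 1 if pilots i and j are connected, 0 otherwise.
    The main diagonal stays zero, as required for an undirected graph."""
    A = [[0] * num_nodes for _ in range(num_nodes)]
    for i, j in edges:
        A[i][j] = 1
        A[j][i] = 1  # undirected: keep the matrix symmetric
    return A

# Assumed example: four virtual pilots connected in a ring (0-1-2-3-0).
A = adjacency_from_edges(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```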
In step S322, a metric matrix is obtained based on the adjacency matrix.
In this embodiment, the metric matrix is D = diag(|N_1|, |N_2|, ..., |N_N|), where |N_i| is the degree of node i. In an undirected graph, the degree of any vertex i is the number of non-zero elements in the i-th column (or the i-th row) of the adjacency matrix; in a directed graph, the out-degree of vertex i is the number of non-zero elements in the i-th row and the in-degree is the number of non-zero elements in the i-th column. The metric matrix therefore takes one value for a formation of four underwater robots, changes when the formation dynamically becomes three underwater robots, and changes again when the formation dynamically becomes two underwater robots.
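The metric matrix can be computed directly from the adjacency matrix by counting the non-zero elements per row. In this sketch the ring topologies standing in for the four- and two-robot formations are assumptions, not the patent's specified topologies.

```python
def degree_matrix(A):
    """Diagonal metric (degree) matrix D = diag(|N_1|, ..., |N_N|):
    entry i is the number of non-zero elements in row i of the
    adjacency matrix, i.e. the number of neighbours of pilot i."""
    n = len(A)
    D = [[0] * n for _ in range(n)]
    for i in range(n):
        D[i][i] = sum(1 for a in A[i] if a != 0)
    return D

def ring_adjacency(n):
    """Assumed example topology: n pilots connected in a ring."""
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        A[i][(i + 1) % n] = A[(i + 1) % n][i] = 1
    return A

# Metric matrices as the formation dynamically changes size.
D4 = degree_matrix(ring_adjacency(4))  # every pilot has two neighbours
D2 = degree_matrix(ring_adjacency(2))  # a two-robot "ring" is a single edge
```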
step S323, generating a formation coordination control signal of the virtual pilot according to the metric matrix and combining the gain matrix.
In this embodiment, the gain matrix is used to coordinate and control the virtual pilots and is K_c = diag(k_{c,1}, k_{c,2}, ..., k_{c,N}). From the metric matrix and the gain matrix, the formation coordination control signal of each virtual pilot is computed with the formula v_{r,i}(t) = [-K_c(D - A)γ]_i, where v_{r,i}(t) is the formation coordination control signal acting on the virtual pilot of the i-th underwater robot and γ is the vector of guidance variables. Each virtual pilot's movement is controlled according to its formation coordination control signal so as to avoid collisions.
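The quantity (D − A) is the graph Laplacian, so the coordination signal is a consensus term that pulls each pilot's guidance variable toward those of its neighbours. A minimal sketch, with an assumed ring topology and unit gains:

```python
def coordination_signals(A, gains, gamma):
    """Formation coordination signal v_r,i = [-K_c (D - A) gamma]_i.
    (D - A) is the graph Laplacian of the communication topology, so
    each signal pushes pilot i's guidance variable gamma_i toward the
    values held by its neighbours."""
    n = len(A)
    v = []
    for i in range(n):
        degree = sum(1 for a in A[i] if a != 0)
        laplacian_row = degree * gamma[i] - sum(A[i][j] * gamma[j] for j in range(n))
        v.append(-gains[i] * laplacian_row)
    return v

# Assumed ring of four pilots; pilot 0 lags behind its neighbours.
A = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
v = coordination_signals(A, gains=[1.0] * 4, gamma=[0.0, 0.5, 0.5, 0.5])
```

Here the lagging pilot 0 receives a positive correction while its neighbours receive a small negative one, which is exactly the collision-avoiding synchronisation behaviour the text describes.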
Because the adjacency matrix of the connection relationships of the edges is acquired and stored, the metric matrix is obtained from the adjacency matrix, and the formation coordination control signal of each virtual pilot is generated from the metric matrix combined with the gain matrix, the collision problem of underwater robots during formation motion in the prior art is solved: the motion of each virtual pilot is controlled by its formation coordination control signal so as to avoid collisions.
Referring to fig. 6, fig. 6 is a schematic diagram of the refinement flow of step S130 in a fifth embodiment of the robot control method according to the present application. In this embodiment, step S130 in the first embodiment includes:
step S131, obtaining path tracking errors of the underwater robot and the virtual pilot.
In this embodiment, the position error of the underwater robot is obtained from the difference between the current position of the underwater robot and the current position of its virtual pilot, the synchronization error between the underwater robots is obtained while the formation is held, and the path tracking error of each underwater robot is formed from the position error and the synchronization error. The path tracking error between the underwater robot and the virtual pilot is defined with respect to a reference point ε = [ε_1, ε_2]^T in the hull coordinate system of the i-th underwater robot.
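The patent's explicit error equation did not survive text extraction, so the sketch below uses a common formulation as an assumption: the pilot-minus-robot offset rotated into the robot's hull frame, measured relative to the reference point ε = [ε_1, ε_2]^T.

```python
import math

def path_tracking_error(robot_xy, robot_heading, pilot_xy, epsilon=(0.0, 0.0)):
    """Pilot-minus-robot offset expressed in the robot's hull frame,
    relative to a reference point epsilon = [eps1, eps2]^T.
    This is a common path-following error form used here as an
    illustrative assumption, not the patent's exact equation."""
    dx = pilot_xy[0] - robot_xy[0]
    dy = pilot_xy[1] - robot_xy[1]
    c, s = math.cos(robot_heading), math.sin(robot_heading)
    # Rotate the world-frame offset into the hull frame, then
    # subtract the reference point.
    e1 = c * dx + s * dy - epsilon[0]
    e2 = -s * dx + c * dy - epsilon[1]
    return (e1, e2)

# Robot at the origin heading along +x; pilot 3 m ahead and 1 m to port.
err = path_tracking_error((0.0, 0.0), 0.0, (3.0, 1.0))
```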
Step S132, processing the tracking error with a first-order sliding mode control method to obtain the control law of the underwater robot.
In this embodiment, first-order sliding mode control is a nonlinear control method that can change purposefully and continuously during the dynamic process according to the current state of the system, forcing the system to move along a predetermined state trajectory. It is implemented by designing a dynamic nonlinear sliding surface equation, where the sliding surface is composed of the error between the current position of the underwater robot and the current position of the virtual pilot, i.e. the path tracking error. The path tracking control law of the i-th underwater robot, designed with first-order sliding mode control, is u_i = [v_{f,i}, ω_i]^T, where v_{f,i} and ω_i are respectively the forward speed command and the heading rate command of the underwater robot; the control law is given by the following formula:
Here ρ_i is a scalar factor that guarantees the robustness of the path tracking controller against external disturbances and low-level motion control deviations; Δ is given by the following equation, and Δ^† is the pseudo-inverse of Δ:
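The patent's control-law and Δ equations are given in the original formulas and are not reproduced here; purely as an illustrative assumption, a generic first-order sliding mode step of the kind described (an error-driven sliding surface plus a switching term scaled by a robustness factor like ρ_i) can be sketched as:

```python
import math

def sliding_mode_command(error, error_rate, lam=1.0, rho=0.5):
    """Generic first-order sliding mode step: define a sliding surface
    s = error_rate + lam * error, then apply a switching term scaled
    by the scalar robustness factor rho (called rho_i in the text).
    tanh() stands in for sign() to soften chattering near s = 0.
    This is a textbook sketch, not the patent's control law."""
    s = error_rate + lam * error
    return -rho * math.tanh(s)

# A positive tracking error yields a negative, corrective command.
u = sliding_mode_command(error=0.4, error_rate=0.0)
```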
and step S133, controlling the underwater robot to move according to the motion trail according to the control law.
In the present embodiment, a low-level motion controller is designed from the obtained control input u_i to track the forward speed command and the heading rate command. The forward speed command adopts an incremental PID control with speed feedback, as follows:

u_v = u_0 + Δu

e_v = v_t - v_c

where u_0 is the control command that balances the commanded speed, and e_v is the error between the commanded speed v_t and the current speed v_c. The heading rate command adopts a PD controller with angular rate feedback, as follows:

e_ω = ω_t - ω_c
The underwater robot is therefore controlled to move along the motion trajectory of its virtual pilot according to the forward speed command and the heading rate command.
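The two low-level loops above can be sketched as follows. The specific gains and the form of the increment Δu are assumptions (a standard velocity-form PID); the patent's own Δu equation is given in the original formula.

```python
class IncrementalSpeedPID:
    """Incremental PID on forward speed: u_v = u_0 + delta_u, driven by
    e_v = v_t - v_c. The increment uses the standard velocity-form PID,
    an assumed stand-in for the patent's own delta-u equation."""

    def __init__(self, kp, ki, kd, u0=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u = u0          # starts at the balancing command u_0
        self.e_prev = 0.0
        self.e_prev2 = 0.0

    def step(self, v_target, v_current):
        e = v_target - v_current
        delta_u = (self.kp * (e - self.e_prev)
                   + self.ki * e
                   + self.kd * (e - 2.0 * self.e_prev + self.e_prev2))
        self.e_prev2, self.e_prev = self.e_prev, e
        self.u += delta_u    # incremental update: u_v = u_0 + sum of delta_u
        return self.u

def heading_pd(omega_target, omega_current, omega_rate, kp=2.0, kd=0.3):
    """PD loop with angular-rate feedback on e_omega = omega_t - omega_c."""
    e = omega_target - omega_current
    return kp * e - kd * omega_rate

pid = IncrementalSpeedPID(kp=1.0, ki=0.2, kd=0.05)
u1 = pid.step(v_target=1.5, v_current=0.0)   # first correction toward 1.5 m/s
w1 = heading_pd(omega_target=1.0, omega_current=0.0, omega_rate=0.0)
```

The incremental form only ever adds corrections to the previous command, which avoids integral wind-up on long transients; the rate-feedback term in the PD loop damps heading oscillation.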
In summary, the path tracking error between the underwater robot and the virtual pilot is obtained, the tracking error is processed with the first-order sliding mode control method to obtain the control law of the underwater robot, and the underwater robot is controlled by that control law to move along the motion trajectory.
Based on the same inventive concept, the embodiments of the present application further provide a computer program product. The computer program product includes a robot control program which, when executed by a processor, implements the steps of the robot control method described above and achieves the same technical effects; to avoid repetition, the description is not repeated here.
Since the computer program product provided by the embodiment of the present application is a computer program product adopted for implementing the method of the embodiment of the present application, based on the method described in the embodiment of the present application, a person skilled in the art can understand the specific structure and modification of the computer program product, and therefore, the description thereof is omitted herein. All computer program products used in the methods of the embodiments of the present application are within the intended scope of the present application.
Based on the same inventive concept, the embodiments of the present application further provide a storage medium. The storage medium stores a robot control program which, when executed by a processor, implements the steps of the robot control method described above and achieves the same technical effects; to avoid repetition, the description is not repeated here.
Because the storage medium provided by the embodiment of the present application is a storage medium used for implementing the method of the embodiment of the present application, based on the method introduced by the embodiment of the present application, a person skilled in the art can understand the specific structure and the modification of the storage medium, and therefore, the description thereof is omitted herein. All storage media adopted by the method of the embodiment of the application belong to the scope of protection of the application.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (7)

1. A method of controlling a robot, the method comprising:
acquiring preset path parameters of each virtual navigator and motion information of a non-cooperative target, wherein the path parameters comprise the radius of each virtual navigator's motion track, identification information of the virtual navigator, and the number of robots in the formation in which the virtual navigator is located, the virtual navigator corresponding to an underwater robot;
obtaining the position of each virtual navigator under the target coordinate system according to a first calculation formula, wherein the first calculation formula is as follows:
wherein i is the identification information of the i-th virtual navigator, R_d is the radius of the motion trajectory of the i-th virtual navigator, γ_i is the guidance variable, φ_i is the formation phase of the formation in which the i-th virtual navigator is located, with φ_i = (i-1)·2·π/N_Z, and N_Z is the number of robots in the formation in which the i-th virtual navigator is located;
determining the current position of each virtual navigator according to the position of each virtual navigator in a target coordinate system and the motion information of the non-cooperative target, wherein the motion information of the non-cooperative target comprises: the speed of motion and angular velocity of the non-cooperative target;
generating a preset path prediction model according to the current position of each virtual navigator;
acquiring motion environment parameters of each underwater robot, the number of robots in the formation in which each underwater robot is located, and motion information of the non-cooperative target, wherein the motion environment parameters comprise the radius of a motion track;
determining the motion trail of the underwater robot according to the motion environment parameters, the number of robots in formation, the motion information of the non-cooperative targets and a preset path prediction model;
and controlling the underwater robot to move according to the motion trail.
2. The robot control method of claim 1, wherein the step of determining the current position of each virtual pilot based on the position of each virtual pilot in the target coordinate system and the motion information of the non-cooperative target comprises:
determining the current position of each virtual navigator according to a second calculation formula, wherein the second calculation formula is as follows:
wherein p_{d,i}(γ_i, t) is the expected path of the virtual pilot of the i-th underwater robot in the earth coordinate system, p_t is the current position of the non-cooperative target, R_t is the coordinate transformation matrix between the earth coordinate system and the hull coordinate system of the non-cooperative target, S(ω_t) is a second-order antisymmetric matrix, ω_t is the angular velocity of the non-cooperative target, the prime symbol denotes differentiation with respect to γ_i, and v_t is the speed of the non-cooperative target.
3. The robot control method of claim 1, wherein after determining the current location of each virtual pilot, further comprising:
constructing a communication topological relation among each virtual navigator;
generating a formation coordination control signal of each virtual navigator based on the communication topological relation;
and controlling each virtual pilot to move according to the formation coordination control signal.
4. The robot control method of claim 3, wherein the generating the formation coordination control signal for each virtual pilot based on the communication topology comprises:
obtaining and storing an adjacency matrix A = [a_ij] of the connection relationships of the edges, wherein a_ij is the connection relation between two adjacent virtual pilots;

obtaining a metric matrix based on the adjacency matrix, wherein the metric matrix is D = diag(|N_1|, |N_2|, ..., |N_N|) and N is the number of robots in the formation in which the i-th virtual pilot is located;

generating a formation coordination control signal of the virtual pilot according to the metric matrix in combination with a gain matrix K_c = diag(k_{c,1}, k_{c,2}, ..., k_{c,N}).
5. The robot control method according to claim 1, wherein the controlling the underwater robot to move according to the motion profile includes:
acquiring path tracking errors of the underwater robot and the virtual pilot;
processing the tracking error by adopting a first-order sliding mode control method to obtain a control law of the underwater robot;
and controlling the underwater robot to move according to the motion trail according to the control law.
6. An underwater robot comprising a memory, a processor, and a robot control program stored in the memory and executable on the processor, which when executed by the processor, implements the robot control method according to any one of claims 1-5.
7. A storage medium storing a robot control program which, when executed by a processor, implements the robot control method according to any one of claims 1-5.
CN202110575994.XA 2021-05-25 2021-05-25 Robot control method, robot, computer program product, and storage medium Active CN113448338B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110575994.XA CN113448338B (en) 2021-05-25 2021-05-25 Robot control method, robot, computer program product, and storage medium


Publications (2)

Publication Number Publication Date
CN113448338A CN113448338A (en) 2021-09-28
CN113448338B (en) 2023-11-28


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111522351A (en) * 2020-05-15 2020-08-11 中国海洋大学 Three-dimensional formation and obstacle avoidance method for underwater robot


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multi-robot formation control method based on multiple virtual leaders; Wang Qinzhao et al.; Journal of Academy of Armored Force Engineering; Vol. 31, No. 5, pp. 49-54 *
A new method for multi-AUV formation control; Wu Xiaoping et al.; Ship Science and Technology; 30(2), pp. 128-134 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant