CN116382328A - Dam intelligent detection method based on cooperation of multiple robots in water and air - Google Patents
Dam intelligent detection method based on cooperation of multiple robots in water and air
- Publication number: CN116382328A (application CN202310224181.5A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
- G — Physics
- G05 — Controlling; regulating
- G05D — Systems for controlling or regulating non-electric variables
- G05D1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10 — Simultaneous control of position or course in three dimensions
- G05D1/101 — Simultaneous control of position or course in three dimensions specially adapted for aircraft
- G05D1/106 — Change initiated in response to external conditions, e.g. avoidance of elevated terrain or of no-fly zones
Abstract
The invention provides an intelligent dam detection method based on the cooperation of multiple robots in water and air, belonging to the technical field of multi-robot cooperation. It solves the problems of real-time detection and handling of underwater faults and the narrow field of view of the underwater robot encountered during manual inspection of a dam body. The technical solution comprises the following steps: S1, the underwater robots track the unmanned aerial vehicles; S2, after the unmanned aerial vehicles find faults, the faults are distributed among groups of unmanned aerial vehicles and underwater robots, one group per fault; S3, each group reaches its designated fault position through robot path planning; S4, after reaching the designated position, the underwater robot submerges to perform fault detection; S5, after the detection is finished, the underwater robot floats to the surface again. The beneficial effect of the invention is that multiple robot systems perform fault detection on multiple fault points simultaneously, improving the efficiency of fault detection.
Description
Technical Field
The invention relates to the technical field of multi-robot cooperation, and in particular to an intelligent dam detection method based on the cooperation of multiple robots in water and air.
Background
According to statistics, many reservoirs currently in service were built in the 20th century, and fault detection of their dam bodies must be performed by professional divers. Because the underwater environment is complex, with frequent undercurrents and debris and low underwater visibility, the safety of these professionals cannot be well guaranteed. Underwater faults are usually handled by bringing recorded data, pictures and other information back to shore, so they cannot be inspected and processed in real time.
Because of the narrow field of view of an underwater robot, faults and the condition of the dam cannot be observed in time. Investigation shows that faults such as piping, cracks and deformation commonly exist in dam bodies. Piping generally occurs in sandy soil: fine particles in the soil are carried away by the water flow, and once a leakage channel forms, the dike or dam can collapse. When the water level is relatively high, the muddy springs that appear on the downstream slope of a river dike or reservoir dam may carry air bubbles. Piping develops gradually, so danger can be avoided as long as it is found and handled in time. In many cases of cracks and deformation, the dam becomes unstable because its underwater part is continuously scoured. Therefore, an intelligent dam detection method based on the cooperation of multiple robots in water and air is proposed to inspect the dam.
Disclosure of Invention
The invention aims to provide an intelligent dam detection method based on the cooperation of multiple robots in water and air, which solves the problems of safety during traditional manual inspection of a dam, real-time detection and handling of underwater faults, and the narrow field of view of underwater robots.
To achieve this aim, the invention adopts the following technical scheme: an intelligent dam detection method based on water-air multi-robot cooperation, comprising the following steps:
s1: a plurality of unmanned aerial vehicles form up in the air and pilot at a certain speed, while a plurality of underwater robots float on the water surface and track the unmanned aerial vehicles;
s2: after the unmanned aerial vehicles find faults, the faults are distributed among groups of unmanned aerial vehicles and underwater robots, one group per fault;
s3: the unmanned aerial vehicles and underwater robots assigned tasks leave the formation and reach the designated fault positions through robot path planning, while the other unmanned aerial vehicles and underwater robots continue to search for faults;
s4: after reaching the designated position, the underwater robot submerges to perform fault detection while the unmanned aerial vehicle hovers and waits for the detection to finish;
s5: after the detection is finished, the underwater robot floats to the surface again, and the unmanned aerial vehicle and underwater robot return to the formation to continue searching.
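The five-step scheme above can be sketched as a simple coordination loop. This is a minimal illustration only: the class, function names and pairing structure below are assumptions made for the sketch, not part of the invention.

```python
from dataclasses import dataclass

@dataclass
class RobotPair:
    """A UAV paired with an underwater robot (names are illustrative)."""
    uav_id: int
    busy: bool = False

def run_mission(pairs, find_faults, assign, inspect):
    """Sketch of steps S1-S5: patrol, assign faults, inspect, rejoin."""
    reports = []
    faults = find_faults()                      # S1-S2: formation patrol detects faults
    for pair, fault in assign(pairs, faults):   # S2: one pair per fault (e.g. KM matching)
        pair.busy = True                        # S3: the pair leaves the formation
        reports.append(inspect(pair, fault))    # S4: underwater robot submerges and inspects
        pair.busy = False                       # S5: resurface and rejoin the formation
    return reports
```

The `assign` callback is where the KM task allocation of step S2 would plug in; here any pairing function works.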
Further, the step S1 includes the following steps:
s1.1, the underwater robots and unmanned aerial vehicles are controlled by a fuzzy PID algorithm; this dynamic control algorithm lets the multi-vehicle formation adapt to various dam environments while piloting at a certain speed.
S1.2, unmanned aerial vehicle formation is controlled by adopting a Leader-Follower algorithm.
The leader-follower formation algorithm sets one or more unmanned aerial vehicles as piloting (leader) unmanned aerial vehicles and the remaining ones as following unmanned aerial vehicles, which perform flight control at the yaw angle and distance required by the formation. The Leader-Follower formation control graph is represented by a directed acyclic graph F = (V, E, D). Here V = {v_1, ..., v_N} is a finite set of N vertices, each designating one control system ẋ_i = f_i(t, x_i, u_i), where x_i ∈ R^n represents the position information of unmanned aerial vehicle i and u_i ∈ R^m represents its control input during movement. The edge set E ⊆ V × V represents the relationship between piloting and following unmanned aerial vehicles: if u_i depends on the state x_j of unmanned aerial vehicle j, then (v_j, v_i) ∈ E. The control targets of the vertices v_i ∈ V are specified by the set D.
For a follower vertex v_j, the set of piloting unmanned aerial vehicles connected to it can be written L_j ⊆ V; in the formation, a vertex v_i with yaw angle 0 is a piloting unmanned aerial vehicle, v_i ∈ L_F. Because the vertices in L_F have no incoming edges, a piloting unmanned aerial vehicle does not need to receive state feedback from the following unmanned aerial vehicles and only needs to move along the route given by the program. Meanwhile, the unmanned aerial vehicle formation is dynamically corrected and performs operations such as obstacle avoidance so that the formation can fly normally.
S1.3, the underwater robot formation floats on the water surface and adopts the YoloX algorithm. The underwater robot collects image information through a camera and performs feature extraction through a feature extraction network, which mainly comprises a cross-stage partial (CSP) network structure and a spatial pyramid pooling (SPP) structure. The CSP structure increases the depth of the network and improves its feature extraction capability, while the SPP structure enlarges the receptive field of the network by pooling at different scales, so that more feature information can be fused. After feature extraction, the features are fused by an FPN structure in the Neck. Three decoupled heads at the prediction end perform target-box prediction to obtain the detection result, and the underwater robot tracks the multi-unmanned-aerial-vehicle formation according to the coordinates returned by detection.
Further, the step S2 includes the following steps:
after a plurality of faults are found by the unmanned aerial vehicles, the problem of task allocation of the fault points is converted into the problem of node allocation of 1 to 1 by adopting a KM algorithm, the unmanned aerial vehicles are respectively allocated to the nearest fault nodes, namely, the distance between each unmanned aerial vehicle and the fault point is used as an index, and the nearest tasks are allocated to each unmanned aerial vehicle through the KM algorithm so as to obtain the optimal task allocation.
Further, the step S3 includes the following steps:
the unmanned plane and the underwater robot distributed to the tasks are separated from the formation in pairs, the unmanned plane carries out path planning through the coordinates of the fault points and the coordinates of the unmanned plane, the D algorithm is adopted to carry out path planning, in the process of carrying out path planning, the D algorithm meets the obstacle, the path planning is carried out again, the dynamic stability of the system is improved, the unmanned plane safely reaches the appointed fault position, and the underwater robot follows the unmanned plane to reach the target place through the YoloX algorithm.
Compared with the prior art, the invention has the beneficial effects that:
(1) The water-air multi-robot detection system integrates and processes fault data through an onboard computer (e.g. Nano, NUC, TX2) in a centralized communication mode and transmits it to a ground base station; operators only need to observe and act on land, which avoids the danger to life posed by underwater algae or undercurrents during manual underwater inspection.
(2) Unmanned aerial vehicles are used for fault searching, formed up through the Leader-Follower algorithm and the fuzzy PID control algorithm. The wide field of view of the unmanned aerial vehicle formation overcomes the difficulty of visual detection by underwater robots in a complex underwater environment; compared with a single underwater robot, the formation has a wider search field of view for fault detection and an improved search range.
(3) The underwater robot floats on the water surface and uses the YoloX algorithm to detect and track the unmanned aerial vehicle to the target point, which avoids the path-planning complexity caused by underwater obstacles such as aquatic plants, fish and stones, as well as the influence of surface waves and floating debris on on-water path planning.
(4) The water-air multi-robot system can perform fault detection on multiple fault points simultaneously: tasks are distributed by the KM algorithm using the distances to the fault points, each unmanned aerial vehicle plans a path with the D* algorithm to fly to its target fault point, and the underwater robot processes fault data underwater in real time through its onboard computer, which improves the efficiency of fault detection.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
Fig. 1 is a schematic overall flow diagram of a dam intelligent detection method based on cooperation of a plurality of robots in water and air.
FIG. 2 is a block diagram of an intelligent dam detection system according to the present invention.
FIG. 3 is a block diagram of the fuzzy PID control in the present invention.
Fig. 4 is a diagram showing a centralized communication control structure in the present invention.
Fig. 5 is a diagram of experimental results of unmanned aerial vehicle formation in the present invention.
FIG. 6 is a diagram of the network structure of the YoloX algorithm in the present invention.
Fig. 7 is the weighted bipartite graph of the KM algorithm in the present invention.
Fig. 8 is the optimal assignment diagram of the KM algorithm in the present invention.
Fig. 9 is a graph of experimental results of the D* algorithm in the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. Of course, the specific embodiments described herein are for purposes of illustration only and are not intended to limit the invention.
Example 1
Referring to figs. 1 to 9, the technical scheme provided by this embodiment is an intelligent dam detection method based on water-air multi-robot cooperation; the system structure diagram is shown in fig. 2, and the method comprises the following steps:
s1: forming a plurality of unmanned aerial vehicles in the air, maintaining a certain speed for piloting, floating the plurality of underwater robots on the water surface, and tracking the unmanned aerial vehicles;
Furthermore, the underwater robots and unmanned aerial vehicles are controlled through the fuzzy PID algorithm, so that the multi-vehicle formation can adapt to various dam environments while piloting at a certain speed.
Referring to fig. 3, fig. 3 is a block diagram of the fuzzy PID control, which comprises fuzzification, fuzzy-rule inference and defuzzification. The unmanned aerial vehicle and the underwater robot determine the deviation E of the current speed and position from the target and the change EC between the current and previous deviations, fuzzify E and EC, perform fuzzy reasoning according to the given fuzzy rules, look up the membership of the output value in the fuzzy rule table, and finally defuzzify the fuzzy parameters to output PID control parameters. The underwater robot and the unmanned aerial vehicle feed these parameters into a PID controller so as to adapt to the control requirements of various conditions.
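A minimal sketch of the fuzzify → rule lookup → defuzzify pipeline described above, here adjusting only the proportional gain. The membership thresholds, rule table and all gains are illustrative assumptions; a real controller would tune all three PID parameters via the fuzzy rules.

```python
def fuzzify(x, scale):
    """Map a crisp value to a coarse linguistic label: N(egative), Z(ero), P(ositive)."""
    if x < -0.1 * scale:
        return "N"
    if x > 0.1 * scale:
        return "P"
    return "Z"

# Illustrative rule table: how much to nudge Kp given the (E, EC) labels.
KP_RULES = {("N", "N"): +0.4, ("N", "Z"): +0.2, ("N", "P"): 0.0,
            ("Z", "N"):  0.0, ("Z", "Z"): -0.2, ("Z", "P"): 0.0,
            ("P", "N"):  0.0, ("P", "Z"): +0.2, ("P", "P"): +0.4}

class FuzzyPID:
    def __init__(self, kp=1.0, ki=0.3, kd=0.05, scale=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.scale = scale
        self.integral = 0.0
        self.prev_e = 0.0

    def step(self, setpoint, measured, dt):
        e = setpoint - measured                 # deviation E in the text
        ec = (e - self.prev_e) / dt             # change of deviation EC in the text
        # Fuzzify E and EC, look up the rule table, "defuzzify" into a crisp Kp.
        kp = self.kp + KP_RULES[(fuzzify(e, self.scale), fuzzify(ec, self.scale))]
        self.integral += e * dt
        self.prev_e = e
        return kp * e + self.ki * self.integral + self.kd * ec
```

Driving a simple first-order plant (`x += dt * u`) with this controller converges to the setpoint, which is all the sketch is meant to show.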
Furthermore, unmanned aerial vehicle formation adopts a Leader-Follower algorithm to carry out formation control.
Referring to fig. 4, fig. 4 is a diagram of the centralized communication control structure: each unmanned aerial vehicle in the formation transmits its own state information through the communication network to a ground console for centralized processing, the Leader-Follower formation algorithm computes a control instruction for each unmanned aerial vehicle, and the ground console then transmits the instructions back to the vehicles.
The leader-follower formation algorithm is defined as follows: one or more unmanned aerial vehicles are set as piloting (leader) unmanned aerial vehicles and the remaining ones as following unmanned aerial vehicles, which perform flight control at the yaw angle and distance required by the formation. The Leader-Follower formation control graph is represented by a directed acyclic graph F = (V, E, D). Here V = {v_1, ..., v_N} is a finite set of N vertices, each designating one control system ẋ_i = f_i(t, x_i, u_i), where x_i ∈ R^n represents the position information of unmanned aerial vehicle i and u_i ∈ R^m represents its control input during movement. The edge set E ⊆ V × V represents the relationship between piloting and following unmanned aerial vehicles: if u_i depends on the state x_j of unmanned aerial vehicle j, then (v_j, v_i) ∈ E. The control targets of the vertices v_i ∈ V are represented by the set D.
For a follower vertex v_j, the set of piloting unmanned aerial vehicles connected to it can be written L_j ⊆ V; in the formation, a vertex v_i with yaw angle 0 is a piloting unmanned aerial vehicle, v_i ∈ L_F. Because the vertices in L_F have no incoming edges, a piloting unmanned aerial vehicle does not need to receive state feedback from the following unmanned aerial vehicles and only needs to move along the route given by the program. Meanwhile, the unmanned aerial vehicle formation is dynamically corrected and performs operations such as obstacle avoidance so that it can fly normally; the resulting T-shaped formation is shown in the experimental result diagram of fig. 5.
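Under the assumption of simple first-order kinematics, the leader-follower scheme above can be sketched as each follower steering toward the leader's position plus a fixed formation offset (the offset playing the role of the target set D). All gains, offsets and the T shape below are illustrative, not the patent's controller.

```python
def follower_step(follower_pos, leader_pos, desired_offset, gain=0.5):
    """One control update for a follower: move a fraction of the way toward
    (leader position + desired offset)."""
    tx = leader_pos[0] + desired_offset[0]
    ty = leader_pos[1] + desired_offset[1]
    return (follower_pos[0] + gain * (tx - follower_pos[0]),
            follower_pos[1] + gain * (ty - follower_pos[1]))

# A tiny "T"-shaped formation (as in the experiment figure): leader ahead,
# three followers on a bar behind it. follower id -> offset from the leader.
offsets = {1: (-2.0, 0.0), 2: (-2.0, 2.0), 3: (-2.0, -2.0)}

leader = (0.0, 0.0)
followers = {1: (1.0, 1.0), 2: (0.0, 0.0), 3: (-5.0, 3.0)}
for _ in range(30):
    leader = (leader[0] + 0.1, leader[1])       # leader flies its programmed route
    followers = {i: follower_step(p, leader, offsets[i])
                 for i, p in followers.items()}
```

With a moving leader the followers settle into the offsets with a small constant lag along the direction of travel, as expected for a purely proportional follower.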
Furthermore, the underwater robot formation floats on the water surface and tracks the target using the YoloX algorithm.
Referring to fig. 6, fig. 6 is the network structure diagram of the YoloX algorithm. The underwater robot collects image information through a camera and performs feature extraction through a feature extraction network, which mainly comprises a cross-stage partial (CSP) network structure and a spatial pyramid pooling (SPP) structure. The CSP structure increases the depth of the network and improves its feature extraction capability, while the SPP structure enlarges the receptive field of the network by pooling at different scales so that more feature information can be fused. After feature extraction, the features are fused by an FPN structure in the Neck, and three decoupled heads at the prediction end perform target-box prediction.
To suit the edge devices used in practice, the system uses the YoloX-tiny network structure. The data set is Det-Fly, in which a camera directly captures a target unmanned aerial vehicle in the air; it contains multiple unmanned aerial vehicle poses seen from below, from above and at eye level.
On Det-Fly, the model is trained for 150 epochs in total using stochastic gradient descent, with the first 5 epochs used for warm-up. Because model training is very unstable at the beginning, the learning rate must start very low to ensure good convergence of the network, but this makes training very slow; training therefore warms up first, and after a period of training the learning rate is slowly reduced. During training, a cosine learning-rate schedule is used with the learning rate set to lr × batch size / 8, initial lr = 0.01, weight decay 0.0005, SGD momentum 0.9, and batch size 4. The input size is sampled uniformly from 448 to 832 in steps of 32. FPS and latency were measured with FP16 precision on an NVIDIA 3070 with batch size = 1.
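The warm-up-then-cosine schedule described above can be written down as follows. The linear warm-up shape is an assumption (the text only says the learning rate starts very low and is later reduced slowly); the base rate follows the stated lr × batch size / 8 rule.

```python
import math

def learning_rate(step, total_steps, warmup_steps, base_lr):
    """Linear warm-up from near zero, then cosine decay down to zero."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps          # warm-up phase
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```

With the settings in the text (lr = 0.01, batch size 4), base_lr = 0.01 × 4 / 8 = 0.005, rising over 5 warm-up epochs and decaying over the remaining 145.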
In addition, the data set is augmented with random horizontal flipping, color jitter and multi-scale data augmentation; in YoloX, Mosaic and MixUp are added to the augmentation strategy to improve performance, but both are turned off for the last 15 epochs.
After a detection result is obtained through a YoloX-tiny algorithm, the underwater robot tracks a multi-unmanned aerial vehicle formation according to the coordinates returned by detection.
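How the returned detection coordinates might drive tracking can be sketched as a proportional controller that keeps the detected unmanned aerial vehicle centred in the camera image. The bounding-box format, gain and command names here are assumptions for illustration, not the patent's interface.

```python
def track_command(bbox, frame_w, frame_h, k=0.002):
    """Turn a detection box (x, y, w, h in pixels) into a surface-robot
    command that steers toward the detected UAV's image position."""
    x, y, w, h = bbox
    cx, cy = x + w / 2.0, y + h / 2.0
    err_x = cx - frame_w / 2.0      # > 0: target is right of the image centre
    err_y = cy - frame_h / 2.0      # > 0: target is low in the image
    return {"yaw_rate": k * err_x, "surge": -k * err_y}
```

A centred detection yields zero commands; an off-centre one yields a proportional correction, which is the behaviour the tracking step relies on.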
S2: after the unmanned aerial vehicles find faults, the faults are distributed among groups of unmanned aerial vehicles and underwater robots, one group per fault;
further, after a plurality of faults are found by the unmanned aerial vehicles, the problem of task allocation of the fault points is converted into the problem of node allocation of 1 to 1 by adopting a KM algorithm, the unmanned aerial vehicles are respectively allocated to the nearest fault nodes, namely, the distance between each unmanned aerial vehicle and the fault point is used as an index, and the nearest tasks are allocated to each unmanned aerial vehicle through the KM algorithm, so that optimal task allocation is obtained.
Referring to fig. 7, fig. 7 is the weighted bipartite graph of the KM algorithm: with the distance from each unmanned aerial vehicle to each fault point as initial data, and the coordinates of the current point and the target point as start and end, the weighted bipartite graph of distances is drawn.
Referring to fig. 8, fig. 8 is the optimal assignment diagram of the KM algorithm; converting the weighted bipartite graph into a matrix yields the following 5×5 cost matrix.
Subtract the row minimum from each row of the matrix, then the column minimum from each column, and cover all zeros in the matrix with the fewest possible lines. If the number of lines equals the order of the matrix, stop: the optimal matching has been found. Otherwise continue: find the smallest element not covered by any line, call it k, subtract k from all uncovered elements, and add k wherever two lines cross. Eventually the matrix requires 5 covering lines; the optimal matching is then read off by selecting zero entries, deleting the row and column of each chosen zero to obtain the corresponding matching relationship. This finally yields the optimal assignment diagram of the KM algorithm shown in fig. 8.
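The KM (Kuhn-Munkres) algorithm solves this one-to-one assignment problem in O(n³); for the handful of fault points considered here, an exhaustive search over permutations returns the same optimal assignment and makes the objective (minimum total UAV-to-fault distance) explicit. A minimal sketch, not the matrix-reduction procedure itself:

```python
from itertools import permutations

def best_assignment(cost):
    """Optimal 1-to-1 assignment minimising total cost, where cost[i][j] is
    the distance from UAV i to fault point j. KM solves this in O(n^3);
    brute force over permutations gives the same answer for small n."""
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best:
            best, best_perm = total, perm
    return best, best_perm
```

`best_perm[i]` is the fault point assigned to UAV i; production code would swap the brute force for a Hungarian/KM implementation such as `scipy.optimize.linear_sum_assignment`.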
S3: the unmanned aerial vehicles and underwater robots assigned tasks leave the formation and reach the designated fault positions through robot path planning, while the other unmanned aerial vehicles and underwater robots continue to search for faults;
further, the unmanned plane and the underwater robot distributed to the tasks are separated from the formation in pairs, and the unmanned plane adopts a D algorithm to conduct path planning through the coordinates of the fault points and the coordinates of the unmanned plane. First, the states of all nodes are set to NEW, the shortest path cost from the target point to each grid is set to 0, and the target point is put into the OPEN table. And then, continuously calling the process state optimization function until an optimal path is found and moving according to the optimal path until the target point is reached or the cost is changed. And finally, when a dynamic obstacle is encountered, modifying the cost, putting the node which encounters the obstacle into the OPEN again, searching for nodes nearby the node, and when a dynamic fault occurs in the map, changing the state of X to cause h (X) to change, wherein the k value takes the minimum h value before and after the change. And knowing the shortest path cost from the minimum value of k to each grid by calling a function, and transmitting the change of the cost to Y, thereby re-planning the optimal path.
During path planning, when an obstacle is encountered, the D* algorithm replans the path, which improves the dynamic stability of the system, so the unmanned aerial vehicle safely reaches the designated fault position; the underwater robot follows the unmanned aerial vehicle to the target location using the YoloX algorithm, as shown in the experimental result diagram of the D* algorithm in fig. 9.
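The D* algorithm repairs its previous search incrementally when edge costs change. As a simplified stand-in that shows the same observable behaviour (plan, meet a dynamic obstacle, replan), the sketch below replans from scratch with A* on a small 4-connected grid; it is not the incremental D* procedure described above.

```python
from heapq import heappush, heappop

def astar(grid, start, goal):
    """A* on a 4-connected grid (0 = free, 1 = blocked); returns the path
    as a list of (row, col) from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap, came, g = [(h(start), start)], {}, {start: 0}
    while open_heap:
        _, cur = heappop(open_heap)
        if cur == goal:
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt], came[nxt] = ng, cur
                    heappush(open_heap, (ng + h(nxt), nxt))
    return None
```

Planning once, dropping an obstacle onto the found path, and planning again reproduces the replan-on-obstacle behaviour that D* achieves more efficiently by repairing only the affected nodes.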
S4: after reaching the designated position, the underwater robot submerges to perform fault detection, and the unmanned aerial vehicle hovers to wait for the detection to finish;
s5: after the detection is finished, the underwater robot floats on the water again, and the unmanned aerial vehicle and the underwater robot return to the formation to continue searching.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.
Claims (4)
1. A dam intelligent detection method based on cooperation of a plurality of robots in water and air is characterized by comprising the following steps:
s1: a plurality of unmanned aerial vehicles form up in the air and pilot at a certain speed, while a plurality of underwater robots float on the water surface and track the unmanned aerial vehicles;
s2: after the unmanned aerial vehicles find faults, the faults are distributed among groups of unmanned aerial vehicles and underwater robots, one group per fault;
s3: the unmanned aerial vehicles and underwater robots assigned tasks leave the formation and reach the designated fault positions through robot path planning, while the other unmanned aerial vehicles and underwater robots continue to search for faults;
s4: after reaching the designated position, the underwater robot submerges to perform fault detection while the unmanned aerial vehicle hovers and waits for the detection to finish;
s5: after the detection is finished, the underwater robot floats to the surface again, and the unmanned aerial vehicle and underwater robot return to the formation to continue searching.
2. The intelligent dam detection method based on the cooperation of multiple robots in water and air as claimed in claim 1, wherein step S1 comprises the following steps:
s1.1, controlling an underwater robot and an unmanned aerial vehicle through a fuzzy pid algorithm, and adopting a dynamic control algorithm to enable multi-unmanned aerial vehicle formation to adapt to various different dam environments, and maintaining a certain speed for piloting;
s1.2, unmanned aerial vehicle formation is controlled by adopting a Leader-Follower algorithm;
the Leader-Follower formation algorithm sets one or more unmanned aerial vehicles as piloting unmanned aerial vehicles and the remaining unmanned aerial vehicles as following unmanned aerial vehicles, which perform flight control at the yaw angle and distance required by the formation; the Leader-Follower formation control graph is represented by a directed acyclic graph F = (V, E, D), where V = {v_1, ..., v_N} is a finite set of N vertices, each designating one control system ẋ_i = f_i(t, x_i, u_i), in which x_i ∈ R^n represents the position information of unmanned aerial vehicle i and u_i ∈ R^m represents its control input during movement; the edge set E ⊆ V × V represents the relationship between piloting and following unmanned aerial vehicles, and if u_i depends on the state x_j of unmanned aerial vehicle j, then (v_j, v_i) ∈ E; the control targets of the vertices v_i ∈ V are represented by the set D;
for a follower vertex v_j, the set of piloting unmanned aerial vehicles connected to it is written L_j ⊆ V; in the formation, a vertex v_i with yaw angle 0 is a piloting unmanned aerial vehicle, v_i ∈ L_F; because the vertices in L_F have no incoming edges, a piloting unmanned aerial vehicle does not need to receive state feedback from the following unmanned aerial vehicles and only needs to move along the route given by the program, while the formation is dynamically corrected and performs operations such as obstacle avoidance so that it can fly normally;
S1.3, the underwater robot formation floats on the water surface and adopts the YOLOX algorithm; the underwater robot acquires image information through a camera and performs feature extraction through a feature extraction network mainly comprising a cross-stage partial (CSP) network structure and a spatial pyramid pooling (SPP) network structure, wherein the CSP structure deepens the network and improves the feature extraction capability, and the SPP structure enlarges the receptive field of the network by extracting features through pooling layers of different sizes so as to fuse more feature information; after feature extraction, feature fusion is performed through the Neck structure, and target-frame prediction is performed at the prediction end through three decoupled heads to obtain the detection result; the underwater robot then tracks the multi-unmanned-aerial-vehicle formation according to the coordinates returned by the detection.
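The last step of S1.3 (steering from detector coordinates) can be sketched by treating the detector itself (YOLOX with CSP backbone, SPP, Neck, and three decoupled heads) as a black box that returns a bounding box. The box format, image size, gains, and target size are illustrative assumptions.

```python
def track_command(box, img_w=640, img_h=480, k_yaw=0.002, k_fwd=0.5,
                  target_area_frac=0.05):
    """Turn a detection box (cx, cy, w, h in pixels) into (yaw_rate, thrust).

    - yaw_rate centres the detected formation horizontally in the image;
    - thrust closes range until the box occupies the target area fraction.
    """
    cx, cy, w, h = box
    yaw_rate = k_yaw * (cx - img_w / 2)              # positive -> turn right
    area_frac = (w * h) / (img_w * img_h)
    thrust = k_fwd * (target_area_frac - area_frac)  # small box -> speed up
    return yaw_rate, thrust

# Formation detected left of centre and far away: turn left, move forward.
yaw_rate, thrust = track_command((160.0, 240.0, 40.0, 30.0))
```

Running this controller each frame against the coordinates the detector returns keeps the floating robot underneath the formation, which is all the claim requires of the tracking step.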
3. The dam intelligent detection method based on cooperation of multiple robots in water and air according to claim 1, wherein S2 comprises the following:
after the unmanned aerial vehicles discover a plurality of faults, the task allocation of the fault points is converted into a 1-to-1 node allocation problem by adopting the KM algorithm, and each unmanned aerial vehicle is allocated to its nearest fault node; that is, taking the distance between each unmanned aerial vehicle and each fault point as the index, the nearest task is allocated to each unmanned aerial vehicle through the KM algorithm to obtain the optimal task allocation.
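The allocation step can be illustrated with distance as the cost, as the claim states. The patent names the KM (Kuhn-Munkres) algorithm; for a handful of vehicles, an exhaustive permutation search finds the same minimum-cost 1-to-1 assignment and keeps the sketch short — a real implementation would use a proper KM/Hungarian routine. The coordinates are made up for illustration.

```python
from itertools import permutations
import math

def assign_tasks(uavs, faults):
    """Return (fault index per UAV, total cost) minimizing total distance.

    Brute-force stand-in for KM: O(n!) but exact, fine for small fleets.
    """
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(len(faults))):
        cost = sum(math.dist(uavs[i], faults[perm[i]])
                   for i in range(len(uavs)))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return list(best_perm), best_cost

# Three UAVs, three fault points; each is matched to its natural nearest task.
uavs   = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
faults = [(9.0, 1.0), (1.0, 9.0), (1.0, 1.0)]
assignment, total = assign_tasks(uavs, faults)
```

Because the cost is a sum over the whole fleet, KM can differ from greedily sending each UAV to its individually nearest fault when two UAVs compete for the same point; the global optimum is what "optimal task allocation" in the claim refers to.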
4. The dam intelligent detection method based on cooperation of multiple robots in water and air according to claim 1, wherein the specific content of S3 is as follows:
the unmanned aerial vehicle and the underwater robot allocated to a task leave the formation in pairs; the unmanned aerial vehicle performs path planning from the coordinates of the fault point and its own coordinates by adopting the D* algorithm; during path planning, when the D* algorithm encounters an obstacle, the path is planned again, so that the unmanned aerial vehicle safely reaches the designated fault position, and the underwater robot follows the unmanned aerial vehicle to the target point through the YOLOX algorithm.
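The replan-on-obstacle behaviour of S3 can be sketched on a grid. The patent names the D* algorithm, which repairs the previous plan incrementally; this sketch simply re-runs a breadth-first search from the current cell each time an unknown obstacle is discovered, which yields the same paths on a small map. The map below is invented for illustration.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path avoiding cells marked 1, or None."""
    rows, cols = len(grid), len(grid[0])
    prev, queue = {start: None}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:                       # reconstruct path backwards
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cur
                queue.append((nr, nc))
    return None

def navigate(known, true, start, goal):
    """Follow the planned path; on hitting a previously unknown obstacle,
    record it in the known map and replan from the current cell.
    Assumes the goal stays reachable in the known map."""
    pos, replans = start, 0
    while pos != goal:
        path = bfs_path(known, pos, goal)
        for step in path[1:]:
            if true[step[0]][step[1]] == 1:   # obstacle discovered
                known[step[0]][step[1]] = 1
                replans += 1
                break
            pos = step
    return pos, replans

known = [[0] * 5 for _ in range(5)]           # vehicle's initial (empty) map
true  = [row[:] for row in known]
true[2][1] = true[2][2] = true[2][3] = 1      # unknown wall across the middle
pos, replans = navigate(known, true, (0, 2), (4, 2))
# the vehicle reaches the goal after a small number of replans
```

The design point is the one the claim makes: planning is not done once, but repeated whenever the environment contradicts the map, so the vehicle still reaches the fault position safely.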
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310224181.5A CN116382328B (en) | 2023-03-09 | 2023-03-09 | Dam intelligent detection method based on cooperation of multiple robots in water and air |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116382328A true CN116382328A (en) | 2023-07-04 |
CN116382328B CN116382328B (en) | 2024-04-12 |
Family
ID=86972218
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310224181.5A Active CN116382328B (en) | 2023-03-09 | 2023-03-09 | Dam intelligent detection method based on cooperation of multiple robots in water and air |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116382328B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190369641A1 (en) * | 2018-05-31 | 2019-12-05 | Carla R. Gillett | Robot and drone array |
CN110647145A (en) * | 2019-09-05 | 2020-01-03 | 新疆大学 | Ground mobile robot and unmanned aerial vehicle cooperative operation system and method based on security |
CN112130587A (en) * | 2020-09-30 | 2020-12-25 | 北京理工大学 | Multi-unmanned aerial vehicle cooperative tracking method for maneuvering target |
CN112513766A (en) * | 2020-02-26 | 2021-03-16 | 深圳市大疆创新科技有限公司 | Method, tracking device, storage medium and computer program product for path planning |
CN112686826A (en) * | 2021-01-13 | 2021-04-20 | 东华大学 | Marine search and rescue method in severe weather environment |
CN113657270A (en) * | 2021-08-17 | 2021-11-16 | 江苏熙枫智能科技有限公司 | Unmanned aerial vehicle tracking method based on deep learning image processing technology |
CN113657256A (en) * | 2021-08-16 | 2021-11-16 | 大连海事大学 | Unmanned ship-borne unmanned aerial vehicle sea-air cooperative visual tracking and autonomous recovery method |
CN113671994A (en) * | 2021-09-01 | 2021-11-19 | 重庆大学 | Multi-unmanned aerial vehicle and multi-unmanned ship inspection control system based on reinforcement learning |
CN114815810A (en) * | 2022-03-22 | 2022-07-29 | 武汉理工大学 | Unmanned aerial vehicle-cooperated overwater cleaning robot path planning method and equipment |
EP4123593A1 (en) * | 2021-07-23 | 2023-01-25 | The Boeing Company | Rapid object detection for vehicle situational awareness |
Non-Patent Citations (1)
Title |
---|
YAO Peng; QI Shengbo; LI Ming: "Optimal dynamic coverage observation technology based on UAV/USV", Marine Sciences, no. 1 *
Also Published As
Publication number | Publication date |
---|---|
CN116382328B (en) | 2024-04-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||