CN113627445B - Formation planning system and method for multi-unmanned aerial vehicle aerial media - Google Patents


Info

Publication number
CN113627445B
CN113627445B (application CN202110897965.5A)
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
image
control
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110897965.5A
Other languages
Chinese (zh)
Other versions
CN113627445A (en)
Inventor
陈彦杰
陈敏俊
吴凝
计书勤
王泂淏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN202110897965.5A priority Critical patent/CN113627445B/en
Publication of CN113627445A publication Critical patent/CN113627445A/en
Application granted granted Critical
Publication of CN113627445B publication Critical patent/CN113627445B/en

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/104Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircrafts, e.g. formation flying
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a formation planning system for aerial media of multiple unmanned aerial vehicles, comprising a connected control unit and display unit; the control unit comprises an image recognition module, a distributed formation control module and a matrix switching module; the display unit adopts an OpenGL display module; the image recognition module, the distributed formation control module and the matrix switching module are connected in sequence. By constructing the unmanned aerial vehicle light show formation planning system in a modularized manner, the invention enables algorithm planning and simulation verification before a performance for the unmanned aerial vehicle light show market, greatly improving design integrity, reducing time and labor cost, and avoiding performance errors through simulation.

Description

Formation planning system and method for multi-unmanned aerial vehicle aerial media
Technical Field
The invention relates to the field of unmanned aerial vehicle control, in particular to a system and a method for forming and planning aerial media of multiple unmanned aerial vehicles.
Background
With the rise of multi-unmanned aerial vehicle aerial performances, large-scale aerial media bring striking visual effects while opening new ideas and markets for technological development. As major Chinese cities continue to issue bans on fireworks, the unmanned aerial vehicle light show can replace fireworks at many important festivals and has become a new, environmentally friendly, pollution-free mode of celebration. However, the core content of an unmanned aerial vehicle light show involves image recognition and formation, formation control, multi-unmanned-aerial-vehicle obstacle avoidance, formation path planning, formation switching and other elements; these currently remain separate modules with no integrated system.
Disclosure of Invention
Therefore, the invention aims to provide a formation planning system and method for multi-unmanned aerial vehicle aerial media. By constructing the unmanned aerial vehicle light show formation planning system in a modularized manner, algorithm planning and simulation verification can be carried out before a performance for the unmanned aerial vehicle light show market, which greatly improves design integrity, reduces time and labor cost, and avoids performance errors through simulation.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a multi-unmanned aerial vehicle aerial medium formation planning system comprises a control unit and a display unit which are connected; the control unit comprises an image recognition module, a distributed formation control module and a matrix switching module; the display unit adopts an OpenGL display module; the image recognition module, the distributed formation control module and the array type switching module are sequentially connected.
Further, the image recognition module comprises image acquisition, image preprocessing, character processing, character recognition and discretized coordinate-information storage; the distributed formation control module comprises position control, speed control and obstacle avoidance control; the matrix switching module realizes rapid switching of formation patterns based on a potential field function combined with an improved exit algorithm.
A control method of a formation planning system for multi-unmanned aerial vehicle aerial media comprises the following steps:
step S1, preprocessing an input image, and acquiring character recognition to obtain a distance matrix;
and S2, taking the distance matrix as an expected distance matrix of the unmanned aerial vehicle of the part, splitting the movement track of the unmanned aerial vehicle into three directions of x, y and z axes, and independently planning a path in each direction to obtain a track path diagram of the unmanned aerial vehicle.
Further, the step S1 specifically includes:
s11, collecting RGB information of a picture;
step S12, converting the RGB information into a gray image and a binary image through a weighted average method;
s13, preprocessing a binary image based on an image edge detection method of an improved Canny algorithm;
step S14, unifying the size of the character image by adopting a normalization processing mode based on image scaling through a preset threshold value;
step S15, character recognition is carried out by adopting a character recognition method based on template matching;
step S16: the pixel coordinate information retained in the character image obtained after recognition is recorded as (x_i, y_i) and the distance pixel value as d; taking the origin O_t(x_0, y_0) with x_0 = max x_i, y_0 = max y_i, the offsets are calculated by the following formula:
Δx_i = (x_0 − x_i)·d,  Δy_i = (y_0 − y_i)·d
obtaining a distance matrix relative to the origin, which is stored in T_Δx and T_Δy.
Further, the step S13 specifically includes:
(1) Noise is reduced with a median filtering algorithm while edge information is preserved. Let g(x, y) be the gray value at point (x, y) and w the size of the current gray window; initially w = 3. Let g_min, g_mid, g_max be the minimum, median and maximum gray levels in the window; if g_min < g(x, y) < g_max, output g(x, y); otherwise output g_mid.
(2) Smoothing and filtering the image by using a Gaussian function to remove Gaussian noise;
Let I be the input image matrix of size m × n, G(x, y) a Gaussian function and O the output result matrix; the filtering process is defined as follows:
define a one-dimensional Gaussian function G(x), with
G(x) = (1/(√(2π)·σ))·e^(−x²/(2σ²))
then
G(x, y) = G(x)·G(y)     (4)
thus
O = G(x, y) * I = G(x) * (G(y) * I)
All pixel points on the image are subjected to convolution processing by utilizing the separability of Gaussian filtering;
(3): respectively calculating gradient values (|delta f|) and direction angles (theta) of the denoised image in the x and y directions by using a Sobel operator; merging the gradient direction angles of 0-360 DEG into 4 directions theta': 0 °,45 °,90 °,135 °;
for all edges, 180 ° =0°,225 ° =45°; the gradient pixels with the maximum value in the gradient direction are reserved, and other pixels are deleted; additionally, the filtering template M (5×5) (sigma. Apprxeq.1.4) is recorded, and the gradient M in the x direction x Gradient m in y direction y The following steps are:
convolving the image with a Sobel template to obtain:
Two thresholds are set, a high threshold t_high and a low threshold t_low. If |Δf| < t_low, pixel (x, y) is a non-edge pixel; if |Δf| > t_high, pixel (x, y) is an edge pixel; if t_low < |Δf| < t_high, the neighborhood is further enlarged for judgment.
and finally, optimizing an edge detection result through morphological operation on the image.
Further, if a single threshold is adopted, a single threshold T is taken and a binary image g(x, y) is obtained after the original image is segmented; if multi-threshold segmentation is adopted, the segmentation thresholds are defined as T_0, T_1, …, T_k, and the output image is expressed as:
g(x, y) = k,  when T_{k−1} ≤ f(x, y) ≤ T_k  (k = 1, 2, …)
The sizes of the character images are then unified by a normalization process based on image scaling, which facilitates subsequent recognition.
Further, the step S15 specifically includes: establishing a standardized character template library according to preset conditions, and then checking the sample with templates in the standard library; adopting a multi-mode sparse self-coding algorithm based on sparse self-coding algorithm improvement and a hierarchical image coding method based on multi-level tree set division and matching tracking to unify input and output and optimize image reconstruction quality; and finally, outputting a result.
Further, the step S16 specifically includes: the retained pixel coordinate information of the character image obtained after recognition is recorded as (x_i, y_i) and the distance pixel value as d; the matrix formed by the pixel coordinates is denoted as T_1; taking x_0 = max x_i, y_0 = max y_i, the origin is recorded as O_t(x_0, y_0); the offsets are calculated by the following formula:
Δx_i = (x_0 − x_i)·d,  Δy_i = (y_0 − y_i)·d
obtaining a distance matrix relative to the origin, which is stored in T_Δx and T_Δy.
Further, the step S2 specifically includes:
step S21, using the obtained distance matrices T_Δx and T_Δy as the expected distance matrix of the unmanned aerial vehicles for this pattern;
s22, splitting a motion track of the unmanned aerial vehicle into three directions of x, y and z axes, and independently planning a path in each direction;
step S23, initializing the coordinate position of the Leader;
step S24, controlling the speed of the unmanned aerial vehicle in each direction shaft at each time point through the control rate, and restraining the running track of the unmanned aerial vehicle;
step S25: in each iteration, the speed is taken as the object of iterative control; at each time point, the position of each unmanned aerial vehicle is iterated under the action of the control rate:
p_{n+1} = p_n + v_n·t,  p = x, y, z.
further, the step S24 specifically includes:
(1) The total control rate formula: the total control of the unmanned aerial vehicle consists of three sub-controls, namely position control, speed control and obstacle avoidance control; the speed of the unmanned aerial vehicle serves as the carrier of the total control, and path planning is realized in the position iteration at each time point;
(2) Constraint control of the unmanned aerial vehicle position:
Σ_j a_pj·((p_j − p_i) − Δp),  p = x, y, z
where p_i, p_j are the real-time positions of the i-th and j-th unmanned aerial vehicles; Δp is the desired distance between aircraft i and aircraft j; p_j − p_i is the real-time position difference; (p_j − p_i − Δp) is the deviation of the actual distance from the ideal distance; a_pj is the position control rate, which controls the position deviation and regulates in real time the deviation of any two unmanned aerial vehicles from their expected positions, so that each unmanned aerial vehicle finally reaches its expected position;
(3) Constrained control of speed of unmanned aerial vehicle
Σ_j b_pj·(v_pj − v_pi),  p = x, y, z
where v_pi, v_pj are the real-time speeds of the i-th and j-th unmanned aerial vehicles, and b_pj is the speed control rate, used to keep the speeds of the two unmanned aerial vehicles consistent and thereby guarantee formation integrity;
(4) Obstacle avoidance control of the unmanned aerial vehicle:
when the distance between two unmanned aerial vehicles is smaller than the expected safe distance, a collision risk is judged, and the flight trajectory of the unmanned aerial vehicle is constrained through the obstacle avoidance control rate c_pj to realize obstacle avoidance.
Further, the matrix switching module performs formation switching control, comprising the following steps:
step 1, initially determine the maximum number N_i of unmanned aerial vehicles involved across all patterns in the whole performance, and acquire the number N_{i+1} of unmanned aerial vehicles required by the next formation;
step 2, if N_{i+1} < N_i, through self-judgment of the optimal planning scheme, the surplus unmanned aerial vehicles turn off their lights and retreat to plane B, while the remaining unmanned aerial vehicles switch formation to the designated positions;
compared with the prior art, the invention has the following beneficial effects:
according to the invention, the unmanned aerial vehicle lamplight show formation planning system is constructed in a modularized manner, so that algorithm planning and simulation verification before performance can be performed on the unmanned aerial vehicle lamplight show market, the design integrity can be greatly improved, the time and labor cost can be reduced, performance errors can be avoided in a simulation manner, and the control efficiency and quality of the unmanned aerial vehicle group can be effectively improved.
Drawings
FIG. 1 is a block diagram of a system of the present invention;
FIG. 2 is a discrete diagram of recognition of "Hua" words in experimental case one of the present invention;
fig. 3 is a diagram of a path of variation of the unmanned aerial vehicle in experimental case one of the present invention;
fig. 4 is an image of an unmanned aerial vehicle in OpenGL in experimental case one of the present invention;
FIG. 5 is a discrete diagram of "Wang" word recognition in experimental case two of the present invention;
fig. 6 is a path trace diagram of unmanned aerial vehicle matrix switching in experimental case two of the present invention;
fig. 7 is an imaging diagram of the unmanned aerial vehicle in OpenGL after array switching in experimental case two of the present invention;
Detailed Description
The invention will be further described with reference to the accompanying drawings and examples.
Referring to fig. 1, the invention provides a system for forming a multi-unmanned aerial vehicle aerial medium, which comprises a control unit and a display unit which are connected; the control unit comprises an image recognition module, a distributed formation control module and a matrix switching module; the display unit adopts an OpenGL display module; the image recognition module, the distributed formation control module and the array type switching module are sequentially connected.
In this embodiment, the image recognition module includes image acquisition, image preprocessing, character processing, character recognition and discretized coordinate-information storage; the distributed formation control module performs algorithm control through a total control rate combining position control, speed control and obstacle avoidance control; the matrix switching module realizes rapid switching of formation patterns based on a potential field function combined with an improved exit algorithm. The OpenGL display module is implemented as an MFC application under the Visual Studio framework.
Preferably, in this embodiment, a control method of a formation planning system for aerial media of multiple unmanned aerial vehicles is provided, including the following steps:
step S1, preprocessing an input image, and acquiring character recognition to obtain a distance matrix;
and S2, taking the distance matrix as an expected distance matrix of the unmanned aerial vehicle of the part, splitting the movement track of the unmanned aerial vehicle into three directions of x, y and z axes, and independently planning a path in each direction to obtain a track path diagram of the unmanned aerial vehicle.
In this embodiment, step S1 specifically includes:
s11, collecting RGB information of a picture;
step S12, converting the RGB information into a gray image and a binary image through a weighted average method;
s13, preprocessing a binary image based on an image edge detection method of an improved Canny algorithm;
step S14, unifying the size of the character image by adopting a normalization processing mode based on image scaling through a preset threshold value;
step S15, character recognition is carried out by adopting a character recognition method based on template matching;
step S16: the pixel coordinate information retained in the character image obtained after recognition is recorded as (x_i, y_i) and the distance pixel value as d; taking the origin O_t(x_0, y_0) with x_0 = max x_i, y_0 = max y_i, the offsets are calculated by the following formula:
Δx_i = (x_0 − x_i)·d,  Δy_i = (y_0 − y_i)·d
obtaining a distance matrix relative to the origin, which is stored in T_Δx and T_Δy.
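The acquisition and conversion of steps S11–S12 can be sketched as follows (a minimal hypothetical Python illustration on a toy 2 × 2 image, not the patented implementation; the luminance weights 0.299/0.587/0.114 and the threshold 128 are common defaults assumed here, since the text does not specify them):

```python
import numpy as np

def rgb_to_gray(rgb):
    """Weighted-average conversion of an HxWx3 RGB image to grayscale (step S12).
    The weights below are the common luminance coefficients; an assumption."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

def binarize(gray, threshold=128):
    """Single-threshold binarization producing a 0/1 image."""
    return (gray >= threshold).astype(np.uint8)

# Toy 2x2 image: black, white, red, blue pixels.
img = np.array([[[0, 0, 0], [255, 255, 255]],
                [[255, 0, 0], [0, 0, 255]]], dtype=float)
gray = rgb_to_gray(img)
binary = binarize(gray)
```

Only the white pixel exceeds the assumed threshold, so the binary image isolates it.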
In this embodiment, step S13 specifically includes:
(1) Noise is reduced with a median filtering algorithm while edge information is preserved. Let g(x, y) be the gray value at point (x, y) and w the size of the current gray window; initially w = 3. Let g_min, g_mid, g_max be the minimum, median and maximum gray levels in the window; if g_min < g(x, y) < g_max, output g(x, y); otherwise output g_mid.
(2) Smoothing and filtering the image by using a Gaussian function to remove Gaussian noise;
Let I be the input image matrix of size m × n, G(x, y) a Gaussian function and O the output result matrix; the filtering process is defined as follows:
define a one-dimensional Gaussian function G(x), with
G(x) = (1/(√(2π)·σ))·e^(−x²/(2σ²))
then
G(x, y) = G(x)·G(y)     (4)
thus
O = G(x, y) * I = G(x) * (G(y) * I)
All pixel points on the image are subjected to convolution processing by utilizing the separability of Gaussian filtering;
(3): respectively calculating gradient values (|delta f|) and direction angles (theta) of the denoised image in the x and y directions by using a Sobel operator; merging the gradient direction angles of 0-360 DEG into 4 directions theta': θ°,45 °,90 °,135 °;
for all edges, 180 ° =0°,225 ° =45°; the gradient pixels with the maximum value in the gradient direction are reserved, and other pixels are deleted; additionally, the filtering template M (5×5) (sigma. Apprxeq.1.4) is recorded, and the gradient M in the x direction x Gradient m in y direction y The following steps are:
convolving the image with a Sobel template to obtain:
Two thresholds are set, a high threshold t_high and a low threshold t_low. If |Δf| < t_low, pixel (x, y) is a non-edge pixel; if |Δf| > t_high, pixel (x, y) is an edge pixel; if t_low < |Δf| < t_high, the neighborhood is further enlarged for judgment.
and finally, optimizing an edge detection result through morphological operation on the image.
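The edge-preserving median filtering of substep (1) can be sketched as follows (a hypothetical minimal implementation of the stated min/median/max rule; leaving border pixels unchanged is an assumption of this sketch, not something the text specifies):

```python
import numpy as np

def median_filter_keep_edges(img):
    """3x3 filter per substep (1): keep the center value if it lies strictly
    between the window minimum and maximum; otherwise replace it with the
    window median. Border pixels are left unchanged in this sketch."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2].ravel()
            g_min, g_max = win.min(), win.max()
            g_mid = int(np.median(win))
            g = img[y, x]
            out[y, x] = g if g_min < g < g_max else g_mid
    return out

# A flat region with one salt-noise spike: the spike equals the window
# maximum, so it is replaced by the window median.
img = np.full((3, 3), 100, dtype=int)
img[1, 1] = 255
filtered = median_filter_keep_edges(img)
```

Because the spike is not strictly inside (g_min, g_max), it is removed while genuine edge pixels (which are strictly between the extremes) would survive.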
Preferably, in this embodiment, step S14 specifically includes: if a single threshold is adopted, a single threshold T is taken and a binary image g(x, y) is obtained after the original image is segmented; if multi-threshold segmentation is adopted, the segmentation thresholds are defined as T_0, T_1, …, T_k, and the output image is expressed as:
g(x, y) = k,  when T_{k−1} ≤ f(x, y) ≤ T_k  (k = 1, 2, …)
The sizes of the character images are then unified by a normalization process based on image scaling, which facilitates subsequent recognition.
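The multi-threshold rule above maps each gray value f(x, y) to its band index k. A minimal sketch (an illustration, not the patented code; `np.digitize` uses the left-closed convention T_{k−1} ≤ f < T_k, which matches the rule up to the boundary case):

```python
import numpy as np

def multi_threshold(f, thresholds):
    """Multi-threshold segmentation: each pixel receives the index k of the
    band [T_{k-1}, T_k) that its gray value falls into."""
    return np.digitize(f, thresholds)

# Gray image with one pixel per band for thresholds T_0=50, T_1=150, T_2=200.
f = np.array([[10, 100],
              [160, 250]])
labels = multi_threshold(f, [50, 150, 200])
```

With these thresholds the four pixels receive the four distinct band labels 0–3.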
Preferably, in this embodiment, step S15 specifically includes: establishing a standardized character template library according to preset conditions, and then checking the sample with templates in the standard library; adopting a multi-mode sparse self-coding algorithm based on sparse self-coding algorithm improvement and a hierarchical image coding method based on multi-level tree set division and matching tracking to unify input and output and optimize image reconstruction quality; and finally, outputting a result.
Preferably, in this embodiment, step S16 specifically includes: the retained pixel coordinate information of the character image obtained after recognition is recorded as (x_i, y_i) and the distance pixel value as d; the matrix formed by the pixel coordinates is denoted as T_1; taking x_0 = max x_i, y_0 = max y_i, the origin is recorded as O_t(x_0, y_0); the offsets are calculated by the following formula:
Δx_i = (x_0 − x_i)·d,  Δy_i = (y_0 − y_i)·d
obtaining a distance matrix relative to the origin, which is stored in T_Δx and T_Δy.
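The coordinate-to-distance conversion of step S16 can be sketched as follows (a hypothetical helper; the original formula image is not reproduced in the source, so scaling the offsets from the origin by d is an assumption consistent with the definitions x_0 = max x_i, y_0 = max y_i and the distance pixel value d given above):

```python
import numpy as np

def distance_matrices(coords, d):
    """From recognized pixel coordinates (x_i, y_i), compute displacements
    from the origin O_t(x_0, y_0) with x_0 = max x_i, y_0 = max y_i,
    scaled by the distance pixel value d (assumed form)."""
    coords = np.asarray(coords, dtype=float)
    x0, y0 = coords[:, 0].max(), coords[:, 1].max()
    t_dx = (x0 - coords[:, 0]) * d   # stored as T_dx
    t_dy = (y0 - coords[:, 1]) * d   # stored as T_dy
    return t_dx, t_dy

# Three discrete character points, d = 30 (the experiment uses d = 30 mm).
coords = [(0, 0), (1, 2), (3, 1)]
t_dx, t_dy = distance_matrices(coords, 30)
```

The point at (3, ·) coincides with x_0 and therefore gets a zero x-offset.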
Preferably, in this embodiment, step S2 specifically includes:
step S21, using the obtained distance matrices T_Δx and T_Δy as the expected distance matrix of the unmanned aerial vehicles for this pattern;
s22, splitting a motion track of the unmanned aerial vehicle into three directions of x, y and z axes, and independently planning a path in each direction;
step S23, initializing the coordinate position of the Leader;
step S24, controlling the speed of the unmanned aerial vehicle in each direction shaft at each time point through the control rate, and restraining the running track of the unmanned aerial vehicle;
step S25: in each iteration, the speed is taken as the object of iterative control; at each time point, the position of each unmanned aerial vehicle is iterated under the action of the control rate:
p_{n+1} = p_n + v_n·t,  p = x, y, z.
preferably, in this embodiment, step S24 specifically includes:
(1) The total control rate formula: the total control of the unmanned aerial vehicle consists of three sub-controls, namely position control, speed control and obstacle avoidance control; the speed of the unmanned aerial vehicle serves as the carrier of the total control, and path planning is realized in the position iteration at each time point;
(2) Constraint control of the unmanned aerial vehicle position:
Σ_j a_pj·((p_j − p_i) − Δp),  p = x, y, z
where p_i, p_j are the real-time positions of the i-th and j-th unmanned aerial vehicles; Δp is the desired distance between aircraft i and aircraft j; p_j − p_i is the real-time position difference; (p_j − p_i − Δp) is the deviation of the actual distance from the ideal distance; a_pj is the position control rate, which controls the position deviation and regulates in real time the deviation of any two unmanned aerial vehicles from their expected positions, so that each unmanned aerial vehicle finally reaches its expected position;
(3) Constrained control of speed of unmanned aerial vehicle
Σ_j b_pj·(v_pj − v_pi),  p = x, y, z
where v_pi, v_pj are the real-time speeds of the i-th and j-th unmanned aerial vehicles, and b_pj is the speed control rate, used to keep the speeds of the two unmanned aerial vehicles consistent and thereby guarantee formation integrity;
(4) Obstacle avoidance control of the unmanned aerial vehicle:
when the distance between two unmanned aerial vehicles is smaller than the expected safe distance, a collision risk is judged, and the flight trajectory of the unmanned aerial vehicle is constrained through the obstacle avoidance control rate c_pj to realize obstacle avoidance.
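The control rates of step S24 and the iteration of step S25 can be sketched along a single axis as follows (a hypothetical minimal sketch, not the patented controller: the summation forms, the treatment of the control rate as a velocity derivative, and the omission of the obstacle-avoidance term c_pj are assumptions; the gains follow the experiment's a = 7, b = 3 and the 0.02 s cycle):

```python
import numpy as np

def formation_step(p, v, delta, a=7.0, b=3.0, dt=0.02):
    """One control/integration step along one axis for N drones.
    p, v: current positions and velocities; delta[i][j]: desired offset
    p_j - p_i. Position term a*((p_j - p_i) - delta) pulls pairs toward the
    desired spacing; speed term b*(v_j - v_i) aligns velocities.
    Obstacle avoidance (c_pj) is omitted in this sketch."""
    n = len(p)
    v_new = v.copy()
    for i in range(n):
        u = 0.0
        for j in range(n):
            if j == i:
                continue
            u += a * ((p[j] - p[i]) - delta[i][j])  # position control term
            u += b * (v[j] - v[i])                  # speed-consensus term
        v_new[i] = v[i] + u * dt
    p_new = p + v_new * dt   # p_{n+1} = p_n + v_n * t
    return p_new, v_new

# Two drones starting 2 apart, desired offset p_1 - p_0 = 1.
p = np.array([0.0, 2.0])
v = np.zeros(2)
delta = [[0.0, 1.0], [-1.0, 0.0]]
for _ in range(3000):      # 60 s of simulated flight at dt = 0.02 s
    p, v = formation_step(p, v, delta)
```

Iterating the step drives the inter-drone spacing to the desired offset, which is the sense in which the speed acts as the "carrier" of the total control.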
Preferably, in this embodiment, the matrix switching module performs matrix switching control, including the following steps:
step 1, initially determine the maximum number N_i of unmanned aerial vehicles involved across all patterns in the whole performance, and acquire the number N_{i+1} of unmanned aerial vehicles required by the next formation;
step 2, if N_{i+1} < N_i, through self-judgment of the optimal planning scheme, the surplus unmanned aerial vehicles turn off their lights and retreat to plane B, while the remaining unmanned aerial vehicles switch formation to the designated positions;
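The drone-count handling of steps 1–2 can be sketched as follows (a hypothetical helper; the "self-judgment of the optimal planning scheme" is not specified in the source, so simple truncation of the drone list stands in for it here):

```python
def plan_switch(current_ids, n_next):
    """If the next pattern needs fewer drones (N_{i+1} < N_i), the surplus
    drones turn off their lights and retreat to plane B while the rest move
    to the new pattern on plane A. Choosing the surplus by truncation is a
    placeholder for the patent's optimal-planning self-judgment."""
    n_curr = len(current_ids)
    if n_next < n_curr:
        active = current_ids[:n_next]     # keep lights on, plane A
        retired = current_ids[n_next:]    # lights off, retreat to plane B
    else:
        active, retired = current_ids, []
    return active, retired

# Experiment-like counts: 57 points for "Hua" switching to 53 points for "Wang".
active, retired = plan_switch(list(range(57)), 53)
```

With the experiment's counts, four drones retreat to plane B while 53 form the next pattern.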
Referring to figs. 2-4, the invention provides a formation planning system and method for multi-unmanned aerial vehicle aerial media, the system comprising an image recognition module, a distributed formation control module, a matrix switching module and an OpenGL display module; the image recognition module comprises image acquisition, image preprocessing, character processing, character recognition and discretized coordinate-information storage; the distributed formation control module performs formation control using an improved Leader-Follower control method; the matrix switching module realizes rapid switching of formation patterns by combining an exit algorithm based on a potential field function; finally, imaging in OpenGL realizes the formation planning simulation.
Experimental settings:
(1) The filtering template M (5 × 5, σ ≈ 1.4) is taken, and the gradient m_x in the x direction and the gradient m_y in the y direction are obtained by convolving the image with the Sobel templates;
(2) The distance pixel value is d = 30 mm;
(3) The control rate parameters are a = 7, b = 3, c = 2;
(4) The unmanned aerial vehicle group is set to reach the designated target positions within two minutes, with one control cycle every 0.02 s.
Example 1: formation pattern display experiment
Taking the Chinese character "Hua" as an example, the display of a desired formation starting from random points is realized. The algorithm is visualized with OpenGL under Visual Studio 2017 based on the C++ language, and the trajectory of the unmanned aerial vehicles iterating over time is simulated in Matlab. The method comprises the following steps:
step S1, input the picture of the character "Hua" and discretize it to obtain 57 discrete coordinate points of the character (as shown in fig. 2);
step S2: calculate the discrete points to obtain a distance matrix relative to the origin, stored in T_Δx and T_Δy;
step S3, use the obtained distance matrices T_Δx and T_Δy as the expected distance matrix of the unmanned aerial vehicles;
step S4: the initial coordinate point of the unmanned aerial vehicle is arbitrarily selected, the initial coordinate of the experimental case is arbitrarily designed into a rectangle, and the unmanned aerial vehicle is placed on the ground (such as the initial matrix of fig. 3).
Step S5: Through the formation control module, a new position is output every 0.02 s, so that each unmanned aerial vehicle generates a series of continuous position points, yielding the trajectory path diagram of the unmanned aerial vehicles (as in fig. 3).
Step S6: Finally, the display through OpenGL forms an image that automatically moves to the target positions after planning (as shown in fig. 4).
Example 2: array type switching experiment
On the basis of the first experiment, the target coordinates of "Hua" were obtained and used as the initial coordinates of experiment two. And realizing array switching experiments. The method comprises the following steps:
step S1, input the picture of the character "Wang" and discretize it to obtain 53 discrete coordinate points (as shown in fig. 5);
step S2: calculate the discrete points to obtain a distance matrix relative to the origin, stored in T_Δx and T_Δy;
step S3, use the obtained distance matrices T_Δx and T_Δy as the expected distance matrix of the unmanned aerial vehicles;
step S4: the transformation (e.g., the initial matrix of fig. 6) was performed using the coordinates of the first experimental "hua" word image as the initial coordinates.
Step S5: through the matrix switching module, a new position is set to be output every 0.02s, so that each unmanned aerial vehicle can generate a series of continuous position points to obtain a track path diagram of the unmanned aerial vehicle, and meanwhile, redundant unmanned aerial vehicle extinguishing lights are controlled to stay on a plane B, and other unmanned aerial vehicles are formed on a plane A.
(see FIG. 6).
And S6, finally, displaying through OpenGL to form an image of the target position after matrix switching (as shown in FIG. 4).
The foregoing description is only of the preferred embodiments of the invention, and all changes and modifications that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (5)

1. A control method of a formation planning system of a multi-unmanned aerial vehicle aerial medium is characterized in that the system comprises a control unit and a display unit which are connected; the control unit comprises an image recognition module, a distributed formation control module and a matrix switching module; the display unit adopts an OpenGL display module; the image recognition module, the distributed formation control module and the array type switching module are connected in sequence;
the method comprises the following steps:
step S1, preprocessing an input image, and acquiring character recognition to obtain a distance matrix;
s2, taking the distance matrix as an expected distance matrix of the unmanned aerial vehicle of the part, splitting the movement track of the unmanned aerial vehicle into three directions of x, y and z axes, and independently planning a path in each direction to obtain a track path diagram of the unmanned aerial vehicle;
the step S1 specifically comprises the following steps:
step S11, collecting the RGB information of the picture;
step S12, converting the RGB information into a gray image through a weighted average method, and then binarizing it to obtain a binary image;
step S13, preprocessing the binary image with an image edge detection method based on an improved Canny algorithm;
step S14, segmenting with a preset threshold and unifying the size of the character images through a normalization process based on image scaling;
step S15, performing character recognition with a character recognition method based on template matching;
step S16: record the pixel coordinate information of the character image retained after recognition as (x_i, y_i), and record the distance pixel value as d; taking x_0 = max x_i and y_0 = max y_i, record the origin O_t(x_0, y_0), and calculate the offsets
Δx_i = x_0 − x_i, Δy_i = y_0 − y_i,
obtaining the distance matrix relative to the origin, which is stored to T_Δx, T_Δy;
The step S13 specifically includes:
(1) Noise is reduced with a median filtering algorithm while edge information is preserved. Let g(x, y) be the gray value at point (x, y) and w be the current gray-window size; set w = 3, and let g_min, g_mid, g_max be the minimum, median and maximum gray levels in the window, respectively. If g_min < g(x, y) < g_max, output g(x, y); otherwise output g_mid;
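The edge-preserving median rule of step S13(1) can be sketched as follows (Python with NumPy; the edge-replication padding at image borders is an assumption, since the patent does not specify border handling):

```python
import numpy as np

def adaptive_median_like_filter(img, w=3):
    """Step S13(1): inside each w*w window, keep the centre gray value
    g(x, y) if it lies strictly between the window minimum g_min and
    maximum g_max (i.e. it is not an impulse); otherwise replace it
    with the window median g_mid."""
    pad = w // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, wd = img.shape
    for y in range(h):
        for x in range(wd):
            win = padded[y:y + w, x:x + w]
            g = img[y, x]
            g_min, g_max = win.min(), win.max()
            if g_min < g < g_max:
                out[y, x] = g               # edge detail preserved
            else:
                out[y, x] = np.median(win)  # impulse replaced by g_mid
    return out

# a flat patch with one salt-noise pixel: the impulse is removed
noisy = np.full((5, 5), 10, dtype=np.uint8)
noisy[2, 2] = 255
clean = adaptive_median_like_filter(noisy)
```

The strict inequality test is what distinguishes this variant from a plain median filter: pixels that are not window extrema pass through untouched, which is how edges survive.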
(2) Smoothing and filtering the image by using a Gaussian function to remove Gaussian noise;
let I be the input image matrix of size m × n, G(x, y) a two-dimensional Gaussian function, and O the output result matrix; the filtering process is defined as O = G(x, y) * I;
define the one-dimensional Gaussian function as
G(x) = (1 / (√(2π) σ)) · exp(−x² / (2σ²)),
then
G(x,y)=G(x)G(y) (4)
thus O = G(x) * (G(y) * I), and
all pixel points of the image are convolved by exploiting the separability of the Gaussian filter;
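A sketch of the separable convolution in Python: the kernel radius is an assumed truncation of the Gaussian, but the row-then-column structure follows G(x, y) = G(x)G(y) directly.

```python
import numpy as np

def gaussian_kernel_1d(sigma=1.4, radius=2):
    """1-D Gaussian G(x) sampled at integer offsets and normalised."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def gaussian_filter_separable(img, sigma=1.4):
    """Exploit G(x, y) = G(x)G(y): convolve every row with G(x), then
    every column with G(y), instead of one full 2-D convolution."""
    k = gaussian_kernel_1d(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

img = np.zeros((7, 7))
img[3, 3] = 1.0                        # unit impulse at the centre
blur = gaussian_filter_separable(img)  # impulse response = the 2-D kernel
```

The payoff of separability is cost: two 1-D passes take O(2r) multiplications per pixel instead of O(r²) for the equivalent 2-D mask.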
(3) The Sobel operator is used to respectively calculate the gradient magnitude |Δf| and the direction angle θ of the denoised image in the x and y directions; the gradient direction angles over 0°–360° are merged into 4 directions θ′: 0°, 45°, 90°, 135°;
for all edges, 180° = 0° and 225° = 45°; the gradient pixel with the maximum value along the gradient direction is retained and the other pixels are deleted. With filtering template M and σ ≈ 1.4, the x-direction gradient template m_x and the y-direction gradient template m_y are:
m_x = [−1 0 1; −2 0 2; −1 0 1], m_y = [−1 −2 −1; 0 0 0; 1 2 1];
convolving the image with the Sobel templates gives f_x = m_x * I and f_y = m_y * I, whence |Δf| = √(f_x² + f_y²) and θ = arctan(f_y / f_x);
Two thresholds are set, a high threshold t_high and a low threshold t_low. If |Δf| < t_low, pixel (x, y) is a non-edge pixel; if |Δf| > t_high, pixel (x, y) is an edge pixel; if t_low < |Δf| < t_high, the neighborhood is further enlarged for judgment;
finally, optimizing an edge detection result through morphological operation on the image;
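A condensed sketch of the Sobel gradients plus double-threshold classification; the single-pass 8-neighbour check stands in for the "enlarged neighborhood" judgment, and the morphological post-processing is omitted.

```python
import numpy as np

# Sobel templates m_x and m_y from step S13(3)
MX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
MY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def conv2_same(img, k):
    """3x3 'same' filtering with edge padding (sign convention is
    irrelevant here since only the gradient magnitude is used)."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def edge_map(img, t_low, t_high):
    """Double-threshold rule of step S13: |Δf| < t_low is non-edge,
    |Δf| > t_high is a strong edge, and in-between pixels are kept
    only if an 8-neighbour is a strong edge (simplified hysteresis)."""
    gx, gy = conv2_same(img, MX), conv2_same(img, MY)
    mag = np.hypot(gx, gy)             # |Δf|
    strong = mag > t_high
    weak = (mag >= t_low) & ~strong
    keep = strong.copy()
    for y, x in zip(*np.where(weak)):
        if strong[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].any():
            keep[y, x] = True
    return keep

img = np.zeros((6, 6)); img[:, 3:] = 1.0   # vertical step edge
edges = edge_map(img, t_low=1.0, t_high=2.0)
```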
the step S15 specifically comprises: establishing a standardized character template library according to preset conditions, and then matching the sample against the templates in the standard library; a multi-modal sparse auto-encoding algorithm improved from the sparse auto-encoding algorithm, together with a hierarchical image coding method based on set partitioning in hierarchical trees and matching pursuit, is adopted to unify input and output and to optimize the image reconstruction quality; finally, the recognition result is output;
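The template-matching core of step S15 can be sketched with normalised cross-correlation; the sparse auto-encoding and hierarchical coding refinements are omitted, and the toy 3 × 3 templates are illustrative.

```python
import numpy as np

def match_char(sample, template_library):
    """Template-matching recognition: z-normalise the sample and every
    template in the standard library, and return the label whose
    template has the highest normalised cross-correlation."""
    s = sample.astype(float).ravel()
    s = (s - s.mean()) / (s.std() + 1e-12)
    best_label, best_score = None, -np.inf
    for label, tmpl in template_library.items():
        t = tmpl.astype(float).ravel()
        t = (t - t.mean()) / (t.std() + 1e-12)
        score = float(np.dot(s, t)) / s.size   # 1.0 = perfect match
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

# toy 3x3 "templates" for a vertical bar and a horizontal bar
library = {
    "|": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]]),
    "-": np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]]),
}
label, score = match_char(np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]]), library)
```

Because both sample and template are z-normalised, the score is invariant to brightness and contrast, which is why the size normalisation of step S14 must happen first: only the shapes are compared.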
the step S2 specifically comprises the following steps:
step S21, using the distance matrices T_Δx and T_Δy obtained in step S1 as the expected distance matrices of the unmanned aerial vehicles;
step S22, splitting the motion trajectory of each unmanned aerial vehicle into the three directions of the x, y and z axes, and planning a path independently in each direction;
step S23, initializing the coordinate position of the Leader;
step S24, controlling the speed of each unmanned aerial vehicle on each direction axis at each time point through the control rates, thereby constraining its flight trajectory.
2. The method for controlling a multi-unmanned aerial vehicle aerial medium formation planning system according to claim 1, wherein,
the image recognition module comprises image acquisition, image preprocessing, character processing, character recognition and discretized coordinate-information storage; the distributed formation control module comprises position control, speed control and obstacle avoidance control; the matrix switching module realizes rapid switching of formation patterns based on a potential field function combined with an improved exit algorithm.
3. The control method of the formation planning system for multi-unmanned-aerial-vehicle aerial media according to claim 1, wherein the step S14 specifically is: if a single threshold is adopted, a single threshold T is taken, and a binary image g(x, y) is obtained after the original image is segmented; if multi-threshold segmentation is adopted, the segmentation thresholds are defined as T_0, T_1, …, T_k, and the output image is expressed as:
g(x, y) = k, when T_{k−1} ≤ f(x, y) ≤ T_k (k = 1, 2, …, n),
wherein n is an integer; the character images are then unified in size by normalization based on image scaling, which facilitates subsequent recognition.
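A sketch of the multi-threshold output rule together with a minimal size-normalisation step; nearest-neighbour scaling is an assumed stand-in for the patent's unspecified scaling method.

```python
import numpy as np

def multi_threshold_segment(f, thresholds):
    """Claim 3: output g(x, y) = k when T_{k-1} <= f(x, y) <= T_k,
    given ordered thresholds T_0 < T_1 < ... < T_n."""
    t = np.asarray(thresholds, dtype=float)
    g = np.zeros(f.shape, dtype=int)
    for k in range(1, len(t)):
        g[(f >= t[k - 1]) & (f <= t[k])] = k
    return g

def normalize_char(img, size=(8, 8)):
    """Unify character-image size by nearest-neighbour image scaling,
    a minimal stand-in for the normalisation of step S14."""
    h, w = img.shape
    ys = (np.arange(size[0]) * h // size[0]).clip(0, h - 1)
    xs = (np.arange(size[1]) * w // size[1]).clip(0, w - 1)
    return img[np.ix_(ys, xs)]

f = np.array([[10, 60], [120, 200]])
g = multi_threshold_segment(f, [0, 50, 100, 255])
```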
4. The control method of the formation planning system for multi-unmanned-aerial-vehicle aerial media according to claim 1, wherein the step S16 specifically is: record the pixel coordinate information of the character image retained after recognition as (x_i, y_i), and record the distance pixel value as d; the matrix formed by the pixel coordinates is denoted T_1; taking x_0 = max x_i and y_0 = max y_i, record the origin O_t(x_0, y_0), and calculate the offsets
Δx_i = x_0 − x_i, Δy_i = y_0 − y_i,
obtaining the distance matrix relative to the origin, which is stored to T_Δx, T_Δy.
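The origin selection and offset computation of claim 4 can be sketched as follows; whether the distance pixel value d multiplies the offsets is left implicit in the patent, so the scaling here is an assumption.

```python
import numpy as np

def distance_matrices(coords, d=1.0):
    """Claim 4 / step S16: take the origin O_t at (x_0, y_0) =
    (max x_i, max y_i) and store each character pixel's offset from it
    in T_dx, T_dy, scaled by the recorded distance pixel value d
    (the per-pixel spacing interpretation of d is an assumption)."""
    pts = np.asarray(coords, dtype=float)      # T_1: pixel coordinates (x_i, y_i)
    x0, y0 = pts[:, 0].max(), pts[:, 1].max()  # origin O_t
    t_dx = (x0 - pts[:, 0]) * d
    t_dy = (y0 - pts[:, 1]) * d
    return t_dx, t_dy

t_dx, t_dy = distance_matrices([(2, 5), (4, 1), (4, 5)], d=1.0)
```

Anchoring the origin at the maxima makes every offset non-negative, so the resulting matrices double directly as expected inter-drone distances in step S21.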
5. The method for controlling a multi-unmanned aerial vehicle aerial medium formation planning system according to claim 1, wherein the step S24 is specifically:
(1) Total control rate formula:
the total control of each unmanned aerial vehicle consists of three sub-controls: position control, speed control and obstacle avoidance control; the speed of the unmanned aerial vehicle serves as the carrier of the total control, and the path planning of the unmanned aerial vehicle is realized in the position iteration at each time point;
(2) Constraint control of the unmanned aerial vehicle position:
Σ_j a_pj (p_j − p_i − Δp), p = x, y, z
wherein p_i, p_j are the real-time positions of the i-th and j-th unmanned aerial vehicles; Δp is the expected distance between aircraft i and aircraft j in the expected distance matrix; p_j − p_i is the real-time position difference of the unmanned aerial vehicles; (p_j − p_i − Δp) is the deviation of the actual position distance from the expected position distance; a_pj is the position control rate, used to regulate the deviation between the real-time positions of any two unmanned aerial vehicles and their expected positions, so that each unmanned aerial vehicle finally reaches its expected position;
(3) Constraint control of the unmanned aerial vehicle speed:
Σ_j b_pj (v_pj − v_pi), p = x, y, z
wherein v_pi, v_pj are the real-time speeds of the i-th and j-th unmanned aerial vehicles; b_pj is the speed control rate, used to keep the speeds of the two unmanned aerial vehicles similar, thereby ensuring the integrity of the formation;
(4) Obstacle avoidance control of the unmanned aerial vehicle:
when the distance between two unmanned aerial vehicles is smaller than the expected safe distance, a collision risk is judged, and the control rate c_pj constrains the flight trajectory of the unmanned aerial vehicle to realize obstacle avoidance.
CN202110897965.5A 2021-08-05 2021-08-05 Formation planning system and method for multi-unmanned aerial vehicle aerial media Active CN113627445B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110897965.5A CN113627445B (en) 2021-08-05 2021-08-05 Formation planning system and method for multi-unmanned aerial vehicle aerial media


Publications (2)

Publication Number Publication Date
CN113627445A CN113627445A (en) 2021-11-09
CN113627445B true CN113627445B (en) 2023-08-15

Family

ID=78383232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110897965.5A Active CN113627445B (en) 2021-08-05 2021-08-05 Formation planning system and method for multi-unmanned aerial vehicle aerial media

Country Status (1)

Country Link
CN (1) CN113627445B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114326703A (en) * 2021-11-25 2022-04-12 珠海云洲智能科技股份有限公司 Unmanned ship performance script generation method, device and system
CN115185267B (en) * 2022-06-21 2023-07-18 北京远度互联科技有限公司 Method and device for generating target formation lattice, electronic equipment and storage medium
CN117668574B (en) * 2024-01-31 2024-05-14 利亚德智慧科技集团有限公司 Data model optimization method, device and equipment for light shadow show and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN108388270A (en) * 2018-03-21 2018-08-10 天津大学 Cluster unmanned plane track posture cooperative control method towards security domain
CN109213200A (en) * 2018-11-07 2019-01-15 长光卫星技术有限公司 Multiple no-manned plane cooperates with formation flight management system and method
CN111127498A (en) * 2019-12-12 2020-05-08 重庆邮电大学 Canny edge detection method based on edge self-growth
CN111580554A (en) * 2020-05-13 2020-08-25 东南大学 Indoor unmanned aerial vehicle formation flying method based on frame-by-frame identification and generation of original point cloud
CN113050672A (en) * 2021-03-25 2021-06-29 福州大学 Unmanned aerial vehicle path planning method for emergency information acquisition and transmission


Non-Patent Citations (1)

Title
High accuracy visual servoing for aerial manipulation using a 7 degrees of freedom industrial manipulator; LAIACKER M; IEEE (Issue 12); full text *


Similar Documents

Publication Publication Date Title
CN113627445B (en) Formation planning system and method for multi-unmanned aerial vehicle aerial media
CN109543502B (en) Semantic segmentation method based on deep multi-scale neural network
CN111126359B (en) High-definition image small target detection method based on self-encoder and YOLO algorithm
CN113065546B (en) Target pose estimation method and system based on attention mechanism and Hough voting
CN110163271B (en) Panoramic image target detection method based on spherical projection grid and spherical convolution
CN111914698B (en) Human body segmentation method, segmentation system, electronic equipment and storage medium in image
US20110182469A1 (en) 3d convolutional neural networks for automatic human action recognition
CN111191583A (en) Space target identification system and method based on convolutional neural network
Zhang et al. Efficient inductive vision transformer for oriented object detection in remote sensing imagery
CN110738690A (en) unmanned aerial vehicle video middle vehicle speed correction method based on multi-target tracking framework
CN113657560B (en) Weak supervision image semantic segmentation method and system based on node classification
CN111126127B (en) High-resolution remote sensing image classification method guided by multi-level spatial context characteristics
CN110874566B (en) Method and device for generating data set, learning method and learning device using same
US8238650B2 (en) Adaptive scene dependent filters in online learning environments
Yin et al. Graph neural network for 6D object pose estimation
CN112330701A (en) Tissue pathology image cell nucleus segmentation method and system based on polar coordinate representation
CN116429082A (en) Visual SLAM method based on ST-ORB feature extraction
Pavlov et al. Detection and recognition of objects on aerial photographs using convolutional neural networks
Gong et al. FastRoadSeg: Fast monocular road segmentation network
Safadoust et al. Self-supervised monocular scene decomposition and depth estimation
CN107798329A (en) Adaptive particle filter method for tracking target based on CNN
CN112686247A (en) Identification card number detection method and device, readable storage medium and terminal
US20230118401A1 (en) Graph-based video instance segmentation
CN112508007B (en) Space target 6D attitude estimation method based on image segmentation Mask and neural rendering
CN114627139A (en) Unsupervised image segmentation method, unsupervised image segmentation device and unsupervised image segmentation equipment based on pixel feature learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant