CN115576358A - Unmanned aerial vehicle distributed control method based on machine vision


Info

Publication number
CN115576358A
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
sub
real
similarity
Prior art date
Legal status
Granted
Application number
CN202211560263.9A
Other languages
Chinese (zh)
Other versions
CN115576358B (en)
Inventor
邓磊
冯璟煕
郑佳伟
曹洋舟
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202211560263.9A priority Critical patent/CN115576358B/en
Publication of CN115576358A publication Critical patent/CN115576358A/en
Application granted granted Critical
Publication of CN115576358B publication Critical patent/CN115576358B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10 Simultaneous control of position or course in three dimensions
    • G05D1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/104 Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircrafts, e.g. formation flying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to the technical field of intelligent control, and in particular to a machine-vision-based distributed control method for unmanned aerial vehicles, comprising the following steps: acquiring a target image of the target to be searched and a real-time image of each unmanned aerial vehicle's detection area; sliding the target image over the real-time image to obtain corresponding sub-regions and acquiring the similarity between each sub-region and the target image from gray-scale information; adjusting the sliding step length according to the similarity of two successive sub-regions so as to obtain the target regions in the real-time image; selecting the preferred sub-region in each real-time image and obtaining a preferred value for the corresponding unmanned aerial vehicle from the position of the center point of the preferred sub-region; and selecting the unmanned aerial vehicle with the largest preferred value as the lead aircraft and performing formation control of the unmanned aerial vehicle cluster according to the lead aircraft determined at that moment. This improves the control effect on the unmanned aerial vehicle cluster, raises the control efficiency of the cluster during operation, and makes control adjustment of the unmanned aerial vehicles more timely.

Description

Unmanned aerial vehicle distributed control method based on machine vision
Technical Field
The invention relates to the technical field of intelligent control, in particular to a distributed control method of an unmanned aerial vehicle based on machine vision.
Background
Unmanned aerial vehicles offer unique advantages and flexibility and are used in many application scenarios, such as battlefield tasks, search-and-rescue reconnaissance and target monitoring. The Leader-Follower idea in traditional unmanned aerial vehicle formation control is derived from the cooperative control of multiple ground mobile robots and is a mature, widely applied method. It nevertheless has some problems: in practical formation control the lead aircraft is usually designated in advance, but controlling the formation according to a preset lead aircraft ignores the areas actually detected by the unmanned aerial vehicles during the detection process. A lead aircraft fixed in advance may therefore fail to reach the target area to be detected in time and perform reasonable formation control, which results in poor real-time performance and low efficiency in the actual search process.
Disclosure of Invention
In order to solve the problems of low efficiency and poor real-time performance when searching with a fixed, preset lead aircraft, the invention provides a machine-vision-based distributed control method for unmanned aerial vehicles. The adopted technical scheme is as follows:
one embodiment of the invention provides a distributed control method of an unmanned aerial vehicle based on machine vision, which comprises the following steps:
acquiring a target image of a target to be searched and a real-time image of each unmanned aerial vehicle detection area;
sliding the target image over the real-time image to obtain corresponding sub-regions, and acquiring the similarity between each sub-region and the target image based on the gray-scale information of the target image and of each sub-region of the real-time image;
adjusting a sliding step length according to the similarity of the current sub-area and the previous sub-area, and sliding the target image in the real-time image according to the adjusted sliding step length to obtain a target area;
selecting the target region with the maximum similarity in each real-time image as the preferred sub-region, acquiring the Euclidean distances between the center point of the preferred sub-region and each edge of the real-time image to which it belongs, obtaining a preferred value for the corresponding unmanned aerial vehicle based on these Euclidean distances, and selecting the unmanned aerial vehicle with the largest preferred value as the lead aircraft for formation control.
Preferably, the step of selecting the unmanned aerial vehicle with the largest preferred value as the lead aircraft for formation control includes:
constructing a three-dimensional coordinate system with the position of the lead aircraft as the origin; acquiring the position coordinates of each unmanned aerial vehicle other than the lead aircraft based on the three-dimensional coordinate system;
and controlling and adjusting the position coordinates of the unmanned aerial vehicles based on a preset formation shape and a preset spacing.
Preferably, after the step of selecting the unmanned aerial vehicle with the largest preferred value as the lead aircraft for formation control, the method further includes:
obtaining auxiliary wingmen based on the position of the lead aircraft, obtaining the confidence of each auxiliary wingman when the lead aircraft fails, and taking the auxiliary wingman with the highest confidence as the new lead aircraft for formation control.
Preferably, the step of obtaining auxiliary wingmen based on the position of the lead aircraft includes:
recording the unmanned aerial vehicles in the unmanned aerial vehicle cluster other than the lead aircraft as wingmen; clustering based on the preferred value corresponding to each wingman to obtain at least two categories;
taking the category containing the largest number of wingmen as the target category; the wingmen that belong to the target category and lie within the communication range of the position of the lead aircraft are the auxiliary wingmen.
Preferably, the step of obtaining the confidence of each auxiliary wingman includes:
acquiring the three-dimensional coordinate vector of every unmanned aerial vehicle in the unmanned aerial vehicle cluster according to the three-dimensional coordinate system, and obtaining the corresponding spatial distance and angle feature value from the three-dimensional coordinate vectors between every two unmanned aerial vehicles;
the confidence coefficient calculation method of each auxiliary long machine comprises the following steps:
Figure 416309DEST_PATH_IMAGE002
wherein, the first and the second end of the pipe are connected with each other,
Figure DEST_PATH_IMAGE003
representing a confidence level;
Figure 428697DEST_PATH_IMAGE004
representing the number of drones, with the exception of the long plane a and the auxiliary wing planes B;
Figure DEST_PATH_IMAGE005
represents the second of M drones
Figure 210839DEST_PATH_IMAGE006
The spatial distance between the individual drone and the auxiliary bureaucratic plane B;
Figure DEST_PATH_IMAGE007
represents the second of M drones
Figure 975664DEST_PATH_IMAGE006
The spatial distance between each unmanned aerial vehicle and the long aircraft A;
Figure 120337DEST_PATH_IMAGE008
is a natural constant;
Figure DEST_PATH_IMAGE009
represents the second of M drones
Figure 575065DEST_PATH_IMAGE006
An angular characteristic value between the individual drone and an auxiliary wing plane B;
Figure 703558DEST_PATH_IMAGE010
represents the second of M drones
Figure 68811DEST_PATH_IMAGE006
The angle characteristic value between each unmanned aerial vehicle and the long aircraft A.
Preferably, the step of obtaining the similarity between a sub-region and the target image based on the gray-scale information of the target image and of each sub-region of the real-time image includes:
for any pixel point in the sub-region, acquiring the neighborhood pixel points within a preset range centered on that pixel point, the corresponding pixel point in the target image having neighborhood pixel points within the same preset range, and calculating the absolute gray-scale difference between each neighborhood pixel point and the center pixel point as the corresponding gray-scale difference;
and obtaining the similarity between the sub-region and the target image from these gray-scale differences as:

$$X=\frac{1}{n}\sum_{q=1}^{n}\left[\left(1-\frac{\left|G_E(q)-G_R(q)\right|}{\max\left(G_E(q),G_R(q)\right)}\right)\cdot\frac{1}{m}\sum_{j=1}^{m}e^{-\left|\Delta_E(q,j)-\Delta_R(q,j)\right|}\right]$$

where $X$ denotes the similarity; $G_E(q)$ denotes the gray value of pixel point $q$ in the target image E; $G_R(q)$ denotes the gray value of pixel point $q$ in sub-region R; $\max(\cdot)$ denotes the maximum function; $\Delta_E(q,j)$ denotes the gray-scale difference between pixel point $q$ in the target image E and its $j$-th neighborhood pixel point; $\Delta_R(q,j)$ denotes the gray-scale difference between pixel point $q$ in sub-region R and its $j$-th neighborhood pixel point; $m$ denotes the number of neighborhood pixel points within the preset range; $n$ denotes the number of pixel points in the target image E, which is consistent with the number of pixel points in sub-region R; and $e$ is the natural constant.
Preferably, the step of adjusting the sliding step size according to the similarity between the current sub-region and the previous sub-region includes:
acquiring a difference value of the similarity between the current sub-region and the previous sub-region as a similarity difference characteristic value; taking the product of the absolute value of the similarity difference characteristic value and a preset initial sliding step length as a variable quantity;
if the similarity difference characteristic value is larger than zero, subtracting the variable quantity from the initial sliding step length and rounding up to obtain an adjusted sliding step length;
if the similarity difference characteristic value is smaller than zero, adding the variable quantity to the initial sliding step length and rounding down to obtain an adjusted sliding step length;
if the similarity difference characteristic value is equal to zero, the initial sliding step length is not adjusted.
Preferably, the step of acquiring the target region includes:
and sliding the target image in the real-time image according to the adjusted sliding step length to obtain corresponding sub-regions, and obtaining the similarity of each sub-region, wherein the sub-region with the similarity larger than a preset similarity threshold is the target region.
Preferably, the step of acquiring the central point of the preferred sub-region includes:
and acquiring edge pixel points at the edge of the preferred subregion, and calculating the sum of Euclidean distances between each pixel point and all edge pixel points in the preferred subregion, wherein the corresponding pixel point when the sum of the Euclidean distances is minimum is a central point.
Preferably, the step of obtaining the preferred value of the corresponding drone based on the euclidean distance includes:
the Euclidean distance between the center point of the preferred subregion and each edge in the real-time image to which the preferred subregion belongs comprises: recording Euclidean distance from a central point to the upper edge of the real-time image to which the preferred sub-region belongs, euclidean distance from the central point to the lower edge of the real-time image to which the preferred sub-region belongs, euclidean distance from the central point to the left edge of the real-time image to which the preferred sub-region belongs, and Euclidean distance from the central point to the right edge of the real-time image to which the preferred sub-region belongs as a first distance, a second distance, a third distance and a fourth distance respectively;
respectively obtaining the absolute value of the difference between the first distance and the second distance and the absolute value of the difference between the third distance and the fourth distance, and taking the sum of the absolute value of the difference between the first distance and the second distance and the absolute value of the difference between the third distance and the fourth distance as an accumulated distance difference;
and constructing an exponential function with the negative of the accumulated distance difference as the exponent; the product of this exponential function and the similarity of the preferred sub-region is the preferred value of the corresponding unmanned aerial vehicle.
The invention has the following beneficial effects: the embodiment of the invention acquires the real-time image of each unmanned aerial vehicle's detection area and calculates the similarity between each sub-region of the real-time image and the target image to obtain the target regions. While the target regions are being obtained, the sliding step length is adjusted in real time according to the similarity between each sub-region and the target image, which avoids the poor analysis effect of a fixed sliding step length on the real-time image, improves the efficiency of obtaining sub-regions by sliding the target image, and preserves the completeness of the information in the sub-regions obtained by sliding. Furthermore, a preferred sub-region is obtained from the target regions in each real-time image, and preferred values are obtained from the Euclidean distances between the center point of the preferred sub-region and the edges of the real-time image in order to determine the lead aircraft of the unmanned aerial vehicle cluster. Combining the preferred sub-region determined from gray-scale information with the position of its center point to obtain the preferred value avoids the inaccuracy of relying on a single index and improves the accuracy of determining the lead aircraft in the cluster, so that formation control performed by an accurately and reliably chosen lead aircraft is more timely, more efficient and achieves a better control effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart of a distributed control method for an unmanned aerial vehicle based on machine vision according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means adopted by the present invention to achieve its intended purpose and their effects, the structure, features and effects of the machine-vision-based distributed control method for unmanned aerial vehicles according to the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The method of the present invention is suitable for formation control of an unmanned aerial vehicle cluster: the most suitable lead aircraft in the cluster is identified and formation control is performed by that lead aircraft, which improves the efficiency of formation control. A specific scheme of the machine-vision-based distributed control method for unmanned aerial vehicles is described in detail below with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a distributed control method for a machine vision-based drone according to an embodiment of the present invention is shown, where the method includes the following steps:
and step S100, acquiring a target image of a target to be searched and a real-time image of each unmanned aerial vehicle detection area.
Owing to their flexibility and unique advantages, unmanned aerial vehicles are increasingly applied in scenarios such as search-and-rescue reconnaissance and target monitoring. When several unmanned aerial vehicles carry out a task together, they need to be organized into a formation. In the embodiment of the present invention, the unmanned aerial vehicle cluster is first directed by a ground command center to search; during the search, the lead aircraft and the wingmen are determined according to the real-time situation observed by each unmanned aerial vehicle, and the wingmen are then controlled in real time according to the lead aircraft.
When determining the lead aircraft, the judgment is first made according to the area each unmanned aerial vehicle monitors in real time. The image of the target to be searched, acquired in advance, is recorded as the target image; the target the unmanned aerial vehicles search for is the target to be searched. A real-time image of each unmanned aerial vehicle's detection area is then acquired: the real-time image is a ground image captured by the unmanned aerial vehicle while flying, each unmanned aerial vehicle corresponds to one real-time image at each sampling moment, and the size of the real-time image is larger than that of the target image. The real-time images acquired by all unmanned aerial vehicles at the same sampling moment are analyzed, and the lead aircraft of the cluster is determined from the features of these real-time images.
Since the acquired target image and real-time images are RGB images, both are converted to gray-scale in order to reduce the subsequent amount of calculation. Graying is a well-known operation and an implementer may select any suitable method.
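As a brief illustration only (not part of the patent disclosure), the graying step could be performed with OpenCV as follows; the file names are hypothetical:

    import cv2

    # Hypothetical file names; any RGB target image and real-time image will do.
    target_rgb = cv2.imread("target.png")        # OpenCV loads images in BGR order
    realtime_rgb = cv2.imread("realtime.png")
    target_gray = cv2.cvtColor(target_rgb, cv2.COLOR_BGR2GRAY)
    realtime_gray = cv2.cvtColor(realtime_rgb, cv2.COLOR_BGR2GRAY)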
Step S200, sliding the target image in the real-time image to obtain corresponding sub-regions, and acquiring the similarity between each sub-region and the target image based on the gray-scale information of the target image and of each sub-region of the real-time image.
After the real-time images of all unmanned aerial vehicles at the same sampling moment have been acquired, the real-time image of each unmanned aerial vehicle is analyzed and the similarity between the real-time image it has captured and the target image is evaluated, which provides the basis for the later selection among the unmanned aerial vehicles.
Specifically, taking an unmanned aerial vehicle Q as an example, the real-time image it acquires at the current sampling moment is recorded as real-time image W and the target image is recorded as image E. To analyze the similarity between the real-time image W and the target image E, the target image E is slid over the real-time image W with an initial sliding step length D = 5. Since the size of the target image E is smaller than that of the real-time image W, each slide of the target image E over the real-time image W yields a sub-region of the same size as the target image E, and each pixel point of the sub-region has a corresponding pixel point in the target image E. For convenience of subsequent distinction and analysis, the sub-region obtained by one slide of the target image E over the real-time image W is recorded as sub-region R.
Further, taking any pixel point $q$ of sub-region R as an example, in order to compare and analyze the features of pixel point $q$ more accurately, the neighborhood pixel points within a preset range centered on $q$ are acquired for analysis. For any pixel point of the sub-region, the neighborhood pixel points within a preset range centered on that pixel point are acquired; the corresponding pixel point in the target image has neighborhood pixel points within the same preset range; and the absolute gray-scale difference between each neighborhood pixel point and the center pixel point is calculated as the corresponding gray-scale difference.

In the embodiment of the present invention the preset range is the 3*3 neighborhood centered on pixel point $q$, so the pixel point at the corresponding position in the target image E also has a 3*3 neighborhood. For pixel point $q$ in sub-region R and for pixel point $q$ in the target image E, the gray-scale difference between each neighborhood pixel point and the center pixel point, that is, the absolute value of their gray-value difference, is calculated as:

$$\Delta(q,j)=\left|g_j-g_q\right|$$

where $\Delta(q,j)$ denotes the gray-scale difference between the $j$-th neighborhood pixel point and pixel point $q$; $g_j$ denotes the gray value of the $j$-th neighborhood pixel point; and $g_q$ denotes the gray value of pixel point $q$.
Using the gray-scale differences between pixel point $q$ and its neighborhood pixel points as the surrounding feature information of pixel point $q$, the similarity between sub-region R and the target image E is obtained as:

$$X=\frac{1}{n}\sum_{q=1}^{n}\left[\left(1-\frac{\left|G_E(q)-G_R(q)\right|}{\max\left(G_E(q),G_R(q)\right)}\right)\cdot\frac{1}{m}\sum_{j=1}^{m}e^{-\left|\Delta_E(q,j)-\Delta_R(q,j)\right|}\right]$$

where $X$ denotes the similarity; $G_E(q)$ denotes the gray value of pixel point $q$ in the target image E; $G_R(q)$ denotes the gray value of pixel point $q$ in sub-region R; $\max(\cdot)$ denotes the maximum function; $\Delta_E(q,j)$ denotes the gray-scale difference between pixel point $q$ in the target image E and its $j$-th neighborhood pixel point; $\Delta_R(q,j)$ denotes the gray-scale difference between pixel point $q$ in sub-region R and its $j$-th neighborhood pixel point; $m$ denotes the number of neighborhood pixel points (m = 8 for the 3*3 neighborhood used in this embodiment); $n$ denotes the number of pixel points in the target image E, which is consistent with the number of pixel points in sub-region R; and $e$ is the natural constant.
$\left|G_E(q)-G_R(q)\right|$ is the absolute difference between the gray value of pixel point $q$ in sub-region R and its gray value in the target image E; the larger this absolute difference, the larger the difference between the gray-scale information that pixel point $q$ represents in sub-region R and in the target image E. In $\frac{\left|G_E(q)-G_R(q)\right|}{\max\left(G_E(q),G_R(q)\right)}$ the larger of the two gray values of pixel point $q$ is used as the denominator for normalization, so that the gray-feature difference of pixel point $q$ between the two images lies between 0 and 1, which is convenient for subsequent analysis and calculation. It follows that the larger $\left|G_E(q)-G_R(q)\right|$ is, the larger this normalized ratio is, the smaller the term $1-\frac{\left|G_E(q)-G_R(q)\right|}{\max\left(G_E(q),G_R(q)\right)}$ is, and the smaller the similarity of pixel point $q$ between sub-region R and the target image E.

$\left|\Delta_E(q,j)-\Delta_R(q,j)\right|$ is the absolute difference between the gray-scale difference of pixel point $q$ in the target image E with its $j$-th neighborhood pixel point and the gray-scale difference of pixel point $q$ in sub-region R with its $j$-th neighborhood pixel point; the larger this value, the larger the difference of the gray-difference information between pixel point $q$ and its neighborhood in the target image E and in sub-region R, and the smaller the corresponding similarity. The term $e^{-\left|\Delta_E(q,j)-\Delta_R(q,j)\right|}$ expresses this negative correlation with the similarity and limits the value to the range 0 to 1; the larger it is, the greater the similarity. $\frac{1}{m}\sum_{j=1}^{m}e^{-\left|\Delta_E(q,j)-\Delta_R(q,j)\right|}$ aggregates the corresponding values over all neighborhood pixel points; the larger it is, the larger the corresponding similarity. The product of the two factors represents the degree of similarity of pixel point $q$ between the target image E and sub-region R, and averaging over all pixel points yields the final similarity: the larger the similarity, the more similar the gray-scale information of the pixel points of sub-region R and of the target image E.
The greater the similarity between sub-region R and the target image E, the more likely sub-region R is the region of the target to be detected. Different sub-regions are obtained as the target image E keeps sliding over the real-time image W, and the similarity between each sub-region of the real-time image W and the target image E is obtained by the same method used for sub-region R; the greater the similarity, the more likely the sub-region is the region where the target to be detected is located.
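For illustration only, the similarity measure described above can be sketched in Python roughly as follows; the way the two factors are combined, the skipping of border pixels and all function and variable names are assumptions of this sketch rather than the patent's normative definition:

    import numpy as np

    def gray_diffs(img, y, x):
        """Absolute gray differences between pixel (y, x) and its 3*3 neighbors."""
        patch = img[y - 1:y + 2, x - 1:x + 2].astype(np.float64)
        diffs = np.abs(patch - patch[1, 1]).ravel()
        return np.delete(diffs, 4)            # drop the center itself -> 8 neighbors

    def similarity(sub_region, target):
        """Similarity X between a sub-region R and the target image E (both gray-scale arrays)."""
        assert sub_region.shape == target.shape
        h, w = target.shape
        total, count = 0.0, 0
        for y in range(1, h - 1):             # skip the border so every pixel has 8 neighbors
            for x in range(1, w - 1):
                g_e = float(target[y, x])
                g_r = float(sub_region[y, x])
                gray_term = 1.0 - abs(g_e - g_r) / max(g_e, g_r, 1e-6)   # 1e-6 guards division by zero
                d_e = gray_diffs(target, y, x)
                d_r = gray_diffs(sub_region, y, x)
                neigh_term = np.mean(np.exp(-np.abs(d_e - d_r)))
                total += gray_term * neigh_term
                count += 1
        return total / count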
Step S300, adjusting the sliding step length according to the similarity of the current sub-region and the previous sub-region, and sliding the target image in the real-time image according to the adjusted sliding step length to obtain the target regions.
As described in step S200, each slide of the target image over the real-time image yields a sub-region whose similarity can be obtained by the method of step S200. When an unmanned aerial vehicle actually searches for a target, the most important factors are search efficiency and search accuracy, so in order to improve the search efficiency the sliding step length of the target image is adjusted adaptively.
Obtaining the difference value of the corresponding similarity of the current subregion and the previous subregion as a similarity difference characteristic value; taking the product of the absolute value of the similarity difference characteristic value and a preset initial sliding step length as a variable quantity; if the similarity difference characteristic value is larger than zero, subtracting the variable quantity from the initial sliding step length and rounding up to obtain an adjusted sliding step length; if the similarity difference characteristic value is less than zero, adding the variable quantity to the initial sliding step length and rounding down to obtain an adjusted sliding step length; if the similarity difference characteristic value is equal to 0, the initial sliding step length is not adjusted.
Specifically, suppose sub-region R is the sub-region obtained by the first slide of the target image over the real-time image, and record the sub-region obtained by the second slide as R1; the sub-region R1 obtained by the second slide is the current sub-region, and sub-regions R and R1 each correspond to a similarity. The sliding step length is adjusted according to the similarities of the sub-regions obtained by the two slides, and the similarity difference feature value is calculated as:

$$F=X_{R1}-X_{R}$$

where $F$ denotes the similarity difference feature value, $X_{R1}$ denotes the similarity corresponding to sub-region R1, and $X_{R}$ denotes the similarity corresponding to sub-region R.

The change amount is $\left|F\right|\cdot D$, where $D$ is the sliding step length before adjustment, initially $D=5$, and $\left|\cdot\right|$ denotes the absolute value. When the similarity difference feature value $F$ is greater than zero, the current slide is moving gradually closer to the target region, so in order to obtain higher matching precision the sliding step length is adaptively reduced and adjusted to:

$$D'=\left\lceil D-\left|F\right|\cdot D\right\rceil$$

where $D$ is the sliding step length before adjustment, initially $D=5$; $D'$ is the adjusted sliding step length; $F$ denotes the similarity difference feature value; $\left|\cdot\right|$ denotes the absolute value; and $\left\lceil\cdot\right\rceil$ denotes rounding up.

When the similarity difference feature value $F$ is less than zero, the current slide is moving gradually away from the target region, so in order to reduce the amount of calculation and improve the matching efficiency the sliding step length is adaptively increased and adjusted to:

$$D'=\left\lfloor D+\left|F\right|\cdot D\right\rfloor$$

where $D$ is the sliding step length before adjustment, initially $D=5$; $D'$ is the adjusted sliding step length; $F$ denotes the similarity difference feature value; $\left|\cdot\right|$ denotes the absolute value; and $\left\lfloor\cdot\right\rfloor$ denotes rounding down.
When the similarity difference feature value F is zero, the sliding step length is left unchanged. The target image is thus slid over the real-time image with the adaptively adjusted sliding step length to obtain different sub-regions, and the similarity corresponding to each sub-region is obtained by the same method as in step S200; the greater the similarity, the more likely the sub-region is the region where the target to be searched is located. A similarity threshold is set to distinguish interference regions from target regions in the real-time image: when the similarity of a sub-region obtained after the sliding step length has been adjusted is greater than the similarity threshold, the sub-region is marked as a target region; otherwise the corresponding sub-region is an interference region.
Preferably, in the embodiment of the present invention, the similarity threshold is set to be 0.8, and each sub-region in the real-time image acquired by the unmanned aerial vehicle at different sampling times is analyzed to obtain the target region therein.
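The adaptive sliding search of steps S200 and S300 could be sketched roughly as below; the row-major scanning order, the wrap-around at the end of each row band, the minimum-step guard, the use of the current step (rather than the fixed initial step) as the basis for each adjustment, and the reuse of a similarity function such as the earlier sketch are all assumptions of this illustration:

    import math

    def adjust_step(step, sim_prev, sim_curr):
        """Adapt the sliding step length from two successive sub-region similarities."""
        f = sim_curr - sim_prev                     # similarity difference feature value
        change = abs(f) * step                      # change amount
        if f > 0:                                   # approaching the target region: finer step
            return max(1, math.ceil(step - change)) # max(1, .) is a guard added for this sketch
        if f < 0:                                   # moving away from it: coarser step
            return math.floor(step + change)
        return step                                 # unchanged when the difference is zero

    def find_target_regions(realtime, target, sim_fn, init_step=5, threshold=0.8):
        """Slide the target image over the real-time image with an adaptive step.

        sim_fn(sub_region, target) -> similarity in [0, 1], e.g. the measure of step S200.
        Returns (top, left, similarity) triples for sub-regions above the threshold.
        """
        h, w = target.shape
        H, W = realtime.shape
        regions, step, prev = [], init_step, None
        y = x = 0
        while y + h <= H:
            sim = sim_fn(realtime[y:y + h, x:x + w], target)
            if sim > threshold:
                regions.append((y, x, sim))         # target region; otherwise an interference region
            if prev is not None:
                step = adjust_step(step, prev, sim)
            prev = sim
            x += step
            if x + w > W:                           # wrap to the next row band
                x, y = 0, y + step
        return regions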
Step S400, selecting the target region with the maximum similarity in each real-time image as the preferred sub-region, acquiring the Euclidean distances between the center point of the preferred sub-region and each edge of the real-time image to which it belongs, obtaining the preferred value of the corresponding unmanned aerial vehicle based on these Euclidean distances, and selecting the unmanned aerial vehicle with the largest preferred value as the lead aircraft for formation control.
Using the method of step S300, the real-time image acquired by each unmanned aerial vehicle at each sampling moment is analyzed to obtain the target regions in it, and the target region with the maximum similarity to the target image is obtained for each real-time image. At the sampling moments when the unmanned aerial vehicle cluster has just started searching, the real-time images may contain no target region, so only the real-time images of the unmanned aerial vehicles at sampling moments for which a target region exists are analyzed further. Suppose there are N unmanned aerial vehicles at this time, so the same sampling moment corresponds to N real-time images, each containing a target region; every real-time image therefore has a target region of maximum similarity, which is recorded as the preferred sub-region.

The center point of the preferred sub-region of each real-time image is acquired: the center point is the pixel point for which the sum of the Euclidean distances to all edge pixel points of the preferred sub-region is minimal; edge pixel points and Euclidean distances are obtained by known prior techniques and are not described again. The Euclidean distances between the center point of the preferred sub-region and each edge of the real-time image to which it belongs, in the horizontal and vertical directions, are then obtained. These comprise: the Euclidean distance from the center point to the upper edge of the real-time image to which the preferred sub-region belongs, to the lower edge, to the left edge, and to the right edge, recorded respectively as the first distance, the second distance, the third distance and the fourth distance. The absolute value of the difference between the first and second distances and the absolute value of the difference between the third and fourth distances are obtained, and their sum is taken as the accumulated distance difference. An exponential function is then constructed with the negative of the accumulated distance difference as the exponent, and the product of this exponential function and the similarity of the preferred sub-region is the preferred value of the corresponding unmanned aerial vehicle.
From the preferred sub-region of each real-time image, the preferred value of the corresponding unmanned aerial vehicle as a candidate lead aircraft is obtained as:

$$Y=X_{\max}\cdot e^{-\left(\left|d_1-d_2\right|+\left|d_3-d_4\right|\right)}$$

where $Y$ denotes the preferred value; $X_{\max}$ denotes the maximum similarity corresponding to the preferred sub-region; $d_1$ denotes the Euclidean distance from the center point of the preferred sub-region to the upper edge of the real-time image, i.e. the first distance; $d_2$ denotes the distance to the lower edge, i.e. the second distance; $d_3$ denotes the distance to the left edge, i.e. the third distance; $d_4$ denotes the distance to the right edge, i.e. the fourth distance; $e$ is the natural constant; and $\left|\cdot\right|$ denotes the absolute value.

$\left|d_1-d_2\right|$ is the absolute difference between the Euclidean distances from the center point of the preferred sub-region to the upper and lower edges of the real-time image; the smaller it is, the closer the center point is to the center of the real-time image in the vertical direction. $\left|d_3-d_4\right|$ is the absolute difference between the Euclidean distances from the center point to the left and right edges of the real-time image; the smaller it is, the closer the center point is to the center of the real-time image in the horizontal direction. Therefore, the larger $e^{-\left(\left|d_1-d_2\right|+\left|d_3-d_4\right|\right)}$ is, the smaller the accumulated distance difference is, the closer the center point is to the center of the real-time image, and the better the detection and tracking effect of the corresponding unmanned aerial vehicle. At the same time, the greater the similarity between the preferred sub-region and the target image, the better the detection effect of the unmanned aerial vehicle at that moment. Consequently, the larger the preferred value, the greater the possibility that the corresponding unmanned aerial vehicle should command the formation as the lead aircraft, and the better its effect as the lead aircraft.
By analogy, the preferred value of the unmanned aerial vehicle corresponding to each real-time image is obtained from the preferred sub-region of that real-time image; the unmanned aerial vehicle with the largest preferred value is selected as the lead aircraft to command the formation, and the other unmanned aerial vehicles serve as wingmen and accept the control of the lead aircraft.
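A non-normative Python sketch of the preferred-value computation is given below; treating the rectangle center as the center point and all parameter names are assumptions of this sketch:

    import math

    def preferred_value(similarity, top, left, h, w, img_h, img_w):
        """Preferred value of a drone from its preferred sub-region.

        (top, left) is the sub-region's upper-left corner, (h, w) its size and
        (img_h, img_w) the size of the real-time image it belongs to.  Because the
        sub-region is rectangular, the pixel minimizing the summed distance to all
        edge pixels is taken here to be the rectangle center (an assumption).
        """
        cy = top + (h - 1) / 2.0
        cx = left + (w - 1) / 2.0
        d1 = cy                          # distance to the upper edge (first distance)
        d2 = (img_h - 1) - cy            # distance to the lower edge (second distance)
        d3 = cx                          # distance to the left edge  (third distance)
        d4 = (img_w - 1) - cx            # distance to the right edge (fourth distance)
        accumulated = abs(d1 - d2) + abs(d3 - d4)
        return similarity * math.exp(-accumulated)

    # Example: a 64x64 preferred sub-region roughly centered in a 480x640 real-time image.
    print(preferred_value(0.86, 208, 288, 64, 64, 480, 640))   # -> 0.86

The drone whose real-time image yields the largest such value would then be chosen as the lead aircraft.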
Further, consider that in traditional formation control, if the lead aircraft fails, the formation has poor robustness because the following wingmen have no independent decision-making capability, and the wingmen that follow the lead aircraft are paralyzed along with it; moreover, in multi-level formations the lower-level unmanned aerial vehicles accumulate larger position deviations, i.e. the errors propagate through the iterations. Therefore, in the embodiment of the present invention each wingman is analyzed and auxiliary wingmen are selected from among all wingmen. When the lead aircraft controls the wingmen other than the auxiliary wingmen, the auxiliary wingmen participate in the position decision, and when the lead aircraft fails, a new lead aircraft can be obtained by re-selecting among the auxiliary wingmen.
Since a different lead aircraft corresponding to the maximum preferred value may be obtained at each sampling moment, if the lead aircraft corresponding to the maximum preferred value at the next moment is inconsistent with the lead aircraft determined at the current sampling moment, it is judged whether that preferred value is greater than a preset change threshold: if it is, the lead aircraft is changed in real time; if it is not, the lead aircraft of the current moment continues to perform formation control until the preferred value exceeds the change threshold or the lead aircraft fails, at which point the lead aircraft is changed. As a preferable choice, the change threshold is set to 0.95 in the embodiment of the present invention.
Specifically, suppose unmanned aerial vehicle A is the lead aircraft, unmanned aerial vehicles B and U are auxiliary wingmen, and the other unmanned aerial vehicles are wingmen, and suppose the formation required in this embodiment is a horizontal line. When formation control is performed, information is first transmitted to the auxiliary wingmen and their three-dimensional channels are controlled. A three-dimensional coordinate system is constructed with the position of the lead aircraft A as the origin, and the initial coordinates of auxiliary wingman B and auxiliary wingman U are obtained in this coordinate system. If the preset spacing of the formation is K, i.e. the spacing between adjacent unmanned aerial vehicles is K, information is transmitted to auxiliary wingman B so that its speed is kept consistent with that of the lead aircraft A and its position is adjusted to the point at distance K from A on one side of the formation line. Auxiliary wingman U is controlled by the same method, so that its speed is consistent with the lead aircraft A and its position is adjusted so that auxiliary wingmen U and B are symmetrically distributed about the lead aircraft A; that is, if auxiliary wingman B occupies the point at distance K on one side of A, auxiliary wingman U occupies the symmetric point at distance K on the other side. By analogy, after the formation control of the positions of the auxiliary wingmen is completed, the positions of the other wingmen are coordinate-controlled.
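Purely as an illustration, assigning line-formation slots in the lead aircraft's coordinate frame might be sketched as below; the choice of the lateral (y) axis as the formation line and the alternating slot assignment are assumptions of this sketch, not the patent's prescription:

    def line_formation_slots(n_followers, spacing):
        """Target (x, y, z) coordinates for followers in a horizontal line centered on the lead aircraft at the origin.

        Slots alternate on both sides of the lead aircraft: +K, -K, +2K, -2K, ...
        """
        slots = []
        for i in range(1, n_followers + 1):
            k = (i + 1) // 2                 # 1, 1, 2, 2, 3, ...
            side = 1 if i % 2 == 1 else -1
            slots.append((0.0, side * k * spacing, 0.0))
        return slots

    # Example: four followers with spacing K = 10 m -> y = +10, -10, +20, -20.
    print(line_formation_slots(4, 10.0))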
The three-dimensional coordinate vectors of all unmanned aerial vehicles are acquired in the three-dimensional coordinate system, and the corresponding spatial distance and angle feature value are obtained from the three-dimensional coordinate vectors of every pair of unmanned aerial vehicles. When the lead aircraft A fails, the auxiliary wingmen B and U are taken as candidate lead aircraft and their confidences as the new lead aircraft are calculated. Taking auxiliary wingman B as an example, its confidence as the new lead aircraft when the lead aircraft A fails is calculated as:

$$P_B=\frac{1}{M}\sum_{i=1}^{M}\frac{e^{-\left|d_{iB}-d_{iA}\right|}+e^{-\left|\theta_{iB}-\theta_{iA}\right|}}{2}$$

where $P_B$ denotes the confidence; $M$ denotes the number of unmanned aerial vehicles excluding the lead aircraft A and the auxiliary wingman B; $d_{iB}$ denotes the spatial distance between the $i$-th of the M unmanned aerial vehicles and the auxiliary wingman B; $d_{iA}$ denotes the spatial distance between the $i$-th unmanned aerial vehicle and the lead aircraft A; $e$ is the natural constant; $\theta_{iB}$ denotes the angle feature value between the $i$-th unmanned aerial vehicle and the auxiliary wingman B; and $\theta_{iA}$ denotes the angle feature value between the $i$-th unmanned aerial vehicle and the lead aircraft A.
The method of acquiring the spatial distance is a known technique and is not described in detail. The angle feature value is obtained in the three-dimensional coordinate system with the lead aircraft A as the origin: the three-dimensional coordinate vector of each unmanned aerial vehicle is obtained from its position, and the included angle between the coordinate vectors of two unmanned aerial vehicles is the corresponding angle feature value; the formula for the included angle between two three-dimensional vectors is a known technique and is not repeated.

$\left|d_{iB}-d_{iA}\right|$ is the absolute difference between the spatial distances from the $i$-th unmanned aerial vehicle to the auxiliary wingman B and to the lead aircraft A; the smaller it is, the smaller the difference between these spatial distances, and the greater the possibility that auxiliary wingman B can act as the new lead aircraft. Similarly, $\left|\theta_{iB}-\theta_{iA}\right|$ is the absolute difference between the angle feature values of the $i$-th unmanned aerial vehicle with respect to auxiliary wingman B and to the lead aircraft A; the smaller it is, the closer these angle feature values, and the greater the possibility that auxiliary wingman B is the new lead aircraft. A negatively correlated exponential function with the natural constant $e$ as the base is used for normalization, which ensures that the absolute difference of the spatial distances and the absolute difference of the angle feature values are negatively correlated with the confidence and facilitates subsequent analysis and calculation. Averaging the spatial-distance information and the angle-feature-value information yields the confidence that auxiliary wingman B can be the new lead aircraft, which makes the result more reliable and convincing.
The confidences of the other auxiliary wingmen are obtained by the same method used for auxiliary wingman B. When the lead aircraft A fails, the auxiliary wingman with the highest confidence becomes the new lead aircraft, and the unmanned aerial vehicles of the cluster are controlled by the new lead aircraft.
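Under the confidence formula reconstructed above, a rough, non-normative Python sketch could look as follows; the use of a common reference frame for the angle feature and all names are assumptions of this sketch:

    import numpy as np

    def angle_feature(p, q):
        """Included angle (rad) between the coordinate vectors of two drones."""
        cos = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12)
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    def confidence(candidate, lead, others):
        """Confidence that an auxiliary wingman can replace the failed lead aircraft."""
        terms = []
        for p in others:                          # the M drones other than the candidate and the lead
            d_b = np.linalg.norm(p - candidate)   # spatial distance to the candidate
            d_a = np.linalg.norm(p - lead)        # spatial distance to the lead aircraft
            t_b = angle_feature(p, candidate)
            t_a = angle_feature(p, lead)
            terms.append(0.5 * (np.exp(-abs(d_b - d_a)) + np.exp(-abs(t_b - t_a))))
        return float(np.mean(terms))

    # Example with hypothetical positions (meters) in a common reference frame.
    lead = np.array([100.0, 50.0, 30.0])
    b = np.array([100.0, 60.0, 30.0])
    others = [np.array([100.0, 40.0, 30.0]), np.array([100.0, 70.0, 30.0])]
    print(confidence(b, lead, others))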
The method of selecting the auxiliary wingmen in this embodiment is as follows: one-dimensional k-means clustering is performed on the preferred values of all unmanned aerial vehicles other than the lead aircraft, where the clustering distance is the absolute difference between the preferred values of two unmanned aerial vehicles. The number of clusters is set to k = 2, and clustering all these unmanned aerial vehicles yields two categories. The category containing the largest number of unmanned aerial vehicles is selected as the target category, and all unmanned aerial vehicles that belong to the target category and lie within the communication range of the lead aircraft are the auxiliary wingmen; the communication range is determined by the configuration of the unmanned aerial vehicles and is known, so the auxiliary wingmen corresponding to the lead aircraft can be obtained.
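A minimal sketch of this auxiliary-wingman selection, assuming scikit-learn's KMeans for the one-dimensional k = 2 clustering (the library choice and the data layout are assumptions made for this illustration):

    import numpy as np
    from sklearn.cluster import KMeans

    def auxiliary_wingmen(preferred, lead_pos, positions, comm_range):
        """Select auxiliary wingmen by 1-D k-means (k=2) over the wingmen's preferred values.

        preferred: dict {uav_id: preferred value} for all wingmen (lead excluded)
        positions: dict {uav_id: np.array position}, lead_pos: np.array, comm_range: float
        """
        ids = list(preferred)
        vals = np.array([preferred[i] for i in ids]).reshape(-1, 1)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vals)
        target = np.bincount(labels).argmax()     # the category with the most wingmen
        return [i for i, lab in zip(ids, labels)
                if lab == target and np.linalg.norm(positions[i] - lead_pos) <= comm_range]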
In summary, in the embodiment of the present invention a target image of the target to be searched and a real-time image of each unmanned aerial vehicle's detection area are acquired; the target image is slid over the real-time image to obtain corresponding sub-regions, and the similarity between each sub-region and the target image is acquired based on the gray-scale information of the target image and of each sub-region of the real-time image; the sliding step length is adjusted according to the similarity of the current sub-region and the previous sub-region, and the target image is slid over the real-time image with the adjusted sliding step length to obtain the target regions; the target region with the maximum similarity in each real-time image is selected as the preferred sub-region, the Euclidean distances between the center point of the preferred sub-region and each edge of the real-time image to which it belongs are acquired, a preferred value of the corresponding unmanned aerial vehicle is obtained from these Euclidean distances, and the unmanned aerial vehicle with the largest preferred value is selected as the lead aircraft for formation control. This improves the reliability of the lead-aircraft selection, and formation control of the unmanned aerial vehicle cluster based on a more reliable and accurate lead aircraft ensures the control effect and control efficiency. In addition, in this embodiment a three-dimensional coordinate system is established from the position of the lead aircraft, so that the lead aircraft can determine the position of each unmanned aerial vehicle when controlling the formation; auxiliary wingmen are selected according to the position of the lead aircraft and the preferred value of each unmanned aerial vehicle, and when the lead aircraft fails, the most suitable auxiliary wingman can be found timely and accurately as the new lead aircraft by obtaining the confidence of each auxiliary wingman, which improves the robustness of the overall control and prevents the control of the fleet from being paralyzed by special situations such as the failure of an unmanned aerial vehicle.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner; for the same or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments.
The above description is only of preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements, improvements and the like made within the spirit and principle of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A distributed control method for an unmanned aerial vehicle based on machine vision is characterized by comprising the following steps:
acquiring a target image of a target to be searched and a real-time image of each unmanned aerial vehicle detection area;
sliding the target image in the real-time image to obtain corresponding sub-regions, and acquiring the similarity between each sub-region and the target image based on the gray information of the target image and of each sub-region in the real-time image;
adjusting a sliding step length according to the similarity of the current sub-area and the previous sub-area, and sliding the target image in the real-time image according to the adjusted sliding step length to obtain a target area;
selecting the target area with the maximum similarity in the real-time image as a preferred sub-region, acquiring the Euclidean distance between the center point of the preferred sub-region and each edge of the real-time image to which the preferred sub-region belongs, acquiring a preferred value of the corresponding unmanned aerial vehicle based on the Euclidean distances, and selecting the unmanned aerial vehicle with the largest preferred value as a lead aircraft to perform formation control.
2. The distributed control method for unmanned aerial vehicles based on machine vision as claimed in claim 1, wherein the step of selecting the unmanned aerial vehicle with the largest preferred value as the lead aircraft for formation control comprises:
constructing a three-dimensional coordinate system with the position of the lead aircraft as the origin; acquiring the position coordinates of each unmanned aerial vehicle other than the lead aircraft based on the three-dimensional coordinate system;
and controlling and adjusting the position coordinates of the unmanned aerial vehicle based on the preset formation form and the preset interval.
3. The distributed control method for unmanned aerial vehicles based on machine vision according to claim 2, wherein after the step of selecting the unmanned aerial vehicle with the largest preferred value as the lead aircraft for formation control, the method further comprises:
obtaining auxiliary wingmen based on the position of the lead aircraft, obtaining the confidence of each auxiliary wingman when the lead aircraft fails, and performing formation control with the auxiliary wingman having the highest confidence as the new lead aircraft.
4. The distributed control method for unmanned aerial vehicles based on machine vision according to claim 3, wherein the step of obtaining auxiliary wingmen based on the position of the lead aircraft comprises:
recording the unmanned aerial vehicles in the unmanned aerial vehicle cluster other than the lead aircraft as wingmen; clustering based on the preferred value corresponding to each said wingman to obtain at least two categories;
taking the category containing the largest number of wingmen as the target category, wherein the wingmen belonging to the target category and lying within the communication range of the position of the lead aircraft are the auxiliary wingmen.
5. The distributed control method for unmanned aerial vehicles based on machine vision according to claim 4, wherein the step of obtaining the confidence of each auxiliary wingman comprises:
acquiring three-dimensional coordinate vectors of all unmanned aerial vehicles in the unmanned aerial vehicle cluster according to the three-dimensional coordinate system, and acquiring the corresponding spatial distances and angle characteristic values from the three-dimensional coordinate vectors of every two unmanned aerial vehicles;
the confidence of each auxiliary wingman is calculated as follows:
[confidence formula published as an image in the original document]
wherein the quantities in the formula are: the confidence; M, the number of unmanned aerial vehicles other than the lead aircraft A and the auxiliary wingman B; for the i-th of the M unmanned aerial vehicles, the spatial distance between that unmanned aerial vehicle and the auxiliary wingman B, the spatial distance between that unmanned aerial vehicle and the lead aircraft A, the angle characteristic value between that unmanned aerial vehicle and the auxiliary wingman B, and the angle characteristic value between that unmanned aerial vehicle and the lead aircraft A; and the natural constant.
6. The distributed control method for unmanned aerial vehicles based on machine vision according to claim 1, wherein the step of acquiring the similarity between each sub-region and the target image based on the gray information of the target image and of each sub-region in the real-time image comprises:
for any pixel point in the sub-region, acquiring its neighborhood pixel points within a preset range centered on that pixel point, the pixel point at the corresponding position in the target image likewise having neighborhood pixel points within the same preset range, and calculating the absolute value of the gray difference between each neighborhood pixel point and the pixel point as the corresponding gray difference;
and obtaining the similarity between the sub-region and the target image according to the gray difference as follows:
[similarity formula published as an image in the original document]
wherein the quantities in the formula are: the similarity; for any pixel point, the gray value of that pixel point in the target image E and the gray value of the corresponding pixel point in the sub-region R; a maximum function; the gray difference between the pixel point in the target image E and each of its neighborhood pixel points, and the gray difference between the corresponding pixel point in the sub-region R and each of its neighborhood pixel points; the number of pixel points in the target image E, which is consistent with the number of pixel points in the sub-region R; and the natural constant.
7. The distributed control method for unmanned aerial vehicle based on machine vision according to claim 1, wherein the step of adjusting the sliding step length according to the similarity between the current sub-area and the previous sub-area comprises:
acquiring a difference value of the similarity between the current sub-region and the previous sub-region as a similarity difference characteristic value; taking the product of the absolute value of the similarity difference characteristic value and a preset initial sliding step length as a variable quantity;
if the similarity difference characteristic value is larger than zero, subtracting the variable quantity from the initial sliding step length and rounding up to obtain an adjusted sliding step length;
if the similarity difference characteristic value is smaller than zero, adding the variable quantity to the initial sliding step length and rounding down to obtain an adjusted sliding step length;
if the similarity difference characteristic value is equal to zero, the initial sliding step length is not adjusted.
8. The distributed control method for unmanned aerial vehicle based on machine vision according to claim 1, wherein the step of obtaining the target area comprises:
and sliding the target image in the real-time image according to the adjusted sliding step length to obtain corresponding sub-regions, and obtaining the similarity of each sub-region, wherein the sub-region with the similarity larger than a preset similarity threshold is the target region.
9. The distributed control method for unmanned aerial vehicles based on machine vision according to claim 1, wherein the step of obtaining the center point of the preferred sub-region comprises:
acquiring the edge pixel points at the edge of the preferred sub-region, and calculating, for each pixel point in the preferred sub-region, the sum of the Euclidean distances to all edge pixel points, wherein the pixel point with the minimum sum of Euclidean distances is the center point.
10. The distributed control method for unmanned aerial vehicles based on machine vision as claimed in claim 1, wherein the step of obtaining the preferred value of the corresponding unmanned aerial vehicle based on the Euclidean distances comprises:
the Euclidean distances between the center point of the preferred sub-region and each edge of the real-time image to which the preferred sub-region belongs comprise the Euclidean distances from the center point to the upper edge, the lower edge, the left edge and the right edge of that real-time image, recorded as a first distance, a second distance, a third distance and a fourth distance respectively;
respectively obtaining the absolute value of the difference between the first distance and the second distance and the absolute value of the difference between the third distance and the fourth distance, and taking the sum of the absolute value of the difference between the first distance and the second distance and the absolute value of the difference between the third distance and the fourth distance as an accumulated distance difference;
and constructing an exponential function by taking the negative number of the accumulated distance difference value as an exponent, wherein the product result of the exponential function and the similarity of the preferred sub-region is the preferred value of the corresponding unmanned aerial vehicle.
CN202211560263.9A 2022-12-07 2022-12-07 Unmanned aerial vehicle distributed control method based on machine vision Active CN115576358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211560263.9A CN115576358B (en) 2022-12-07 2022-12-07 Unmanned aerial vehicle distributed control method based on machine vision

Publications (2)

Publication Number Publication Date
CN115576358A true CN115576358A (en) 2023-01-06
CN115576358B CN115576358B (en) 2023-03-10

Family

ID=84590177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211560263.9A Active CN115576358B (en) 2022-12-07 2022-12-07 Unmanned aerial vehicle distributed control method based on machine vision

Country Status (1)

Country Link
CN (1) CN115576358B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107491731A (en) * 2017-07-17 2017-12-19 南京航空航天大学 A kind of Ground moving target detection and recognition methods towards precision strike
CN108021868A (en) * 2017-11-06 2018-05-11 南京航空航天大学 A kind of quick highly reliable circular target detection recognition method
CN109949229A (en) * 2019-03-01 2019-06-28 北京航空航天大学 A kind of target cooperative detection method under multi-platform multi-angle of view
CN110262553A (en) * 2019-06-27 2019-09-20 西北工业大学 Fixed-wing UAV Formation Flight apparatus and method based on location information
CN111412788A (en) * 2020-03-26 2020-07-14 湖南科技大学 Suspected target detection system of thunder field
CN113658085A (en) * 2021-10-20 2021-11-16 北京优幕科技有限责任公司 Image processing method and device
CN113867393A (en) * 2021-10-19 2021-12-31 中国人民解放军军事科学院国防科技创新研究院 Flight path controllable unmanned aerial vehicle formation form reconstruction method
CN113989308A (en) * 2021-11-04 2022-01-28 浙江大学 Polygonal target segmentation method based on Hough transform and template matching
CN114967742A (en) * 2022-05-27 2022-08-30 北京理工大学 Wild goose group-imitated multi-fixed-wing unmanned aerial vehicle formation reconstruction method
CN115374652A (en) * 2022-10-21 2022-11-22 中国人民解放军国防科技大学 Evidence reasoning-based unmanned aerial vehicle cluster cooperative obstacle avoidance capability test evaluation method

Also Published As

Publication number Publication date
CN115576358B (en) 2023-03-10

Similar Documents

Publication Publication Date Title
CN108958282B (en) Three-dimensional space path planning method based on dynamic spherical window
US7054724B2 (en) Behavior control apparatus and method
CN115147437B (en) Intelligent robot guiding machining method and system
CN109483507B (en) Indoor visual positioning method for walking of multiple wheeled robots
CN112927264B (en) Unmanned aerial vehicle tracking shooting system and RGBD tracking method thereof
CN109328615B (en) Lawn boundary recognition method, control method of mowing device and mowing device
CN115880674B (en) Obstacle avoidance steering correction method based on unmanned mine car
CN113313701B (en) Electric vehicle charging port two-stage visual detection positioning method based on shape prior
Himri et al. Semantic SLAM for an AUV using object recognition from point clouds
CN113902862B (en) Visual SLAM loop verification system based on consistency cluster
CN115576358B (en) Unmanned aerial vehicle distributed control method based on machine vision
CN106771329B (en) Method for detecting running speed of unmanned aerial vehicle in deceleration process
CN113469195B (en) Target identification method based on self-adaptive color quick point feature histogram
CN112529891B (en) Method and device for identifying hollow holes and detecting contours based on point cloud and storage medium
CN117237902B (en) Robot character recognition system based on deep learning
CN111461194B (en) Point cloud processing method and device, driving control method, electronic device and vehicle
Wang et al. Research on vehicle detection based on faster R-CNN for UAV images
CN113316080B (en) Indoor positioning method based on Wi-Fi and image fusion fingerprint
CN109803234A (en) Unsupervised fusion and positioning method based on the constraint of weight different degree
CN115511853A (en) Remote sensing ship detection and identification method based on direction variable characteristics
CN115615436A (en) Multi-machine repositioning unmanned aerial vehicle positioning method
CN111814662B (en) Visible light image airplane rapid detection method based on miniature convolutional neural network
CN115239902A (en) Method, device and equipment for establishing surrounding map of mobile equipment and storage medium
CN117891258B (en) Intelligent planning method for track welding path
CN112766037B (en) 3D point cloud target identification and positioning method based on maximum likelihood estimation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant