CN115576358A - Unmanned aerial vehicle distributed control method based on machine vision - Google Patents
- Publication number: CN115576358A
- Application number: CN202211560263.9A
- Authority: CN (China)
- Legal status: Granted (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
- G05D1/104—Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircrafts, e.g. formation flying
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/17—Terrestrial scenes taken from planes or by drones
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
Abstract
The invention relates to the technical field of intelligent control, in particular to a machine-vision-based distributed control method for unmanned aerial vehicles, comprising the following steps: acquiring a target image of a target to be searched and a real-time image of each unmanned aerial vehicle's detection area; sliding the target image across the real-time image to obtain sub-regions and computing the similarity between each sub-region and the target image from gray-level information; adjusting the sliding step length according to the similarity of successive sub-regions so as to locate the target region in the real-time image; selecting the preferred sub-region in each real-time image and computing a preferred value for the corresponding unmanned aerial vehicle from the position of the preferred sub-region's center point; and selecting the unmanned aerial vehicle with the largest preferred value as the lead aircraft and performing formation control of the unmanned aerial vehicle cluster accordingly. This improves the effectiveness and efficiency of controlling the unmanned aerial vehicle cluster during operation and makes control adjustments more timely.
Description
Technical Field
The invention relates to the technical field of intelligent control, in particular to a distributed control method of an unmanned aerial vehicle based on machine vision.
Background
Unmanned aerial vehicles offer unique advantages and flexibility and are used in many application scenarios, such as battlefield tasks, rescue and reconnaissance, and target monitoring. The Leader-Follower approach in traditional unmanned aerial vehicle formation control derives from the cooperative control of multiple ground mobile robots and is a mature, widely applied method. It nevertheless has problems: in practice the lead aircraft is usually designated in advance, but controlling the formation around a preset lead aircraft ignores the areas actually detected by the unmanned aerial vehicles during the search, so the fixed lead aircraft may fail to locate the target area to be detected in time and direct a reasonable formation, resulting in poor real-time performance and low efficiency in the actual search process.
Disclosure of Invention
In order to solve the problems of low efficiency and poor real-time performance when searching with a fixed lead aircraft, the invention aims to provide a machine-vision-based distributed control method for unmanned aerial vehicles. The adopted technical scheme is as follows:
one embodiment of the invention provides a distributed control method of an unmanned aerial vehicle based on machine vision, which comprises the following steps:
acquiring a target image of a target to be searched and a real-time image of each unmanned aerial vehicle detection area;
sliding the target image in the real-time image to obtain corresponding sub-regions, and acquiring the similarity between each sub-region and the target image based on the gray-level information of the sub-region and of the target image;
adjusting the sliding step length according to the similarity of the current sub-region and the previous sub-region, and sliding the target image in the real-time image with the adjusted sliding step length to obtain the target region;
selecting the target area with the maximum similarity in a real-time image as the preferred sub-region, acquiring the Euclidean distance between the center point of the preferred sub-region and each edge of the real-time image to which it belongs, obtaining a preferred value for the corresponding unmanned aerial vehicle based on these Euclidean distances, and selecting the unmanned aerial vehicle with the largest preferred value as the lead aircraft for formation control.
Preferably, the step of selecting the unmanned aerial vehicle with the largest preferred value as the lead aircraft for formation control includes:
constructing a three-dimensional coordinate system with the position of the lead aircraft as the origin, and acquiring the position coordinates of every unmanned aerial vehicle other than the lead aircraft in this coordinate system;
controlling and adjusting the position coordinates of the unmanned aerial vehicles based on a preset formation shape and preset spacing.
Preferably, after the step of selecting the unmanned aerial vehicle with the largest preferred value as the lead aircraft for formation control, the method further includes:
obtaining auxiliary wingmen based on the position of the lead aircraft, obtaining the confidence of each auxiliary wingman when the lead aircraft fails, and taking the auxiliary wingman with the highest confidence as the new lead aircraft for formation control.
Preferably, the step of obtaining auxiliary wingmen based on the position of the lead aircraft includes:
recording the unmanned aerial vehicles in the cluster other than the lead aircraft as wingmen, and clustering the wingmen based on their preferred values to obtain at least two categories;
the category containing the largest number of wingmen is the target category, and the wingmen that are within communication range of the lead aircraft's position and belong to the target category are the auxiliary wingmen.
Preferably, the step of obtaining the confidence of each auxiliary wingman includes:
acquiring the three-dimensional coordinate vector of every unmanned aerial vehicle in the cluster from the three-dimensional coordinate system, and obtaining the corresponding spatial distance and angle feature value from the three-dimensional coordinate vectors of each pair of unmanned aerial vehicles;
the confidence of each auxiliary wingman is calculated as (the original equation is rendered only as an image; the form below is reconstructed from the symbol definitions that follow it):

C_B = (1/M) * Σ_{i=1}^{M} exp(-|d_i,B - d_i,A|) * exp(-|θ_i,B - θ_i,A|)

where C_B represents the confidence; M represents the number of unmanned aerial vehicles other than the lead aircraft A and the auxiliary wingman B; d_i,B represents the spatial distance between the i-th of the M unmanned aerial vehicles and the auxiliary wingman B; d_i,A represents the spatial distance between the i-th of the M unmanned aerial vehicles and the lead aircraft A; e is the natural constant; θ_i,B represents the angle feature value between the i-th of the M unmanned aerial vehicles and the auxiliary wingman B; and θ_i,A represents the angle feature value between the i-th of the M unmanned aerial vehicles and the lead aircraft A.
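As a concrete illustration, the confidence computation can be sketched in Python. Because the source's equation survives only as an image, the exp-of-absolute-difference combination of distance and angle terms, and the choice of angle feature (angle of the line from one vehicle to another against the x-axis), are assumptions rather than the patent's verbatim formula:

```python
import numpy as np

def angle_feature(src, dst):
    # Assumed angle feature: angle between the line src->dst and the x-axis.
    v = np.asarray(dst, dtype=float) - np.asarray(src, dtype=float)
    return float(np.arctan2(np.linalg.norm(v[1:]), v[0]))

def confidence(lead_a, wing_b, others):
    """Confidence of auxiliary wingman B: average, over the M remaining
    drones, of how closely B reproduces lead aircraft A's spatial-distance
    and angle relationships (exp(-|difference|) form is an assumption)."""
    a = np.asarray(lead_a, dtype=float)
    b = np.asarray(wing_b, dtype=float)
    total = 0.0
    for p in others:
        p = np.asarray(p, dtype=float)
        d_b = np.linalg.norm(p - b)   # spatial distance drone i -> B
        d_a = np.linalg.norm(p - a)   # spatial distance drone i -> A
        t_b = angle_feature(b, p)     # angle feature drone i vs. B
        t_a = angle_feature(a, p)     # angle feature drone i vs. A
        total += np.exp(-abs(d_b - d_a)) * np.exp(-abs(t_b - t_a))
    return total / len(others)
```

A wingman whose distances and angles to the rest of the cluster match the lead aircraft's exactly scores 1, and the score decays toward 0 as its relative geometry diverges, which matches the stated intent of promoting the most lead-like auxiliary wingman.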
Preferably, the step of obtaining the similarity between a sub-region and the target image based on the gray-level information of the sub-region and of the target image includes:
for any pixel point q in the sub-region, acquiring the neighborhood pixel points within a preset range centered on q, where the pixel point at the corresponding position in the target image has neighborhood pixel points within the same preset range, and calculating the absolute gray-value difference between each neighborhood pixel point and the pixel point itself as the corresponding gray difference;
obtaining the similarity between the sub-region R and the target image E from the gray differences as (the original equation is rendered only as an image; the form below is reconstructed from the symbol definitions that follow it):

S = (1/N) * Σ_{q=1}^{N} [ (1 - |G_E(q) - G_R(q)| / max(G_E(q), G_R(q))) * (1/n) * Σ_{j=1}^{n} exp(-|Δ_j^E(q) - Δ_j^R(q)|) ]

where S represents the similarity; G_E(q) represents the gray value of pixel point q in the target image E; G_R(q) represents the gray value of pixel point q in the sub-region R; max() is the maximum function; Δ_j^E(q) represents the gray difference between pixel point q in the target image E and its j-th neighborhood pixel point; Δ_j^R(q) represents the gray difference between pixel point q in the sub-region R and its j-th neighborhood pixel point; N represents the number of pixel points in the target image E, which equals the number of pixel points in the sub-region R; n represents the number of neighborhood pixel points; and e is the natural constant.
Preferably, the step of adjusting the sliding step size according to the similarity between the current sub-region and the previous sub-region includes:
acquiring a difference value of the similarity between the current sub-region and the previous sub-region as a similarity difference characteristic value; taking the product of the absolute value of the similarity difference characteristic value and a preset initial sliding step length as a variable quantity;
if the similarity difference characteristic value is larger than zero, subtracting the variable quantity from the initial sliding step length and rounding up to obtain an adjusted sliding step length;
if the similarity difference characteristic value is smaller than zero, adding the variable quantity to the initial sliding step length and rounding down to obtain an adjusted sliding step length;
if the similarity difference characteristic value is equal to zero, the initial sliding step length is not adjusted.
Preferably, the step of acquiring the target region includes:
and sliding the target image in the real-time image according to the adjusted sliding step length to obtain corresponding sub-regions, and obtaining the similarity of each sub-region, wherein the sub-region with the similarity larger than a preset similarity threshold is the target region.
Preferably, the step of acquiring the central point of the preferred sub-region includes:
and acquiring edge pixel points at the edge of the preferred subregion, and calculating the sum of Euclidean distances between each pixel point and all edge pixel points in the preferred subregion, wherein the corresponding pixel point when the sum of the Euclidean distances is minimum is a central point.
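A brute-force Python sketch of this center-point definition, assuming a rectangular preferred sub-region of hypothetical size h×w with coordinates local to the sub-region:

```python
import numpy as np

def center_point(h, w):
    """Center point of an h-by-w preferred sub-region: the pixel whose
    summed Euclidean distance to all edge (border) pixels is minimal,
    found by exhaustive search as in the claim."""
    border = np.asarray([(y, x) for y in range(h) for x in range(w)
                         if y in (0, h - 1) or x in (0, w - 1)], dtype=float)
    best, best_sum = None, float("inf")
    for y in range(h):
        for x in range(w):
            # Sum of Euclidean distances from (y, x) to every border pixel.
            s = np.sqrt(((border - (y, x)) ** 2).sum(axis=1)).sum()
            if s < best_sum:
                best, best_sum = (y, x), s
    return best
```

For a rectangle this minimizer coincides with the geometric center, so in practice the claim's definition mainly matters if the sub-region's shape is later generalized.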
Preferably, the step of obtaining the preferred value of the corresponding drone based on the euclidean distance includes:
the Euclidean distance between the center point of the preferred subregion and each edge in the real-time image to which the preferred subregion belongs comprises: recording Euclidean distance from a central point to the upper edge of the real-time image to which the preferred sub-region belongs, euclidean distance from the central point to the lower edge of the real-time image to which the preferred sub-region belongs, euclidean distance from the central point to the left edge of the real-time image to which the preferred sub-region belongs, and Euclidean distance from the central point to the right edge of the real-time image to which the preferred sub-region belongs as a first distance, a second distance, a third distance and a fourth distance respectively;
respectively obtaining the absolute value of the difference between the first distance and the second distance and the absolute value of the difference between the third distance and the fourth distance, and taking the sum of the absolute value of the difference between the first distance and the second distance and the absolute value of the difference between the third distance and the fourth distance as an accumulated distance difference;
and constructing an exponential function by taking the negative number of the accumulated distance difference value as an index, wherein the product result of the exponential function and the similarity of the preferred sub-region is the preferred value of the corresponding unmanned aerial vehicle.
The invention has the following beneficial effects: the embodiment of the invention acquires the real-time image of each unmanned aerial vehicle's detection area and computes the similarity between each sub-region of the real-time image and the target image to obtain the target region. While the target region is being obtained, the sliding step length is adjusted in real time according to the similarities computed so far, which avoids the poor analysis performance of a fixed sliding step length, improves the efficiency of obtaining sub-regions by sliding the target image, and preserves the completeness of the information in the sub-regions obtained. Further, a preferred sub-region is selected among the target regions of each real-time image, and a preferred value is computed from the Euclidean distances between the preferred sub-region's center point and the edges of the real-time image in order to determine the lead aircraft of the unmanned aerial vehicle cluster. Combining the preferred sub-region determined from gray-level information with the position of its center point avoids the inaccuracy of relying on a single indicator and improves the accuracy of lead-aircraft determination, so that formation control built on an accurate and reliable lead aircraft is more timely, more efficient, and more effective.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a distributed control method for an unmanned aerial vehicle based on machine vision according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means and effects adopted by the present invention to achieve its intended purpose, the structure, features, and effects of the machine-vision-based distributed control method for unmanned aerial vehicles are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The method is suitable for formation control of an unmanned aerial vehicle cluster: by determining the most suitable lead aircraft in the cluster and using it to direct the wingmen, the efficiency of formation control is improved. A specific scheme of the machine-vision-based distributed control method for unmanned aerial vehicles is described in detail below with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a distributed control method for a machine vision-based drone according to an embodiment of the present invention is shown, where the method includes the following steps:
and step S100, acquiring a target image of a target to be searched and a real-time image of each unmanned aerial vehicle detection area.
Because of their flexibility and unique advantages, unmanned aerial vehicles are increasingly applied in scenarios such as search-and-rescue reconnaissance and target monitoring. When several unmanned aerial vehicles carry out a task together, they must be organized into a formation. In the embodiment of the invention, the unmanned aerial vehicle cluster is first directed by a ground command center to search; during the search, the lead aircraft and the wingmen are determined from the real-time situation of each vehicle's search, and the wingmen are then controlled in real time according to the lead aircraft.
When determining the lead aircraft in the cluster, the judgment is first made from the area each unmanned aerial vehicle monitors in real time. The image of the target to be searched, acquired in advance, is recorded as the target image; the target searched by the unmanned aerial vehicles is the target to be searched. A real-time image of each unmanned aerial vehicle's detection area is then acquired. The real-time image is a ground image captured by the vehicle while flying; each vehicle corresponds to one real-time image at each sampling moment, and the size of the real-time image is larger than that of the target image. The real-time images acquired by all vehicles at the same sampling moment are analyzed, and the lead aircraft of the cluster is determined from the characteristics of these images.
Since the acquired target image and real-time images are RGB images, both are converted to grayscale to reduce the subsequent computation. Graying is a well-known operation, and an implementer may select any suitable method.
Step S200, sliding the target image in the real-time image to obtain a corresponding sub-region, and acquiring the similarity between the sub-region and the target image based on the gray scale information of each sub-region in the target image and the real-time image respectively.
After the real-time images corresponding to all unmanned aerial vehicles at the same sampling moment have been acquired, each real-time image is analyzed and its similarity to the target image is judged, so that the lead aircraft can be selected.
Specifically, taking an unmanned aerial vehicle Q as an example, the real-time image it acquires at the current sampling moment is recorded as real-time image W, and the target image is recorded as image E. To analyze the similarity between W and E, the target image E is slid over the real-time image W with an initial sliding step length D = 5. Since E is smaller than W, each slide of E over W yields a sub-region of the same size as E, and each pixel point of the sub-region has a corresponding pixel point in E. For subsequent distinction and analysis, the sub-region obtained by a slide of E over W is recorded as image R.
Further, take any pixel point q in the sub-region R. To compare and analyze the image characteristics around q more accurately, the neighborhood pixel points within a preset range centered on q are analyzed: for any pixel point in the sub-region, the neighborhood pixel points within a preset range centered on it are acquired, the pixel point at the corresponding position in the target image has neighborhood pixel points within the same preset range, and the absolute gray-value difference between each neighborhood pixel point and the pixel point itself is calculated as the corresponding gray difference.
The embodiment of the invention uses the 3×3 neighborhood centered on q, so the pixel point at the corresponding position in the target image E also has a 3×3 neighborhood. In the sub-region R the gray difference between q and each of its neighborhood pixel points is calculated, and likewise in the target image E, i.e., the absolute value of the gray difference between each neighborhood pixel point and the center pixel point:

Δ_j(q) = |g_j - g_q|

where Δ_j(q) represents the gray difference between q and its j-th neighborhood pixel point; g_j represents the gray value of the j-th neighborhood pixel point; and g_q represents the gray value of pixel point q.
Taking the gray differences between q and its neighborhood pixel points as the surrounding feature information of q, the similarity between the sub-region R and the target image E is calculated as (the original equation is rendered only as an image; the form below is reconstructed from the explanation that follows it):

S = (1/N) * Σ_{q=1}^{N} [ (1 - |G_E(q) - G_R(q)| / max(G_E(q), G_R(q))) * (1/8) * Σ_{j=1}^{8} exp(-|Δ_j^E(q) - Δ_j^R(q)|) ]

where S represents the similarity; G_E(q) represents the gray value of pixel point q in the target image E; G_R(q) represents the gray value of pixel point q in the sub-region R; max() is the maximum function; Δ_j^E(q) represents the gray difference between pixel point q in E and its j-th neighborhood pixel point; Δ_j^R(q) represents the gray difference between pixel point q in R and its j-th neighborhood pixel point; N represents the number of pixel points in the target image E, which equals the number of pixel points in the sub-region R; and e is the natural constant.
|G_E(q) - G_R(q)| is the absolute difference between the gray value of q in the sub-region R and in the target image E; the larger it is, the greater the difference between the gray-level information q carries in R and in E. Dividing by max(G_E(q), G_R(q)), the larger of q's two gray values, normalizes the gray-feature difference of q to the range 0 to 1, which facilitates the subsequent analysis; thus the larger the normalized difference, the smaller the similarity between R and E at q. |Δ_j^E(q) - Δ_j^R(q)| is the absolute difference between the gray difference of q with its j-th neighborhood pixel point in E and in R; the larger it is, the greater the difference in neighborhood gray-difference information between E and R at q, and the smaller the similarity should be. exp(-|Δ_j^E(q) - Δ_j^R(q)|) expresses this negative correlation and limits the value to the range 0 to 1: the larger its value, and the larger its sum over all neighborhood pixel points, the greater the similarity. The product of the two terms expresses the degree of similarity between E and R at pixel point q, and averaging over all pixel points gives the final similarity: the greater the similarity, the more similar the gray-level information of the pixel points in the sub-region R and the target image E.
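Under this reading of the similarity measure (the source's equations survive only as images, so the exact formula is a reconstruction), the computation can be sketched in Python; restricting to interior pixels with full 3×3 neighborhoods is an additional assumption about border handling:

```python
import numpy as np

def similarity(E, R):
    """Similarity between target image E and sub-region R (same shape):
    each interior pixel contributes a normalized gray-value agreement term
    times the average exp(-|difference|) over its eight 3x3-neighborhood
    gray differences; the per-pixel products are then averaged."""
    E = np.asarray(E, dtype=float)
    R = np.asarray(R, dtype=float)
    h, w = E.shape
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0)]
    total = 0.0
    for y in range(1, h - 1):           # interior pixels only (full 3x3 block)
        for x in range(1, w - 1):
            ge, gr = E[y, x], R[y, x]
            # Gray-value agreement, normalized by the larger gray value.
            gray_term = 1.0 - abs(ge - gr) / max(ge, gr, 1.0)
            neigh = 0.0
            for dy, dx in offsets:
                de = abs(E[y + dy, x + dx] - ge)   # gray difference in E
                dr = abs(R[y + dy, x + dx] - gr)   # gray difference in R
                neigh += np.exp(-abs(de - dr))
            total += gray_term * (neigh / len(offsets))
    return total / ((h - 2) * (w - 2))
```

An identical sub-region scores 1, and the score falls as either the per-pixel gray values or the local gray-difference structure diverge, matching the interpretation given above.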
The greater the similarity between a sub-region and the target image E, the more likely that sub-region is the region containing the target to be detected. Different sub-regions are obtained as the target image E slides over the real-time image W, and the similarity between each sub-region of W and E is obtained by the method used for the sub-region R above; the sub-region with the greatest similarity is the most likely location of the target to be detected.
And S300, adjusting the sliding step length according to the similarity of the current subregion and the previous subregion, and sliding the target image in the real-time image according to the adjusted sliding step length to obtain the target region.
As step S200 shows, each slide of the target image over the real-time image yields a sub-region, and the similarity of each sub-region can be computed by the method of step S200. When an unmanned aerial vehicle actually searches for a target, what matters most are search efficiency and search accuracy, so to improve efficiency during the search the sliding step length of the target image is adjusted adaptively.
Obtaining the difference value of the corresponding similarity of the current subregion and the previous subregion as a similarity difference characteristic value; taking the product of the absolute value of the similarity difference characteristic value and a preset initial sliding step length as a variable quantity; if the similarity difference characteristic value is larger than zero, subtracting the variable quantity from the initial sliding step length and rounding up to obtain an adjusted sliding step length; if the similarity difference characteristic value is less than zero, adding the variable quantity to the initial sliding step length and rounding down to obtain an adjusted sliding step length; if the similarity difference characteristic value is equal to 0, the initial sliding step length is not adjusted.
Specifically, assuming that the subregion R is a subregion obtained by first sliding the target image on the real-time image, marking the subregion obtained by second sliding the target image on the real-time image as R1, and the subregion R1 obtained by second sliding is a current subregion, and the subregion R1 respectively correspond to a similarity, adjusting the sliding step length according to the similarities of the subregions obtained by two times of sliding, and calculating the similarity difference characteristic value:
wherein the content of the first and second substances,representing similarity difference characteristic values;representing the corresponding similarity of the sub-region R1;representing the corresponding similarity of the sub-regions R.
To be provided withAs the amount of change,for the step length of the slide before adjustment, i.e.,The characteristic value of the similarity difference is represented,represents taking the absolute value; when the value of the similarity difference characteristic value F is greater than zero, it indicates that the current sliding is a sliding that gradually approaches the target area, so in order to make the matching precision higher, the sliding step length is adaptively reduced, and the sliding step length is adjusted to be:wherein, in the step (A),for the step length of the slide before adjustment, i.e.,In order to adjust the sliding step length after the adjustment,the feature value of the similarity difference is represented,it is indicated that the absolute value is taken,indicating that the rounding up calculation is performed on the value in parentheses.
When the similarity difference characteristic value $F$ is less than zero, the current sliding is gradually moving away from the target area; therefore, to reduce the amount of calculation and improve matching efficiency, the sliding step length is adaptively increased and adjusted to:

$$l' = \left\lfloor l + |F| \cdot l \right\rfloor$$

wherein $l$ is the sliding step length before adjustment, $l'$ is the adjusted sliding step length, $F$ denotes the similarity difference characteristic value, $|\cdot|$ denotes taking the absolute value, and $\lfloor \cdot \rfloor$ denotes rounding down the value in brackets.
When the similarity difference characteristic value $F$ is zero, the sliding step length is unchanged. The template image is thus slid over the real-time image with the adaptively adjusted step length to obtain different sub-regions, and the similarity corresponding to each sub-region is obtained with the same method as in step S200. The greater the similarity, the more likely the sub-region is the region containing the target to be searched. A similarity threshold is set to distinguish interference regions from target regions in the real-time image: when the similarity of a sub-region obtained after the step-length adjustment is greater than the similarity threshold, the sub-region is marked as a target region; when it is not greater than the threshold, the sub-region is an interference region.
Preferably, in the embodiment of the present invention, the similarity threshold is set to be 0.8, and each sub-region in the real-time image acquired by the unmanned aerial vehicle at different sampling times is analyzed to obtain the target region therein.
Step S400, selecting the target area with the maximum similarity in the real-time image as the preferred sub-region, acquiring the Euclidean distances between the center point of the preferred sub-region and each edge of the real-time image to which it belongs, acquiring the preferred value of the corresponding unmanned aerial vehicle based on the Euclidean distances, and selecting the unmanned aerial vehicle with the largest preferred value as the lead aircraft to perform formation control.
Based on the method of step S300, the real-time image acquired by each unmanned aerial vehicle at each sampling time is analyzed to obtain the target areas therein, and the target area with the maximum similarity to the target image is obtained for each real-time image. The real-time images at the sampling times when the unmanned aerial vehicle cluster begins searching may contain no target area, so subsequent analysis is performed only on the real-time images in which a target area exists. Assuming there are N unmanned aerial vehicles at this time, the same sampling time corresponds to N real-time images, each containing a target area; accordingly, the target area with the maximum similarity in each real-time image is recorded as the preferred sub-region. The center point of the preferred sub-region in each real-time image is then acquired: it is the pixel point for which the sum of the Euclidean distances to all edge pixel points of the preferred sub-region is minimal; edge pixel points and Euclidean distances are obtained by known prior techniques and are not described again. The Euclidean distances between the center point of the preferred sub-region and each edge of the real-time image to which it belongs, in the horizontal and vertical directions, are then obtained: the Euclidean distances from the center point to the upper edge, the lower edge, the left edge and the right edge of the real-time image are recorded as the first distance, the second distance, the third distance and the fourth distance, respectively. The absolute value of the difference between the first and second distances and the absolute value of the difference between the third and fourth distances are obtained, and their sum is taken as the accumulated distance difference. An exponential function is constructed with the negative of the accumulated distance difference as the exponent; the product of this exponential function and the similarity of the preferred sub-region is the preferred value of the corresponding unmanned aerial vehicle.
From the preferred sub-region in each real-time image, the preferred value of each unmanned aerial vehicle as a candidate lead aircraft is obtained, calculated as:

$$Y = S^{*} \cdot e^{-\left(\left|d_{1}-d_{2}\right|+\left|d_{3}-d_{4}\right|\right)}$$

wherein $Y$ denotes the preferred value; $S^{*}$ denotes the maximum similarity corresponding to the preferred sub-region; $d_{1}$ denotes the Euclidean distance from the center point of the preferred sub-region to the upper edge of the real-time image, namely the first distance; $d_{2}$ denotes the Euclidean distance from the center point to the lower edge, namely the second distance; $d_{3}$ denotes the Euclidean distance from the center point to the left edge, namely the third distance; $d_{4}$ denotes the Euclidean distance from the center point to the right edge, namely the fourth distance; $e$ is a natural constant; and $|\cdot|$ denotes taking the absolute value.
$|d_{1}-d_{2}|$ is the absolute difference between the Euclidean distances from the center point of the preferred sub-region to the upper and lower edges of the real-time image; the smaller it is, the closer the center point is to the vertical center of the real-time image. Likewise, $|d_{3}-d_{4}|$ is the absolute difference between the distances from the center point to the left and right edges; the smaller it is, the closer the center point is to the horizontal center. Therefore, the larger the value of $e^{-(|d_{1}-d_{2}|+|d_{3}-d_{4}|)}$, the smaller the accumulated distance difference, the closer the center point is to the center of the real-time image, and the better the detection and tracking effect of the corresponding unmanned aerial vehicle. Moreover, the greater the similarity between the preferred sub-region and the target image, the better the detection effect of the unmanned aerial vehicle at that moment. Hence, the greater the preferred value, the more likely the corresponding unmanned aerial vehicle is to serve as the lead aircraft for formation command, and the better its effect in that role.
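The preferred-value computation just described can be sketched as follows. This is a minimal illustration under the formula above; the function name and argument order are assumptions.

```python
import math

def preferred_value(similarity, d_top, d_bottom, d_left, d_right):
    """Preferred value of a drone from its best-matching sub-region.

    The similarity of the preferred sub-region is scaled by an exponential
    penalty on the accumulated distance difference, which measures how far
    the sub-region's center point is from the center of the real-time image.
    """
    # accumulated distance difference: vertical offset plus horizontal offset
    acc = abs(d_top - d_bottom) + abs(d_left - d_right)
    return similarity * math.exp(-acc)
```

A perfectly centered target (equal distances to opposite edges) keeps the full similarity, since the exponential factor equals 1; any off-center shift strictly lowers the preferred value.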
By analogy, the preferred value of the unmanned aerial vehicle corresponding to each real-time image is obtained from the preferred sub-region of that image; the unmanned aerial vehicle with the maximum preferred value is selected as the lead aircraft for formation command, and the other unmanned aerial vehicles serve as wing aircraft under the control of the lead aircraft.
Further, in traditional formation control, if the lead aircraft fails, the formation has poor robustness because the following wing aircraft have no independent decision-making capability, and all wing aircraft following the lead aircraft are paralyzed; in addition, in multi-level formations, lower-level unmanned aerial vehicles accumulate larger position deviations through the iterative transmission of errors. Therefore, in this embodiment of the present invention, each wing aircraft is analyzed and auxiliary wing aircraft are selected from among all wing aircraft; when the lead aircraft controls the wing aircraft other than the auxiliary wing aircraft, the auxiliary wing aircraft participate in the position decision, so that when the lead aircraft fails or encounters a similar situation, a new lead aircraft can be reselected from among the auxiliary wing aircraft.
Because a different lead aircraft corresponding to the maximum preferred value may be obtained at each sampling time, if the lead aircraft corresponding to the maximum preferred value at the next moment is inconsistent with the lead aircraft determined at the current sampling time, whether that preferred value is greater than a preset change threshold is judged; if so, the lead aircraft is changed in real time; if not, the lead aircraft at the current moment continues formation control until the preferred value exceeds the change threshold or the lead aircraft fails, at which point the lead aircraft is changed. Preferably, the change threshold is set to 0.95 in this embodiment of the present invention.
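The change-threshold rule above can be sketched as a small decision helper. The function name and the boolean failure flag are illustrative assumptions; the 0.95 default follows the embodiment's preferred change threshold.

```python
def maybe_change_lead(current_lead, best_id, best_value,
                      threshold=0.95, lead_failed=False):
    """Decide whether to hand lead-aircraft duty to a new candidate.

    The lead changes when the current lead has failed, or when a different
    drone's maximum preferred value exceeds the change threshold; otherwise
    the current lead keeps commanding the formation.
    """
    if lead_failed:
        return best_id
    if best_id != current_lead and best_value > threshold:
        return best_id
    return current_lead
```

For example, a candidate with preferred value 0.96 replaces the lead, while one at 0.90 does not unless the current lead fails.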
Specifically, assume that unmanned aerial vehicle A is the lead aircraft, unmanned aerial vehicles B and U are auxiliary wing aircraft, and the other unmanned aerial vehicles are wing aircraft. If the required formation in this embodiment is a horizontal linear formation, then during formation control, information is transmitted to the auxiliary wing aircraft and their three-dimensional channels are controlled. Taking the position of lead aircraft A as the origin of the three-dimensional coordinate axes, the initial coordinates of auxiliary wing aircraft B and U are obtained as $(x_B, y_B, z_B)$ and $(x_U, y_U, z_U)$, respectively. If the preset formation interval is K, that is, the interval between adjacent unmanned aerial vehicles is K, control information is transmitted to auxiliary wing aircraft B so that its speed is consistent with lead aircraft A and its position is adjusted to $(K, 0, 0)$ or $(-K, 0, 0)$. Based on the same control method as for auxiliary wing aircraft B, information is transmitted to auxiliary wing aircraft U so that its speed coincides with lead aircraft A and its position is adjusted such that the auxiliary wing aircraft U and B are symmetrically distributed with respect to lead aircraft A; that is, if auxiliary wing aircraft B is at position $(K, 0, 0)$, auxiliary wing aircraft U is at $(-K, 0, 0)$. By analogy, after the formation control of the auxiliary wing aircraft positions is completed, the positions of the other wing aircraft are coordinate-controlled.
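The symmetric position assignment for the horizontal linear formation can be sketched, in the lead aircraft's coordinate frame, as follows. The function name and the returned list layout are illustrative assumptions.

```python
def line_formation_positions(n_side, k):
    """Target coordinates for a horizontal linear formation.

    The lead aircraft sits at the origin of its own frame; wing aircraft are
    placed symmetrically on each side at (k, 0, 0), (-k, 0, 0), (2k, 0, 0),
    (-2k, 0, 0), ... so adjacent drones are spaced by the preset interval k.
    """
    positions = []
    for i in range(1, n_side + 1):
        positions.append((i * k, 0.0, 0.0))    # right side of the lead
        positions.append((-i * k, 0.0, 0.0))   # mirrored left side
    return positions
```

With two drones per side and interval 5, this yields the pairs (5, 0, 0) / (-5, 0, 0) and (10, 0, 0) / (-10, 0, 0), matching the symmetric distribution of B and U about A described above.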
Three-dimensional coordinate vectors of all unmanned aerial vehicles are acquired in the three-dimensional coordinate system, and the corresponding spatial distance and angle characteristic value are acquired from the three-dimensional coordinate vectors between every two unmanned aerial vehicles. When lead aircraft A fails, auxiliary wing aircraft B and auxiliary wing aircraft U are taken as candidate lead aircraft, and the confidence of each as the new lead aircraft is calculated. Taking auxiliary wing aircraft B as an example, when lead aircraft A fails, the confidence that B becomes the new lead aircraft is calculated as:

$$C = \frac{1}{2M}\sum_{i=1}^{M}\left(e^{-\left|d_{iB}-d_{iA}\right|}+e^{-\left|\theta_{iB}-\theta_{iA}\right|}\right)$$

wherein $C$ denotes the confidence; $M$ denotes the number of unmanned aerial vehicles excluding lead aircraft A and auxiliary wing aircraft B; $d_{iB}$ denotes the spatial distance between the $i$-th of the $M$ unmanned aerial vehicles and auxiliary wing aircraft B; $d_{iA}$ denotes the spatial distance between the $i$-th unmanned aerial vehicle and lead aircraft A; $e$ is a natural constant; $\theta_{iB}$ denotes the angle characteristic value between the $i$-th unmanned aerial vehicle and auxiliary wing aircraft B; and $\theta_{iA}$ denotes the angle characteristic value between the $i$-th unmanned aerial vehicle and lead aircraft A.
The method for acquiring the spatial distance is a known technique and is not described in detail. The angle characteristic value is obtained in the three-dimensional coordinate axes with lead aircraft A as the origin: a three-dimensional coordinate vector is obtained from the position of each unmanned aerial vehicle, and the included angle between the coordinate vectors of two unmanned aerial vehicles is the corresponding angle characteristic value; the formula for the included angle between three-dimensional vectors is a known technique and is not repeated. $|d_{iB}-d_{iA}|$ is the absolute difference between the spatial distances from the $i$-th unmanned aerial vehicle to auxiliary wing aircraft B and to lead aircraft A; the smaller this value, the smaller the difference between those distances, and the greater the possibility of auxiliary wing aircraft B serving as the new lead aircraft. Similarly, $|\theta_{iB}-\theta_{iA}|$ is the absolute difference between the angle characteristic values of the $i$-th unmanned aerial vehicle with respect to auxiliary wing aircraft B and to lead aircraft A; the smaller this value, the closer those angle characteristic values, and the greater the possibility that B is the new lead aircraft. A negatively correlated exponential function with the natural constant $e$ as the base is used for normalization, ensuring that the absolute distance difference and the absolute angle-characteristic-value difference are negatively correlated with the confidence, which facilitates subsequent analysis and calculation. Obtaining the confidence of auxiliary wing aircraft B as the new lead aircraft by averaging the spatial distance information and the angle characteristic value information makes the result more reliable and convincing.
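The confidence computation can be sketched as follows, a minimal illustration assuming the per-drone distances and angle characteristic values have already been computed and are supplied as parallel lists.

```python
import math

def confidence(d_to_b, d_to_a, ang_to_b, ang_to_a):
    """Confidence that an auxiliary wing aircraft B can replace lead aircraft A.

    For each of the M remaining drones, e^{-|distance difference|} and
    e^{-|angle difference|} are computed; the confidence is the average of
    all 2M terms, so it lies in (0, 1] and peaks when B's geometry relative
    to the fleet exactly mirrors A's.
    """
    m = len(d_to_b)
    total = sum(math.exp(-abs(db - da)) for db, da in zip(d_to_b, d_to_a))
    total += sum(math.exp(-abs(tb - ta)) for tb, ta in zip(ang_to_b, ang_to_a))
    return total / (2 * m)
```

If B sits exactly as far from every drone as A does, and at the same angles, every exponential term is 1 and the confidence reaches its maximum of 1.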
With the same method used to obtain the confidence of auxiliary wing aircraft B as the new lead aircraft, the confidences of the other auxiliary wing aircraft are obtained; when lead aircraft A fails, the auxiliary wing aircraft with the highest confidence becomes the new lead aircraft, and the unmanned aerial vehicles in the cluster are controlled by the new lead aircraft.
The method for selecting auxiliary wing aircraft in this embodiment is as follows: one-dimensional k-means clustering is performed on the preferred values of all unmanned aerial vehicles except the lead aircraft, with the clustering distance being the absolute difference of the preferred values between unmanned aerial vehicles. Setting the number of clusters k = 2, all the unmanned aerial vehicles are clustered into two categories; the category containing the largest number of unmanned aerial vehicles is selected as the target category, and all unmanned aerial vehicles that belong to the target category and lie within the communication range of the lead aircraft are auxiliary wing aircraft. The communication range is determined by the configuration of the unmanned aerial vehicles and is known, so the auxiliary wing aircraft corresponding to the lead aircraft can be obtained.
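The selection step can be sketched with a small one-dimensional k-means (k = 2) over the preferred values. This is an illustrative sketch, not the patent's implementation; the dict-based inputs, the min/max center initialization, and the iteration cap are assumptions.

```python
def select_auxiliary_wings(preferred, comm_range, dist_to_lead):
    """Pick auxiliary wing aircraft from the wing aircrafts' preferred values.

    preferred:    dict drone-id -> preferred value (lead aircraft excluded)
    comm_range:   lead aircraft's communication range
    dist_to_lead: dict drone-id -> spatial distance to the lead aircraft
    """
    ids = sorted(preferred)
    # initialize the two cluster centers at the extreme preferred values
    c = [min(preferred.values()), max(preferred.values())]
    for _ in range(100):
        clusters = {0: [], 1: []}
        for i in ids:
            j = 0 if abs(preferred[i] - c[0]) <= abs(preferred[i] - c[1]) else 1
            clusters[j].append(i)
        new_c = [sum(preferred[i] for i in clusters[j]) / len(clusters[j])
                 if clusters[j] else c[j] for j in (0, 1)]
        if new_c == c:          # converged
            break
        c = new_c
    # target category: the cluster with the most wing aircraft
    target = max(clusters.values(), key=len)
    # keep only members inside the lead aircraft's communication range
    return [i for i in target if dist_to_lead[i] <= comm_range]
```

For instance, with preferred values {0.9, 0.85, 0.2} the two high-value drones form the larger cluster, and only those within communication range survive the final filter.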
In summary, in this embodiment of the present invention, a target image of the target to be searched and a real-time image of each unmanned aerial vehicle's detection area are obtained; the target image is slid over the real-time image to obtain corresponding sub-regions, and the similarity between each sub-region and the target image is acquired based on the gray information of the target image and of each sub-region in the real-time image; the sliding step length is adjusted according to the similarities of the current and previous sub-regions, and the target image is slid over the real-time image with the adjusted step length to obtain the target regions; the target region with the maximum similarity in each real-time image is selected as the preferred sub-region, the Euclidean distances between its center point and each edge of the real-time image to which it belongs are acquired, the preferred value of the corresponding unmanned aerial vehicle is obtained from these distances, and the unmanned aerial vehicle with the largest preferred value is selected as the lead aircraft for formation control. This improves the reliability of lead-aircraft selection; formation control of the unmanned aerial vehicle cluster is based on a more reliable and accurate lead aircraft, ensuring the control effect and efficiency. In addition, in this embodiment a three-dimensional coordinate system is established at the position of the lead aircraft, so that the lead aircraft can determine the position of each unmanned aerial vehicle when controlling the formation; auxiliary wing aircraft are selected according to the position of the lead aircraft and the preferred value of each unmanned aerial vehicle, and when the lead aircraft fails, the most suitable auxiliary wing aircraft can be found timely and accurately as the new lead aircraft for formation control by obtaining the confidence of each auxiliary wing aircraft, thereby improving the robustness of the overall control and avoiding control paralysis of the unmanned aerial vehicle cluster due to special situations such as unmanned aerial vehicle failure.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that are within the spirit of the present invention are intended to be included therein.
Claims (10)
1. A distributed control method for an unmanned aerial vehicle based on machine vision is characterized by comprising the following steps:
acquiring a target image of a target to be searched and a real-time image of each unmanned aerial vehicle detection area;
sliding a target image in a real-time image to obtain corresponding sub-regions, and acquiring the similarity between the sub-regions and the target image based on the gray information of each sub-region in the target image and the gray information of each sub-region in the real-time image;
adjusting a sliding step length according to the similarity of the current sub-area and the previous sub-area, and sliding the target image in the real-time image according to the adjusted sliding step length to obtain a target area;
selecting the target area with the maximum similarity in a real-time image as a preferred sub-region, acquiring the Euclidean distance between the center point of the preferred sub-region and each edge in the real-time image to which the preferred sub-region belongs, acquiring a preferred value of the corresponding unmanned aerial vehicle based on the Euclidean distances, and selecting the unmanned aerial vehicle with the maximum preferred value as a lead aircraft to perform formation control.
2. The distributed control method for unmanned aerial vehicles based on machine vision as claimed in claim 1, wherein the step of selecting the unmanned aerial vehicle with the largest preferred value as the lead aircraft for formation control comprises:
constructing a three-dimensional coordinate system by taking the position of the lead aircraft as an origin; acquiring the position coordinates of each unmanned aerial vehicle except the lead aircraft based on the three-dimensional coordinate system;
and controlling and adjusting the position coordinates of the unmanned aerial vehicle based on the preset formation form and the preset interval.
3. The distributed control method for unmanned aerial vehicles based on machine vision according to claim 2, wherein after the step of selecting the unmanned aerial vehicle with the largest preferred value as the lead aircraft for formation control, the method further comprises:
obtaining auxiliary wing aircraft based on the position of the lead aircraft, obtaining the confidence of each auxiliary wing aircraft when the lead aircraft fails, and performing formation control with the auxiliary wing aircraft with the highest confidence as a new lead aircraft.
4. The distributed control method for unmanned aerial vehicles based on machine vision according to claim 3, wherein the step of obtaining auxiliary wing aircraft based on the position of the lead aircraft comprises:
recording the unmanned aerial vehicles in the unmanned aerial vehicle cluster other than the lead aircraft as wing aircraft; clustering based on the preferred value corresponding to each said wing aircraft to obtain at least two categories;
taking the category with the largest number of wing aircraft as the target category, wherein the wing aircraft belonging to the target category and within the communication range of the position of the lead aircraft are auxiliary wing aircraft.
5. The distributed control method for unmanned aerial vehicles based on machine vision according to claim 4, wherein the step of obtaining the confidence of each auxiliary wing aircraft comprises:
acquiring three-dimensional coordinate vectors of all unmanned aerial vehicles in the unmanned aerial vehicle cluster according to the three-dimensional coordinate system, and acquiring corresponding spatial distance and angle characteristic values according to the three-dimensional coordinate vectors between every two unmanned aerial vehicles;
the confidence coefficient calculation method of each auxiliary long machine comprises the following steps:
wherein the content of the first and second substances,representing a confidence level;representing the number of drones, with the exception of the long craft a and the auxiliary wing craft B;represents the second of M dronesThe spatial distance between the individual drone and the auxiliary bureaucratic plane B;represents the second of M dronesThe spatial distance between each unmanned aerial vehicle and the long aircraft A;is a natural constant;represents the second of M dronesAn angular characteristic value between the individual drone and an auxiliary wing plane B;represents the second of M dronesThe angle characteristic value between each unmanned aerial vehicle and the long aircraft A.
6. The distributed control method for unmanned aerial vehicle based on machine vision according to claim 1, wherein the step of obtaining the similarity between each sub-region and the target image based on the gray scale information of each sub-region in the target image and the real-time image respectively comprises:
for any pixel point in the sub-region, acquiring a neighborhood pixel point in a preset range with the pixel point as a center, wherein the pixel point has neighborhood pixel points in the same preset range at corresponding positions in a target image, and respectively calculating a gray difference absolute value between each neighborhood pixel point and the pixel point to serve as a corresponding gray difference;
and obtaining the similarity between the sub-region and the target image according to the gray difference as follows:
$$S = \frac{1}{n}\sum_{j=1}^{n} e^{-\left(\left|g_{E}^{j}-g_{R}^{j}\right| + \max_{a}\left|\Delta_{E}^{j,a}-\Delta_{R}^{j,a}\right|\right)}$$

wherein $S$ denotes the similarity; $g_{E}^{j}$ denotes the gray value of the $j$-th pixel point in the target image E; $g_{R}^{j}$ denotes the gray value of the $j$-th pixel point in the sub-region R; $\max$ denotes the maximum-value function; $\Delta_{E}^{j,a}$ denotes the gray difference between the $j$-th pixel point in the target image E and its $a$-th neighborhood pixel point; $\Delta_{R}^{j,a}$ denotes the gray difference between the $j$-th pixel point in the sub-region R and its $a$-th neighborhood pixel point; $n$ denotes the number of pixel points in the target image E, which is consistent with the number of pixel points in the sub-region R; and $e$ denotes a natural constant.
7. The distributed control method for unmanned aerial vehicle based on machine vision according to claim 1, wherein the step of adjusting the sliding step length according to the similarity between the current sub-area and the previous sub-area comprises:
acquiring a difference value of the similarity between the current sub-region and the previous sub-region as a similarity difference characteristic value; taking the product of the absolute value of the similarity difference characteristic value and a preset initial sliding step length as a variable quantity;
if the similarity difference characteristic value is larger than zero, subtracting the variable quantity from the initial sliding step length and rounding up to obtain an adjusted sliding step length;
if the similarity difference characteristic value is smaller than zero, adding the variable quantity to the initial sliding step length and rounding down to obtain an adjusted sliding step length;
if the similarity difference characteristic value is equal to zero, the initial sliding step length is not adjusted.
8. The distributed control method for unmanned aerial vehicle based on machine vision according to claim 1, wherein the step of obtaining the target area comprises:
and sliding the target image in the real-time image according to the adjusted sliding step length to obtain corresponding sub-regions, and obtaining the similarity of each sub-region, wherein the sub-region with the similarity larger than a preset similarity threshold is the target region.
9. The distributed control method for unmanned aerial vehicle based on machine vision according to claim 1, wherein the step of obtaining the central point of the preferred sub-area comprises:
and acquiring edge pixel points at the edge of the preferred subregion, and calculating the sum of Euclidean distances between each pixel point and all edge pixel points in the preferred subregion, wherein the pixel point corresponding to the minimum sum of the Euclidean distances is a central point.
10. The distributed control method for unmanned aerial vehicles based on machine vision as claimed in claim 1, wherein the step of obtaining the preferred value of corresponding unmanned aerial vehicle based on the euclidean distance comprises:
the Euclidean distance between the center point of the preferred subregion and each edge in the real-time image to which the preferred subregion belongs comprises: recording Euclidean distance from a central point to the upper edge of the real-time image to which the preferred sub-region belongs, euclidean distance from the central point to the lower edge of the real-time image to which the preferred sub-region belongs, euclidean distance from the central point to the left edge of the real-time image to which the preferred sub-region belongs, and Euclidean distance from the central point to the right edge of the real-time image to which the preferred sub-region belongs as a first distance, a second distance, a third distance and a fourth distance respectively;
respectively obtaining the absolute value of the difference between the first distance and the second distance and the absolute value of the difference between the third distance and the fourth distance, and taking the sum of the absolute value of the difference between the first distance and the second distance and the absolute value of the difference between the third distance and the fourth distance as an accumulated distance difference;
and constructing an exponential function by taking the negative number of the accumulated distance difference value as an exponent, wherein the product result of the exponential function and the similarity of the preferred sub-region is the preferred value of the corresponding unmanned aerial vehicle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211560263.9A CN115576358B (en) | 2022-12-07 | 2022-12-07 | Unmanned aerial vehicle distributed control method based on machine vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115576358A true CN115576358A (en) | 2023-01-06 |
CN115576358B CN115576358B (en) | 2023-03-10 |
Family
ID=84590177
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211560263.9A Active CN115576358B (en) | 2022-12-07 | 2022-12-07 | Unmanned aerial vehicle distributed control method based on machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115576358B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107491731A (en) * | 2017-07-17 | 2017-12-19 | 南京航空航天大学 | Ground moving target detection and recognition method for precision strike |
CN108021868A (en) * | 2017-11-06 | 2018-05-11 | 南京航空航天大学 | Fast and highly reliable circular target detection and recognition method |
CN109949229A (en) * | 2019-03-01 | 2019-06-28 | 北京航空航天大学 | Cooperative target detection method for multi-platform, multi-view systems |
CN110262553A (en) * | 2019-06-27 | 2019-09-20 | 西北工业大学 | Fixed-wing UAV formation flight apparatus and method based on position information |
CN111412788A (en) * | 2020-03-26 | 2020-07-14 | 湖南科技大学 | Suspected target detection system for a minefield |
CN113658085A (en) * | 2021-10-20 | 2021-11-16 | 北京优幕科技有限责任公司 | Image processing method and device |
CN113867393A (en) * | 2021-10-19 | 2021-12-31 | 中国人民解放军军事科学院国防科技创新研究院 | Formation reconfiguration method for unmanned aerial vehicles with controllable flight paths |
CN113989308A (en) * | 2021-11-04 | 2022-01-28 | 浙江大学 | Polygonal target segmentation method based on Hough transform and template matching |
CN114967742A (en) * | 2022-05-27 | 2022-08-30 | 北京理工大学 | Goose-flock-inspired formation reconfiguration method for multiple fixed-wing unmanned aerial vehicles |
CN115374652A (en) * | 2022-10-21 | 2022-11-22 | 中国人民解放军国防科技大学 | Evidence-reasoning-based test and evaluation method for UAV swarm cooperative obstacle avoidance capability |
2022
- 2022-12-07: CN application CN202211560263.9A granted as patent CN115576358B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN115576358B (en) | 2023-03-10 |
Similar Documents
Publication | Title
---|---
CN108958282B (en) | Three-dimensional space path planning method based on dynamic spherical window
US7054724B2 | Behavior control apparatus and method
CN115147437B (en) | Intelligent robot guiding machining method and system
CN109483507B (en) | Indoor visual positioning method for walking of multiple wheeled robots
CN112927264B (en) | Unmanned aerial vehicle tracking shooting system and RGBD tracking method thereof
CN109328615B (en) | Lawn boundary recognition method, control method of mowing device and mowing device
CN115880674B (en) | Obstacle avoidance steering correction method based on unmanned mine car
CN113313701B (en) | Electric vehicle charging port two-stage visual detection positioning method based on shape prior
Himri et al. | Semantic SLAM for an AUV using object recognition from point clouds
CN113902862B (en) | Visual SLAM loop verification system based on consistency cluster
CN115576358B (en) | Unmanned aerial vehicle distributed control method based on machine vision
CN106771329B (en) | Method for detecting running speed of unmanned aerial vehicle in deceleration process
CN113469195B (en) | Target identification method based on self-adaptive color quick point feature histogram
CN112529891B (en) | Method and device for identifying hollow holes and detecting contours based on point cloud and storage medium
CN117237902B (en) | Robot character recognition system based on deep learning
CN111461194B (en) | Point cloud processing method and device, driving control method, electronic device and vehicle
Wang et al. | Research on vehicle detection based on faster R-CNN for UAV images
CN113316080B (en) | Indoor positioning method based on Wi-Fi and image fusion fingerprint
CN109803234A (en) | Unsupervised fusion and positioning method based on weight importance constraint
CN115511853A (en) | Remote sensing ship detection and identification method based on direction-variable features
CN115615436A (en) | Multi-machine repositioning unmanned aerial vehicle positioning method
CN111814662B (en) | Visible light image airplane rapid detection method based on miniature convolutional neural network
CN115239902A (en) | Method, device and equipment for establishing surrounding map of mobile equipment and storage medium
CN117891258B (en) | Intelligent planning method for track welding path
CN112766037B (en) | 3D point cloud target identification and positioning method based on maximum likelihood estimation
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant