CN112967290A - Method for automatically identifying enemies of target aircraft in air by unmanned aerial vehicle - Google Patents
- Publication number
- CN112967290A (application CN202110195448.3A / CN202110195448A)
- Authority
- CN
- China
- Prior art keywords
- target image
- image
- target
- moment
- contour
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T 7/11 — Image analysis; Segmentation; Region-based segmentation
- G06N 3/08 — Neural networks; Learning methods
- G06T 5/40 — Image enhancement or restoration using histogram techniques
- G06T 7/136 — Segmentation; Edge detection involving thresholding
- G06T 7/194 — Segmentation involving foreground-background segmentation
- G06T 2207/20081 — Training; Learning
- G06T 2207/20084 — Artificial neural networks [ANN]
Abstract
The invention discloses a method by which an unmanned aerial vehicle automatically identifies an aerial target aircraft as friend or foe. The image transmitted to the system by the UAV sensor is grayed, segmented and subjected to target detection to obtain a target image; the area moment and contour moment of the target image are calculated and used as the input of neural-network classification and identification; the result is compared with the corresponding data of the six aircraft types in a given target-aircraft database to judge whether an aircraft defined in the database is present in the target image and, if so, whether it is friend or foe, thereby giving a qualitative description of the aircraft in the target image.
Description
Technical Field
The invention discloses a method by which an unmanned aerial vehicle automatically identifies aerial target aircraft as friend or foe. It can be used to judge the friend-or-foe status of target aircraft automatically during autonomous combat or manned-unmanned cooperative combat, aims to enhance the military combat effectiveness of the UAV, and belongs to the technical field of aircraft computer vision.
Background
At present the world strategic landscape is changing profoundly. Under conditions of nuclear deterrence, informatized local wars against a background of great-power rivalry have become the norm, and recent local wars show that unmanned aerial vehicles have become one of the main aerial forces of informatized local war, with a great and even decisive influence on its outcome. Although drones play an increasingly important role in winning local wars, their operation has so far required human involvement, and target identification is performed manually by drone operators or sensor operators. With the development of pattern-recognition technology, and of computer-vision technology in particular, various advanced technologies have been introduced into image transmission, image processing and image identification systems. Research on the automatic friend-or-foe identification of aerial target aircraft by UAVs strengthens the UAV's battlefield situational awareness during autonomous or cooperative combat, as well as the timeliness and accuracy of its identification information.
At present unmanned aerial vehicles are widely applied across many industries, and domestic research on interpreting important military targets such as ground aircraft, bridges and ports in reconnaissance images has produced numerous results. For reasons of confidentiality, however, no reports on automatic friend-or-foe identification of target aircraft by UAVs have been published.
Disclosure of Invention
The invention aims to provide a method for the automatic friend-or-foe identification of aerial target aircraft by an unmanned aerial vehicle, which enhances the UAV's capability to automatically identify aerial target aircraft as friend or foe and improves the military effectiveness of its autonomous or cooperative operations.
The invention relates to a method for automatically identifying an aerial target airplane friend or foe by an unmanned aerial vehicle, which adopts the following technical scheme:
1) segmenting an input image, and determining a target image area and a contour in the image;
2) calculating a target image area moment and a contour moment;
3) inputting the target image area moment and the contour moment into a neural network for learning, classifying and identifying, and respectively comparing with six types of target airplane data in a database:
if the data error with respect to a certain aircraft type is smaller than a set value, the target image is that aircraft;
if, after the set maximum number of learning steps, the error with respect to every one of the six aircraft types still exceeds the set value, the target image is not an aircraft in the database, and the system either continues identifying subsequent images or exits identification.
The invention relates to a method for automatically identifying an aerial target airplane friend or foe by an unmanned aerial vehicle, which comprises the following specific steps:
step 1, segmenting an input image, and determining a target image area and a contour
The input image is preprocessed and treated as a combination of two region types with different gray levels: a target region and a background region. The input image is converted to a grayscale image; a gray-level histogram is computed from it; a reasonable threshold is selected from the histogram information; the gray value of each pixel in the image is compared with the threshold; and the target image region and contour are obtained from the comparison result;
let the gray values of the image be the levels r0, r1, …, rk, and let the number of pixels with gray value ri be ni (i = 0, …, k); the total number of pixels N is then
N = n0 + n1 + … + nk;
the probability P(ri) of gray value ri is:
P(ri) = (number of pixels with gray value ri) / (total number of pixels of the image) = ni/N, i = 0, …, k;
plotting the pairs (ri, P(ri)) as a curve gives the gray-level histogram of the image;
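As an illustrative sketch (not part of the patent text), the probability computation P(ri) = ni/N can be written as follows for an image given as a 2-D list of gray values; the function name and the 8-bit assumption are ours:

```python
def gray_histogram(gray, k=255):
    """Return P(r_i) = n_i / N for gray levels i = 0..k.

    `gray` is a 2-D list of integer gray values; k = 255 assumes an
    8-bit image (an assumption, not stated in the patent).
    """
    n_total = sum(len(row) for row in gray)
    counts = [0] * (k + 1)          # n_i: pixel count at each gray level
    for row in gray:
        for v in row:
            counts[v] += 1
    return [c / n_total for c in counts]
```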
calculating the lowest value between two peaks according to the gray level histogram, wherein the value is the threshold value of the image; segmenting the image by using the threshold value, and separating the target image from the sky background image to obtain a target image area and a target image outline;
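A minimal sketch of the valley-point thresholding and segmentation described above; the names and the peak-selection heuristic are our assumptions, since the patent does not specify how the two peaks are located:

```python
def valley_threshold(hist):
    """Threshold = gray level at the lowest point between the two main peaks."""
    # Local maxima of the histogram are peak candidates
    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]]
    # Keep the two tallest peaks (sky-background mode and target mode)
    p1, p2 = sorted(sorted(peaks, key=lambda i: hist[i])[-2:])
    # The lowest value between the two peaks is the threshold
    return min(range(p1, p2 + 1), key=lambda i: hist[i])

def segment(gray, threshold):
    """Binarize: 1 = target pixel, 0 = background (darker-target assumption)."""
    return [[1 if v < threshold else 0 for v in row] for row in gray]
```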
Identifying a target image by utilizing the characteristics of translation, rotation and scale factor invariance of the image moment, and then judging the friend or foe attribute of the target image;
calculating invariant moment of the segmented target image by using the binary image after contour extraction as a processing area;
the moment mpq is defined as follows:
the (p+q)-order moment mpq over the points (x, y) in the target image region or on the contour is:
mpq = Σx Σy x^p · y^q · f(x, y),
where p is the highest order of x and q is the highest order of y; f(x, y) is a density function whose value is taken in two cases: when the region moment of the target image is calculated, f(x, y) = 1 inside the target image region and 0 outside it; when the contour moment of the target image is calculated, f(x, y) = 1 on the target image contour and 0 off it;
the center (x0, y0) of the target image is defined as:
x0 = m10/m00, y0 = m01/m00;
the central moment ηpq about the center (x0, y0) is defined as:
ηpq = Σx Σy (x − x0)^p · (y − y0)^q · f(x, y);
M1, M2, M3, M4, M5, M6, M7 (the Hu invariant-moment set) are then defined as follows:
M1 = η20 + η02;
M2 = (η20 − η02)² + 4η11²;
M3 = (η30 − 3η12)² + (3η21 − η03)²;
M4 = (η30 + η12)² + (η03 + η21)²;
M5 = (η30 − 3η12)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] + (3η21 − η03)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²];
M6 = (η20 − η02)[(η30 + η12)² − (η21 + η03)²] + 4η11(η30 + η12)(η21 + η03);
M7 = (3η21 − η03)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] − (η30 − 3η12)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²];
Converting the invariant into an orthogonal projection invariant:
M′1=r2;
M′2=M2/r4;
M′3=M3/r6;
M′4=M4/r7;
M′5=M5/r12;
M′6=M6/r8;
M′7=M7/r12;
The orthogonal-projection invariant moments M′2, M′3, M′4, M′5, M′6, M′7 of the target image region and the orthogonal-projection invariant moments M′2, M′3, M′4, M′5, M′6, M′7 of the contour, 12 variables in total, form the feature vector of the target image;
the feature vector of the target image is used as an input quantity for neural network classification and identification;
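The region/contour moment calculation above can be sketched as follows for a binary image given as a 2-D list of 0/1 values. This is an illustrative implementation of the standard definitions, with η taken as the unnormalized central moment as in the text; only M1–M4 are shown, and the function names are ours:

```python
def raw_moment(img, p, q):
    """m_pq = sum over pixels of x^p * y^q * f(x, y); f = 1 where img is 1."""
    return float(sum((x ** p) * (y ** q)
                     for y, row in enumerate(img)
                     for x, v in enumerate(row) if v))

def invariant_moments(img):
    """Return (M1, M2, M3, M4) built from central moments eta_pq."""
    m00 = raw_moment(img, 0, 0)
    x0, y0 = raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

    def eta(p, q):  # central moment about the region center (x0, y0)
        return sum(((x - x0) ** p) * ((y - y0) ** q)
                   for y, row in enumerate(img)
                   for x, v in enumerate(row) if v)

    M1 = eta(2, 0) + eta(0, 2)
    M2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    M3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2
    M4 = (eta(3, 0) + eta(1, 2)) ** 2 + (eta(0, 3) + eta(2, 1)) ** 2
    return M1, M2, M3, M4
```

Scale normalization (division by the powers of r listed above) would be applied on top of these values.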
The basic approach to assigning an identified object to a category is to determine a decision rule on the basis of sample training, such that classifying objects by this rule minimizes either the misidentification rate or the loss incurred by the classification;
training is performed with known samples to establish a nonlinear model. An expected output value is set for each input sample, and the input is transmitted from the input layer through the hidden layer to the output layer (forward propagation); the difference between the actual output and the expected output is the error; according to the minimum-squared-error rule, the connection weights from the output layer back to the hidden layer are corrected (error back-propagation). As forward propagation and error back-propagation alternate repeatedly, the actual output of the network gradually approaches the expected output, and the error falls to an acceptable level;
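A minimal sketch of the forward-propagation / error-back-propagation training scheme described above. The 12-20-6 layer sizes follow the embodiments; sigmoid activation, the learning rate, and the absence of bias terms are simplifying assumptions of this sketch:

```python
import math
import random

class MLP:
    """Minimal 12-20-6 feed-forward network trained by back-propagation."""

    def __init__(self, n_in=12, n_hid=20, n_out=6, seed=0):
        rnd = random.Random(seed)
        self.w1 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in)]
                   for _ in range(n_hid)]
        self.w2 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_hid)]
                   for _ in range(n_out)]

    @staticmethod
    def _sig(z):
        return 1.0 / (1.0 + math.exp(-z))

    def forward(self, x):
        # Forward propagation: input -> hidden -> output
        h = [self._sig(sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
        o = [self._sig(sum(w * hi for w, hi in zip(row, h))) for row in self.w2]
        return h, o

    def train_step(self, x, target, lr=0.5):
        h, o = self.forward(x)
        # Error back-propagation under the minimum-squared-error rule
        d_o = [(t - oi) * oi * (1 - oi) for t, oi in zip(target, o)]
        d_h = [hi * (1 - hi) * sum(self.w2[k][j] * d_o[k]
                                   for k in range(len(d_o)))
               for j, hi in enumerate(h)]
        for k, row in enumerate(self.w2):       # output-layer weights
            for j in range(len(row)):
                row[j] += lr * d_o[k] * h[j]
        for j, row in enumerate(self.w1):       # hidden-layer weights
            for i in range(len(row)):
                row[i] += lr * d_h[j] * x[i]
        return sum((t - oi) ** 2 for t, oi in zip(target, o))
```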
if the error between the area- and contour-moment feature inputs of the target image and the area- and contour-moment features of a type in the target-aircraft database is smaller than the set error value, the target image is that aircraft in the database. Otherwise learning continues up to a preset number of iterations; if that number is exceeded or the error remains above the set value, the target is judged not to be that type, the neural-network learning for it is exited, and comparison with the next type of target aircraft begins. The input data are thus compared in turn with the moment feature data of the six aircraft types obtained from the target-aircraft database, and the process ends either when the type of the target image is identified or, when none of the six types matches, with the identification judged to have failed (unidentified).
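The sequential comparison against the six database types can be sketched as follows; the function name, the error metric, and the tolerance are illustrative assumptions:

```python
def identify(features, database, tol=0.001):
    """Compare a 12-element feature vector with each aircraft type in turn.

    Returns the first type whose mean squared error against the stored
    features is below `tol`; returns None when no type matches
    (identification failed / target not in database).
    """
    for name, ref in database:
        err = sum((f - r) ** 2 for f, r in zip(features, ref)) / len(ref)
        if err < tol:
            return name
    return None
```

In the full scheme the comparison is made through the trained network rather than directly on raw features; this sketch only shows the accept / move-to-next-type / reject control flow.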
The invention has the positive effects that:
the method comprises the steps of carrying out image graying, segmentation, target detection and other processing according to an image transmitted to a system by an unmanned aerial vehicle sensor to obtain a target image, calculating the area moment and the contour moment of the target image, using a calculation result as input of neural network classification and identification, comparing the input with corresponding data of six types of airplanes of a given target airplane database, judging whether the airplane defined in the database exists in the target image, and judging whether the airplane is an enemy airplane or a friend airplane if the airplane exists, namely, giving qualitative description of the airplane in the target image.
Drawings
FIG. 1 is a schematic view of a viewpoint-to-aircraft relationship as observed in the application of the present invention;
FIG. 2 is a diagram of an embodiment of practical application case 1 of the present invention;
fig. 3 is a diagram of an implementation process of practical application case 2 of the present invention.
Detailed Description
The present invention is further illustrated by the following examples, which do not limit the present invention in any way, and any modifications or changes that can be easily made by a person skilled in the art to the present invention will fall within the scope of the claims of the present invention without departing from the technical solution of the present invention.
Example 1
Establishing a target airplane database:
(1) six types of target aircraft are selected: the FF6 fighter, FF3 fighter, FB bomber, FA attack aircraft, FM1 fighter and FF4 fighter. 3D models of the six target aircraft are generated in 3DMAX from their front-view, top-view and side-view data;
(2) the projection area and contour map of each target-aircraft 3D model are computed as the viewpoint changes, and the area-moment and contour-moment data are stored. The viewpoint varies as follows: the pitch angle ranges over [−90°, 90°] in 15° steps, and the azimuth angle ranges over [−90°, 90°] in 15° steps, giving 1014 (13×13×6) viewpoint renderings of the area moments and contour moments of the orthogonal projections of the six target-aircraft models; the viewpoint-aircraft relation is shown in FIG. 1;
(3) the 1014 (13×13×6) projected area moments and contour moments of the six aircraft models are calculated as in (2) and stored in the target-aircraft data class of the corresponding model. Each database record has the structure: target-aircraft model; pitch angle and azimuth angle of the viewpoint; and the area moments and contour moments of that model at that viewpoint;
(4) the 1014 groups of training sample data from step (3) are input into the designed neural network for learning and classification.
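The viewpoint enumeration of steps (2)–(3) can be sketched as follows; the model names come from the embodiment, and the grid arithmetic reproduces the 13 × 13 × 6 = 1014 record count stated above:

```python
def viewpoint_grid(step=15, lo=-90, hi=90):
    """Pitch and azimuth both sweep [-90 deg, 90 deg] in 15-deg steps."""
    angles = list(range(lo, hi + 1, step))   # 13 values per axis
    return [(pitch, azimuth) for pitch in angles for azimuth in angles]

MODELS = ["FF6", "FF3", "FB", "FA", "FM1", "FF4"]

# One database record per (model, pitch, azimuth); a full record would also
# store the area moments and contour moments rendered at that viewpoint.
records = [(m, p, a) for m in MODELS for p, a in viewpoint_grid()]
```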
Example 2
Step 1: segmenting the input image to determine the target image
Preprocessing an input image (image 1 in figure 2), converting the input image into a gray-scale image, calculating to obtain a gray-scale histogram (gray-scale histogram 1 in figure 2) according to the gray-scale image, selecting a reasonable threshold value according to gray-scale histogram information, comparing the gray-scale value of each pixel in the image with the threshold value, and obtaining a target image area and a target image contour (binary image 1 in figure 2) according to a comparison result;
step 2: calculating the moment of the target image area and the moment of the target image contour
The invariant moment of the segmented target image is calculated using the binarized image (binary image 1 in fig. 2) after contour extraction as a processing region:
orthographic-projection area moments of the target image: M′2 = 2.6222, M′3 = 2.1174, M′4 = 2.2106, M′5 = 2.0906, M′6 = 2.1245, M′7 = 2.1628;
orthographic-projection contour moments of the target image: M′2 = 0.713, M′3 = 0.2137, M′4 = 0.1798, M′5 = 0.1716, M′6 = 0.2378, M′7 = 0.1978;
And step 3: classifying and identifying target images using neural networks
Taking the 12 moments obtained by calculation in the step 2 as input quantities for variable classification and identification of the neural network;
the neural network used in this embodiment consists of an input layer, a hidden layer and an output layer. Adjacent layers are fully connected, that is, every neuron in one layer is connected to every neuron in the preceding layer, while neurons within the same layer are not connected to one another. The input layer has 12 nodes, the hidden layer 20 nodes, and the output layer 6 nodes.
The error value is set to 0.001; if the errors between the area- and contour-moment feature inputs of the target image and the area- and contour-moment features of the target-aircraft database are smaller than this value, the target image is that aircraft in the database. Otherwise learning continues up to the preset maximum of 60000 iterations; if this maximum is exceeded or the error remains above the set value, the target is judged not to be that type, the neural-network learning is exited, and comparison with the next type of target aircraft begins. The input data are compared in turn with the moment feature data of the six aircraft types obtained from the target-aircraft database; if the type of the target image is identified, the process ends, and if none of the six types matches, the identification is judged to have failed (unidentified).
The feature vector of the target image is extracted according to the steps above, and whether the target belongs to one of the six known aircraft types is judged, completing friend-or-foe identification.
Example 3
Step 1: segmenting the input image to determine the target image
Preprocessing an input image (image 2 in figure 3), converting the input image into a gray-scale image, calculating a gray-scale histogram (gray-scale histogram 2 in figure 3) according to the gray-scale image, selecting a reasonable threshold value according to gray-scale histogram information, comparing the gray-scale value of each pixel in the image with the threshold value, and obtaining a target image region and a contour (binary image 2 in figure 3) according to a comparison result;
step 2: calculating the moment of the target image area and the moment of the target image contour
The invariant moment of the segmented target image is calculated using the binarized image (binary image 2 in fig. 3) after contour extraction as a processing region:
orthographic-projection area moments of the target image: M′2 = 3.6923, M′3 = 3.4308, M′4 = 3.1697, M′5 = 3.2811, M′6 = 3.4213, M′7 = 3.2081;
orthographic-projection contour moments of the target image: M′2 = 0.7729, M′3 = 0.3607, M′4 = 0.2368, M′5 = 0.2628, M′6 = 0.3182, M′7 = 0.2834;
And step 3: classifying and identifying target images using neural networks
Taking the 12 moments obtained by calculation in the step 2 as input quantities for variable classification and identification of the neural network;
the neural network used in this embodiment consists of an input layer, a hidden layer and an output layer. Adjacent layers are fully connected, that is, every neuron in one layer is connected to every neuron in the preceding layer, while neurons within the same layer are not connected to one another. The input layer has 12 nodes, the hidden layer 20 nodes, and the output layer 6 nodes.
The error value is set to 0.001; if the errors between the area- and contour-moment feature inputs of the target image and the area- and contour-moment features of the target-aircraft database are smaller than this value, the target image is that aircraft in the database. Otherwise learning continues up to the preset maximum of 60000 iterations selected in this embodiment; if this maximum is exceeded or the error remains above the set value, the target is judged not to be that type, the neural-network learning is exited, and comparison with the next type of target aircraft begins. The input data are compared in turn with the moment feature data of the six aircraft types obtained from the target-aircraft database; if the type of the target image is identified, the process ends, and if none of the six types matches, the identification is judged to have failed (unidentified).
The feature vector of the target image is extracted according to the steps above, and whether the target belongs to one of the six known aircraft types is judged, completing friend-or-foe identification.
Example 4
76 single-aircraft pictures, each showing one of the six aircraft types in the database, were collected from the Internet and identified with the method; the identification results are shown in Table 1:
TABLE 1 airplane image recognition result table
With the application of new and advanced technologies, the amount of information a combat system must process has greatly increased and accuracy requirements keep rising, placing higher demands on the accuracy and speed of target identification. Discovering and identifying enemy targets as early as possible and responding rapidly, thereby seizing the initiative in war, is one of the key factors for victory in modern high-technology warfare.
The method can effectively solve the problem of identifying the enemies of the aerial airplanes during the military operation of the unmanned aerial vehicle, can automatically identify more target airplanes along with the expansion of the target airplane database subsequently, and provides a feasible method for the autonomous operation or the cooperative operation of the unmanned aerial vehicle and the airplanes.
The method can also be applied to navigation guidance, warning systems and defense systems.
Claims (2)
1. A method for an unmanned aerial vehicle to automatically identify an airborne target airplane friend or foe is characterized by comprising the following steps:
1) segmenting an input image, and determining a target image area and a contour in the image;
2) calculating a target image area moment and a contour moment;
3) inputting the target image area moment and the contour moment into a neural network for learning, classifying and identifying, and respectively comparing with six types of target airplane data in a database:
if the data error with respect to a certain aircraft type is smaller than a set value, the target image is that aircraft;
if, after the set maximum number of learning steps, the error with respect to every one of the six aircraft types still exceeds the set value, the target image is not an aircraft in the database, and the system either continues identifying subsequent images or exits identification.
2. A method for unmanned aerial vehicle automatic identification of an airborne target aircraft enemy, according to claim 1, wherein:
the step 1: segmenting the input image to determine the target image area and contour
The input image is preprocessed and treated as a combination of two region types with different gray levels: a target region and a background region. The input image is converted to a grayscale image; a gray-level histogram is computed from it; a reasonable threshold is selected from the histogram information; the gray value of each pixel in the image is compared with the threshold; and the target image region and contour are obtained from the comparison result;
let the gray values of the image be the levels r0, r1, …, rk, and let the number of pixels with gray value ri be ni (i = 0, …, k); the total number of pixels N is then
N = n0 + n1 + … + nk;
the probability P(ri) of gray value ri is:
P(ri) = (number of pixels with gray value ri) / (total number of pixels of the image) = ni/N, i = 0, …, k;
plotting the pairs (ri, P(ri)) as a curve gives the gray-level histogram of the image;
calculating the lowest value between two peaks according to the gray level histogram, wherein the value is the threshold value of the image; segmenting the image by using the threshold value, and separating the target image from the sky background image to obtain a target image area and a target image outline;
step 2: calculating the moment of the target image area and the moment of the target image contour
Identifying a target image by utilizing the characteristics of translation, rotation and scale factor invariance of the image moment, and then judging the friend or foe attribute of the target image;
calculating invariant moment of the segmented target image by using the binary image after contour extraction as a processing area;
the moment mpq is defined as follows:
the (p+q)-order moment mpq over the points (x, y) in the target image region or on the contour is:
mpq = Σx Σy x^p · y^q · f(x, y),
where p is the highest order of x and q is the highest order of y; f(x, y) is a density function whose value is taken in two cases: when the region moment of the target image is calculated, f(x, y) = 1 inside the target image region and 0 outside it; when the contour moment of the target image is calculated, f(x, y) = 1 on the target image contour and 0 off it;
the center (x0, y0) of the target image is defined as:
x0 = m10/m00, y0 = m01/m00;
the central moment ηpq about the center (x0, y0) is defined as:
ηpq = Σx Σy (x − x0)^p · (y − y0)^q · f(x, y);
M1, M2, M3, M4, M5, M6, M7 (the Hu invariant-moment set) are then defined as follows:
M1 = η20 + η02;
M2 = (η20 − η02)² + 4η11²;
M3 = (η30 − 3η12)² + (3η21 − η03)²;
M4 = (η30 + η12)² + (η03 + η21)²;
M5 = (η30 − 3η12)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] + (3η21 − η03)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²];
M6 = (η20 − η02)[(η30 + η12)² − (η21 + η03)²] + 4η11(η30 + η12)(η21 + η03);
M7 = (3η21 − η03)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] − (η30 − 3η12)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²];
Converting the invariant into an orthogonal projection invariant:
M′1=r2;
M′2=M2/r4;
M′3=M3/r6;
M′4=M4/r7;
M′5=M5/r12;
M′6=M6/r8;
M′7=M7/r12;
the orthogonal-projection invariant moments M′2, M′3, M′4, M′5, M′6, M′7 of the target image region and the orthogonal-projection invariant moments M′2, M′3, M′4, M′5, M′6, M′7 of the contour, 12 variables in total, form the feature vector of the target image;
the feature vector of the target image is used as an input quantity for neural network classification and identification;
and step 3: classifying and identifying target images using neural networks
The basic approach to assigning an identified object to a category is to determine a decision rule on the basis of sample training, such that classifying objects by this rule minimizes either the misidentification rate or the loss incurred by the classification;
training is performed with known samples to establish a nonlinear model. An expected output value is set for each input sample, and the input is transmitted from the input layer through the hidden layer to the output layer (forward propagation); the difference between the actual output and the expected output is the error; according to the minimum-squared-error rule, the connection weights from the output layer back to the hidden layer are corrected (error back-propagation). As forward propagation and error back-propagation alternate repeatedly, the actual output of the network gradually approaches the expected output, and the error falls to an acceptable level;
if the error between the area- and contour-moment feature inputs of the target image and the area- and contour-moment features of a type in the target-aircraft database is smaller than the set error value, the target image is that aircraft in the database. Otherwise learning continues up to a preset number of iterations; if that number is exceeded or the error remains above the set value, the target is judged not to be that type, the neural-network learning for it is exited, and comparison with the next type of target aircraft begins. The input data are thus compared in turn with the moment feature data of the six aircraft types obtained from the target-aircraft database, and the process ends either when the type of the target image is identified or, when none of the six types matches, with the identification judged to have failed (unidentified).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110195448.3A CN112967290A (en) | 2021-02-22 | 2021-02-22 | Method for automatically identifying enemies of target aircraft in air by unmanned aerial vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110195448.3A CN112967290A (en) | 2021-02-22 | 2021-02-22 | Method for automatically identifying enemies of target aircraft in air by unmanned aerial vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112967290A true CN112967290A (en) | 2021-06-15 |
Family
ID=76285376
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110195448.3A Pending CN112967290A (en) | 2021-02-22 | 2021-02-22 | Method for automatically identifying enemies of target aircraft in air by unmanned aerial vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112967290A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0628482A (en) * | 1992-07-10 | 1994-02-04 | Video Res:Kk | Image recognizing device using neural network |
CN102930300A (en) * | 2012-11-21 | 2013-02-13 | 北京航空航天大学 | Method and system for identifying airplane target |
CN104615987A (en) * | 2015-02-02 | 2015-05-13 | 北京航空航天大学 | Method and system for intelligently recognizing aircraft wreckage based on error back propagation neural network |
CN104754340A (en) * | 2015-03-09 | 2015-07-01 | 南京航空航天大学 | Reconnaissance image compression method for unmanned aerial vehicle |
CN106571888A (en) * | 2016-11-10 | 2017-04-19 | 中国人民解放军空军航空大学军事仿真技术研究所 | Automatic synchronous reliable communication method for simulation system |
CN108052942A (en) * | 2017-12-28 | 2018-05-18 | 南京理工大学 | A visual image recognition method for aircraft flight attitude |
CN108614991A (en) * | 2018-03-06 | 2018-10-02 | 上海数迹智能科技有限公司 | A depth-image gesture recognition method based on Hu invariant moments |
- 2021-02-22: application CN202110195448.3A filed in CN (status: Pending)
Non-Patent Citations (2)
Title |
---|
Du Yajuan (杜亚娟): "Application of a New Invariant Moment Feature in Image Recognition", Systems Engineering and Electronics (《系统工程与电子技术》), vol. 21, no. 10, pages 71 - 74 *
Yang Yuan (杨远): "Research on Target Image Recognition Methods Based on Neural Networks", China Masters' Theses Full-text Database, Information Science and Technology series (《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑》), no. 3, pages 34 - 46 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116580290A (en) * | 2023-07-11 | 2023-08-11 | 成都庆龙航空科技有限公司 | Unmanned aerial vehicle identification method, unmanned aerial vehicle identification device and storage medium |
CN116580290B (en) * | 2023-07-11 | 2023-10-20 | 成都庆龙航空科技有限公司 | Unmanned aerial vehicle identification method, unmanned aerial vehicle identification device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12072705B2 (en) | Intelligent decision-making method and system for unmanned surface vehicle | |
Bhanu | Automatic target recognition: State of the art survey | |
Vasile et al. | Pose-independent automatic target detection and recognition using 3D LADAR data | |
CN108647573A (en) | A kind of military target recognition methods based on deep learning | |
CN111079090A (en) | Threat assessment method for' low-slow small target | |
CN103984936A (en) | Multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition | |
CN112749761A (en) | Enemy combat intention identification method and system based on attention mechanism and recurrent neural network | |
CN110008899B (en) | Method for extracting and classifying candidate targets of visible light remote sensing image | |
CN104732224B (en) | SAR target identification methods based on two-dimentional Zelnick moment characteristics rarefaction representation | |
Kechagias-Stamatis et al. | 3D automatic target recognition for UAV platforms | |
CN108257179B (en) | Image processing method | |
CN106600613B (en) | Improvement LBP infrared target detection method based on embedded gpu | |
CN112417931A (en) | Method for detecting and classifying water surface objects based on visual saliency | |
CN107016371A (en) | UAV Landing Geomorphological Classification method based on improved depth confidence network | |
Kechagias-Stamatis et al. | Local feature based automatic target recognition for future 3D active homing seeker missiles | |
CN109919246A (en) | Pedestrian's recognition methods again based on self-adaptive features cluster and multiple risks fusion | |
CN107766828A (en) | UAV Landing Geomorphological Classification method based on wavelet convolution neutral net | |
CN112967290A (en) | Method for automatically identifying enemies of target aircraft in air by unmanned aerial vehicle | |
CN112598032A (en) | Multi-task defense model construction method for anti-attack of infrared image | |
Zhang et al. | Research on camouflaged human target detection based on deep learning | |
CN112560799A (en) | Unmanned aerial vehicle intelligent vehicle target detection method based on adaptive target area search and game and application | |
CN112464982A (en) | Target detection model, method and application based on improved SSD algorithm | |
CN116740572A (en) | Marine vessel target detection method and system based on improved YOLOX | |
CN116824345A (en) | Bullet hole detection method and device based on computer vision | |
Mitsudome et al. | Autonomous mobile robot searching for persons with specific clothing on urban walkway |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||