CN112967290A - Method for a UAV to automatically identify friend or foe of an aerial target aircraft - Google Patents

Method for a UAV to automatically identify friend or foe of an aerial target aircraft

Info

Publication number
CN112967290A
CN112967290A (application CN202110195448.3A)
Authority
CN
China
Prior art keywords
target image
image
target
moment
contour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110195448.3A
Other languages
Chinese (zh)
Inventor
潘春萍
赵秀影
郭辉
王宇
邸斐
陈忠莹
姜若冲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PLA AIR FORCE AVIATION UNIVERSITY
Original Assignee
PLA AIR FORCE AVIATION UNIVERSITY
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PLA AIR FORCE AVIATION UNIVERSITY filed Critical PLA AIR FORCE AVIATION UNIVERSITY
Priority to CN202110195448.3A priority Critical patent/CN112967290A/en
Publication of CN112967290A publication Critical patent/CN112967290A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for a UAV to automatically identify friend or foe of an aerial target aircraft. According to an image transmitted to the system by a UAV sensor, the method performs graying, segmentation, target detection and related processing to obtain a target image, calculates the area moments and contour moments of the target image, uses the results as the input of neural network classification and recognition, compares them with the corresponding data of the six aircraft types in a given target aircraft database, judges whether an aircraft defined in the database is present in the target image and, if so, judges whether it is friend or foe, that is, gives a qualitative description of the aircraft in the target image.

Description

Method for a UAV to automatically identify friend or foe of an aerial target aircraft
Technical Field
The invention discloses a method by which a UAV automatically identifies friend or foe of an aerial target aircraft. It can be used to automatically judge whether a target aircraft is friend or foe during autonomous combat or manned-unmanned cooperative combat, aims to enhance the military combat effectiveness of UAVs, and belongs to the technical field of aircraft computer vision.
Background
At present, the world strategic landscape is changing profoundly. Under conditions of nuclear deterrence, informationized local wars against a background of great-power rivalry have become the norm, and recent local wars show that unmanned aerial vehicles have become one of the main aerial forces in such wars, with a great, even decisive, influence on their outcome. Although UAVs play an increasingly important role in winning local wars, their operation has so far required human involvement: target identification is carried out manually by UAV operators or sensor operators. With the development of pattern recognition technology, and of computer vision in particular, advanced technologies are being introduced into image transmission, image processing and image recognition systems. Research on automatic friend-or-foe identification of aerial target aircraft by UAVs enhances a UAV's battlefield situation awareness, and the timeliness and accuracy of its identification information, during autonomous or cooperative combat.
UAVs are now widely used across many industries, and domestic research on interpreting important military targets such as grounded aircraft, bridges and ports in reconnaissance images has produced a large body of results. For reasons of confidentiality, however, no reports on automatic friend-or-foe identification of target aircraft by UAVs have been published.
Disclosure of Invention
The invention aims to provide a method for a UAV to automatically identify friend or foe of an aerial target aircraft, which can enhance the UAV's capability of automatically identifying aerial target aircraft as friend or foe and improve the military effectiveness of autonomous or cooperative UAV operations.
The method of the invention for a UAV to automatically identify friend or foe of an aerial target aircraft adopts the following technical scheme:
1) segmenting an input image, and determining a target image area and a contour in the image;
2) calculating a target image area moment and a contour moment;
3) inputting the target image area moment and the contour moment into a neural network for learning, classifying and identifying, and respectively comparing with six types of target airplane data in a database:
if the data error with a certain airplane is smaller than a set value, the target image is the airplane;
if, for every one of the six aircraft types, the error still exceeds the set value after the set maximum number of learning steps has been completed, the aircraft in the target image is not included in the database, and the system either continues with recognition of further images or exits recognition.
The invention relates to a method for automatically identifying an aerial target airplane friend or foe by an unmanned aerial vehicle, which comprises the following specific steps:
step 1, segmenting an input image, and determining a target image area and a contour
The input image is preprocessed and treated as two classes of regions with different gray levels, a target region and a background region: the input image is converted into a gray-scale image, a gray-level histogram is computed from it, a reasonable threshold is selected from the histogram information, the gray value of each pixel in the image is compared with the threshold, and the target image region and contour are obtained from the comparison result;
Let the gray values of an image take levels 0 to k, and let the number of pixels with gray value i be r_i, i = 0, …, k; the total number of pixels N is then
N = Σ_{i=0}^{k} r_i
The probability P(r_i) of gray value r_i is:
P(r_i) = (number of pixels with gray value r_i) / (total number of pixels in the image), i = 0, …, k;
namely:
P(r_i) = r_i / N
Plotting the curve of r_i against P(r_i) gives the gray-level histogram of the image;
calculating the lowest value between two peaks according to the gray level histogram, wherein the value is the threshold value of the image; segmenting the image by using the threshold value, and separating the target image from the sky background image to obtain a target image area and a target image outline;
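As an illustrative sketch (not part of the patent), the valley-between-peaks thresholding described above could be implemented as follows; the function names, the smoothing window and the minimum peak separation are assumptions:

```python
import numpy as np

def valley_threshold(gray: np.ndarray) -> int:
    """Pick the lowest histogram value between the two dominant peaks."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    hist /= hist.sum()                       # P(r_i) = r_i / N
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")  # suppress noise
    p1 = int(np.argmax(smooth))              # first (global) peak
    far = np.abs(np.arange(256) - p1) > 32   # assumed minimum peak separation
    p2 = int(np.argmax(np.where(far, smooth, -1.0)))  # second peak
    lo, hi = sorted((p1, p2))
    return lo + int(np.argmin(smooth[lo:hi + 1]))     # valley = threshold

def segment(gray: np.ndarray) -> np.ndarray:
    """Binarize: 1 = target, 0 = background (assumes the target is the brighter
    class; invert the comparison for a dark target against a bright sky)."""
    return (gray > valley_threshold(gray)).astype(np.uint8)
```

On a bimodal sky/target image this returns a threshold in the valley between the two gray-level peaks, which is then used to separate target from background.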
step 2, calculating the area moment of the target image and the contour moment of the target image
Identifying a target image by utilizing the characteristics of translation, rotation and scale factor invariance of the image moment, and then judging the friend or foe attribute of the target image;
calculating invariant moment of the segmented target image by using the binary image after contour extraction as a processing area;
The moment m_pq is defined as follows:
the (p+q)-order moment m_pq over the points (x, y) in the target image region or on its contour is:
m_pq = Σ_x Σ_y x^p y^q f(x, y)
where p is the order in x and q is the order in y; f(x, y) is a density function whose value is taken in two cases: when calculating the region moments of the target image, f(x, y) = 1 inside the target image region and 0 outside it; when calculating the contour moments of the target image, f(x, y) = 1 on the target image contour and 0 off it;
the center (x_0, y_0) of the target image is defined as:
x_0 = m_10 / m_00
y_0 = m_01 / m_00
the central moment η_pq about the center (x_0, y_0) of the target image is defined as:
η_pq = Σ_x Σ_y (x - x_0)^p (y - y_0)^q f(x, y)
Then M_1, M_2, M_3, M_4, M_5, M_6, M_7 are defined (following the standard Hu invariant moments) as:
M_1 = η_20 + η_02
M_2 = (η_20 - η_02)^2 + 4η_11^2
M_3 = (η_30 - 3η_12)^2 + (3η_21 - η_03)^2
M_4 = (η_30 + η_12)^2 + (η_03 + η_21)^2
M_5 = (η_30 - 3η_12)(η_30 + η_12)[(η_30 + η_12)^2 - 3(η_21 + η_03)^2] + (3η_21 - η_03)(η_21 + η_03)[3(η_30 + η_12)^2 - (η_21 + η_03)^2]
M_6 = (η_20 - η_02)[(η_30 + η_12)^2 - (η_21 + η_03)^2] + 4η_11(η_30 + η_12)(η_21 + η_03)
M_7 = (3η_21 - η_03)(η_30 + η_12)[(η_30 + η_12)^2 - 3(η_21 + η_03)^2] - (η_30 - 3η_12)(η_21 + η_03)[3(η_30 + η_12)^2 - (η_21 + η_03)^2]
Let
r = sqrt(η_20 + η_02)
and convert the invariants into orthogonal projection invariants:
M'_1 = r^2
M'_2 = M_2 / r^4
M'_3 = M_3 / r^6
M'_4 = M_4 / r^7
M'_5 = M_5 / r^12
M'_6 = M_6 / r^8
M'_7 = M_7 / r^12
The six orthogonal projection invariant moments M'_2, M'_3, M'_4, M'_5, M'_6, M'_7 of the target image region and the six orthogonal projection invariant moments M'_2, M'_3, M'_4, M'_5, M'_6, M'_7 of its contour, 12 variables in total, form the feature vector of the target image;
the feature vector of the target image is used as the input for neural network classification and recognition;
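The moment computations above can be sketched in NumPy as follows; the function and variable names are illustrative (not from the patent), the same routine serves for the region mask and the contour mask, and the exponents of r are taken as given in the text:

```python
import numpy as np

def invariant_moments(f: np.ndarray) -> np.ndarray:
    """Return [M'1, M'2, ..., M'7] for a binary mask f (region or contour = 1)."""
    ys, xs = np.nonzero(f)                   # pixels where f(x, y) = 1
    m00 = float(len(xs))                     # zeroth moment = pixel count
    x0, y0 = xs.sum() / m00, ys.sum() / m00  # centre (m10/m00, m01/m00)

    def eta(p, q):                           # central moment eta_pq
        return float((((xs - x0) ** p) * ((ys - y0) ** q)).sum())

    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    M1 = e20 + e02
    M2 = (e20 - e02) ** 2 + 4 * e11 ** 2
    M3 = (e30 - 3 * e12) ** 2 + (3 * e21 - e03) ** 2
    M4 = (e30 + e12) ** 2 + (e03 + e21) ** 2
    M5 = ((e30 - 3 * e12) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
          + (3 * e21 - e03) * (e21 + e03) * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2))
    M6 = ((e20 - e02) * ((e30 + e12) ** 2 - (e21 + e03) ** 2)
          + 4 * e11 * (e30 + e12) * (e21 + e03))
    M7 = ((3 * e21 - e03) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
          - (e30 - 3 * e12) * (e21 + e03) * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2))
    r = np.sqrt(M1)                          # r = sqrt(eta20 + eta02)
    return np.array([r ** 2, M2 / r ** 4, M3 / r ** 6, M4 / r ** 7,
                     M5 / r ** 12, M6 / r ** 8, M7 / r ** 12])
```

A quick check of the claimed invariance: the returned values are unchanged (up to floating point) when the mask is rotated by 90 degrees.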
step 3, classifying and identifying the target image by using the neural network
The basic approach to classifying a recognized object into a category is to determine a decision rule on the basis of sample training, such that classifying recognized objects by this rule gives the minimum error-recognition rate or the minimum loss;
Training is performed with known samples so as to establish a nonlinear model. An expected output value is set for each input sample, and the input is propagated from the input layer through the hidden layer to the output layer (forward propagation of the pattern). The difference between the actual output and the expected output is the error. Following the least-squares rule, the connection weights from the output layer back to the hidden layer are corrected (back-propagation of the error). As forward propagation and error back-propagation alternate repeatedly, the actual output of the network gradually approaches the corresponding expected output and the error falls to an acceptable level;
If the errors between the area- and contour-moment feature input values of the target image and the area- and contour-moment features in the target aircraft database are smaller than the set error value, the target image is the corresponding aircraft in the database. Otherwise learning continues up to the preset number of iterations; if that number is exceeded, or the error remains greater than the set value, the target is judged not to be that type, the network exits learning for that type and moves on to comparison with the next type of target aircraft. In other words, the input data are compared in turn with the moment feature data of the six aircraft types in the target aircraft database, and recognition ends either when the type of the target image is identified or, if none of the six types matches, with a judgment of recognition failure (not recognized).
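A minimal sketch of one forward/backward training step of such a network, assuming the 12-20-6 layer sizes given later in the embodiments; the sigmoid activations, weight initialization and learning rate are assumptions not fixed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)
# 12 inputs (region + contour moments), 20 hidden nodes, 6 outputs (aircraft types)
W1, b1 = rng.normal(0, 0.5, (12, 20)), np.zeros(20)
W2, b2 = rng.normal(0, 0.5, (20, 6)), np.zeros(6)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, expected, lr=1.0):
    """One pattern-forward / error-backward pass; returns the squared error."""
    global W1, b1, W2, b2
    h = sigmoid(x @ W1 + b1)            # forward: input -> hidden
    y = sigmoid(h @ W2 + b2)            # forward: hidden -> output
    err = y - expected                  # actual output minus expected output
    d2 = err * y * (1 - y)              # backward: output-layer delta
    d1 = (d2 @ W2.T) * h * (1 - h)      # backward: hidden-layer delta
    W2 -= lr * np.outer(h, d2); b2 -= lr * d2
    W1 -= lr * np.outer(x, d1); b1 -= lr * d1
    return 0.5 * float((err ** 2).sum())
```

Repeating `train_step` on a sample drives the squared error down toward an acceptance threshold such as the 0.001 used in the embodiments.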
The invention has the positive effects that:
the method comprises the steps of carrying out image graying, segmentation, target detection and other processing according to an image transmitted to a system by an unmanned aerial vehicle sensor to obtain a target image, calculating the area moment and the contour moment of the target image, using a calculation result as input of neural network classification and identification, comparing the input with corresponding data of six types of airplanes of a given target airplane database, judging whether the airplane defined in the database exists in the target image, and judging whether the airplane is an enemy airplane or a friend airplane if the airplane exists, namely, giving qualitative description of the airplane in the target image.
Drawings
FIG. 1 is a schematic view of a viewpoint-to-aircraft relationship as observed in the application of the present invention;
FIG. 2 is a diagram of an embodiment of practical application case 1 of the present invention;
fig. 3 is a diagram of an implementation process of practical application case 2 of the present invention.
Detailed Description
The present invention is further illustrated by the following examples, which do not limit it in any way; any modification or change that a person skilled in the art can readily make without departing from the technical solution of the invention falls within the scope of the claims.
Example 1
Establishing a target airplane database:
(1) Six types of target aircraft are selected: the FF6 fighter, FF3 fighter, FB bomber, FA attack aircraft, FM1 fighter and FF4 fighter. 3D models of the six target aircraft types are built in 3DMAX from the data of their front, top and side views;
(2) The projected area and contour map of each target aircraft 3D model are obtained as the viewpoint changes, and the area moment and contour moment data are stored. The viewpoint varies as follows: the pitch viewing angle ranges over (-90°, 90°) in steps of 15°, and the azimuth viewing angle ranges over (-90°, 90°) in steps of 15°, giving 1014 (13×13×6) sets of orthogonal-projection area moments and contour moments of the six target aircraft types over all viewpoints. The viewpoint-aircraft relationship is shown in figure 1;
(3) The 1014 (13×13×6) projected area moments and contour moments of the six aircraft types are calculated as in (2) and stored in the target aircraft data class of the corresponding type. The data structure stored in the database is: target aircraft type, pitch and azimuth viewing angles of the viewpoint, and the area moments and contour moments of that aircraft type at that viewpoint;
(4) Using the data from (3), the 1014 groups of training sample data are input into the designed neural network for learning and classification.
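The database record and viewpoint grid of steps (2) and (3) might be represented as follows; the class and field names are illustrative assumptions:

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class TargetRecord:
    model: str               # target aircraft type, e.g. "FF6"
    pitch: int               # pitch viewing angle of the viewpoint, degrees
    azimuth: int             # azimuth viewing angle of the viewpoint, degrees
    area_moments: tuple      # M'2 .. M'7 of the projected region
    contour_moments: tuple   # M'2 .. M'7 of the projected contour

MODELS = ["FF6", "FF3", "FB", "FA", "FM1", "FF4"]
ANGLES = range(-90, 91, 15)              # 13 angles: -90, -75, ..., 90

# 6 models x 13 pitch angles x 13 azimuth angles = 1014 database entries
viewpoints = list(product(MODELS, ANGLES, ANGLES))
assert len(viewpoints) == 1014
```

Enumerating the grid this way reproduces the 1014 (13×13×6) entries stated in the example.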
Example 2
Step 1: segmenting the input image to determine the target image
Preprocessing an input image (image 1 in figure 2), converting the input image into a gray-scale image, calculating to obtain a gray-scale histogram (gray-scale histogram 1 in figure 2) according to the gray-scale image, selecting a reasonable threshold value according to gray-scale histogram information, comparing the gray-scale value of each pixel in the image with the threshold value, and obtaining a target image area and a target image contour (binary image 1 in figure 2) according to a comparison result;
step 2: calculating the moment of the target image area and the moment of the target image contour
The invariant moments of the segmented target image are calculated using the binarized image (binary image 1 in fig. 2) after contour extraction as the processing region:
Orthogonal projection area moments of the target image: M'_2 = 2.6222, M'_3 = 2.1174, M'_4 = 2.2106, M'_5 = 2.0906, M'_6 = 2.1245, M'_7 = 2.1628;
Orthogonal projection contour moments of the target image: M'_2 = 0.713, M'_3 = 0.2137, M'_4 = 0.1798, M'_5 = 0.1716, M'_6 = 0.2378, M'_7 = 0.1978;
And step 3: classifying and identifying target images using neural networks
Take the 12 moments calculated in step 2 as the input variables for neural network classification and recognition;
The neural network used in this embodiment consists of an input layer, a hidden layer and an output layer. Adjacent layers are fully connected, that is, every neuron in one layer is connected to every neuron in the previous layer, while neurons within the same layer are not connected to each other. The input layer has 12 nodes, the hidden layer 20 nodes and the output layer 6 nodes.
The error value is set to 0.001. If the errors between the area- and contour-moment feature input values of the target image and the area- and contour-moment features in the target aircraft database are smaller than this set error value, the target image is the corresponding aircraft in the database; otherwise learning continues up to the preset number of iterations. The chosen number of learning iterations is the database maximum of 60000; if this is exceeded, or the error remains greater than the set value, the target is judged not to be that type, the network exits learning and moves on to comparison with the next type of target aircraft. That is, the input data are compared in turn with the moment feature data of the six aircraft types in the target aircraft database, and recognition ends either when the type of the target image is identified or, if none of the six types matches, with a judgment of recognition failure (not recognized).
The feature vector of the target image is extracted according to the above steps, and it is judged whether the target belongs to one of the six known aircraft types, thereby completing friend-or-foe identification.
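The sequential comparison of step 3 can be sketched as follows, with a simple per-feature error test standing in for the per-type network error criterion; the function name, database layout and tolerance handling are assumptions:

```python
import numpy as np

def identify(features, database, tol=0.001):
    """Compare a 12-D moment feature vector against each aircraft type in turn;
    return the first type whose stored features match within tolerance."""
    for model, stored in database.items():
        err = float(np.max(np.abs(np.asarray(features) - np.asarray(stored))))
        if err < tol:
            return model        # matched: friend or foe is then looked up by type
    return None                 # no type matched: recognition fails / not recognized
```

Returning `None` corresponds to the "not in database" branch, after which the system continues with further images or exits recognition.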
Example 3
Step 1: segmenting the input image to determine the target image
Preprocessing an input image (image 2 in figure 3), converting the input image into a gray-scale image, calculating a gray-scale histogram (gray-scale histogram 2 in figure 3) according to the gray-scale image, selecting a reasonable threshold value according to gray-scale histogram information, comparing the gray-scale value of each pixel in the image with the threshold value, and obtaining a target image region and a contour (binary image 2 in figure 3) according to a comparison result;
step 2: calculating the moment of the target image area and the moment of the target image contour
The invariant moments of the segmented target image are calculated using the binarized image (binary image 2 in fig. 3) after contour extraction as the processing region:
Orthogonal projection area moments of the target image: M'_2 = 3.6923, M'_3 = 3.4308, M'_4 = 3.1697, M'_5 = 3.2811, M'_6 = 3.4213, M'_7 = 3.2081;
Orthogonal projection contour moments of the target image: M'_2 = 0.7729, M'_3 = 0.3607, M'_4 = 0.2368, M'_5 = 0.2628, M'_6 = 0.3182, M'_7 = 0.2834;
And step 3: classifying and identifying target images using neural networks
Take the 12 moments calculated in step 2 as the input variables for neural network classification and recognition;
The neural network used in this embodiment consists of an input layer, a hidden layer and an output layer. Adjacent layers are fully connected, that is, every neuron in one layer is connected to every neuron in the previous layer, while neurons within the same layer are not connected to each other. The input layer has 12 nodes, the hidden layer 20 nodes and the output layer 6 nodes.
The error value is set to 0.001. If the errors between the area- and contour-moment feature input values of the target image and the area- and contour-moment features in the target aircraft database are smaller than this set error value, the target image is the corresponding aircraft in the database; otherwise learning continues up to the preset number of iterations. The number of learning iterations selected in this embodiment is the database maximum of 60000; if this is exceeded, or the error remains greater than the set value, the target is judged not to be that type, the network exits learning and moves on to comparison with the next type of target aircraft. That is, the input data are compared in turn with the moment feature data of the six aircraft types in the target aircraft database, and recognition ends either when the type of the target image is identified or, if none of the six types matches, with a judgment of recognition failure (not recognized).
The feature vector of the target image is extracted according to the above steps, and it is judged whether the target belongs to one of the six known aircraft types, thereby completing friend-or-foe identification.
Example 4:
76 pictures, each showing a single aircraft of one of the six types in the database, were found on the Internet and identified with the method; the recognition results are shown in Table 1:
TABLE 1 airplane image recognition result table
[Table 1 is reproduced as an image in the original publication.]
With the application of advanced technologies, the amount of information a combat system must process has grown enormously and accuracy requirements keep rising, placing higher demands on the accuracy and speed of target identification. Discovering and identifying enemy targets as early as possible and responding quickly, thereby seizing the initiative in war, is one of the key factors for victory in modern high-technology warfare.
The method can effectively solve the problem of identifying the enemies of the aerial airplanes during the military operation of the unmanned aerial vehicle, can automatically identify more target airplanes along with the expansion of the target airplane database subsequently, and provides a feasible method for the autonomous operation or the cooperative operation of the unmanned aerial vehicle and the airplanes.
The method can also be applied to navigation guidance, warning systems and defense systems.

Claims (2)

1. A method for an unmanned aerial vehicle to automatically identify an airborne target airplane friend or foe is characterized by comprising the following steps:
1) segmenting an input image, and determining a target image area and a contour in the image;
2) calculating a target image area moment and a contour moment;
3) inputting the target image area moment and the contour moment into a neural network for learning, classifying and identifying, and respectively comparing with six types of target airplane data in a database:
if the data error with a certain airplane is smaller than a set value, the target image is the airplane;
if, for every one of the six aircraft types, the error still exceeds the set value after the set maximum number of learning steps has been completed, the aircraft in the target image is not included in the database, and the system either continues with recognition of further images or exits recognition.
2. The method for an unmanned aerial vehicle to automatically identify an airborne target aircraft as friend or foe according to claim 1, wherein:
the step 1: segmenting the input image to determine the target image area and contour
The input image is preprocessed and treated as two classes of regions with different gray levels, a target region and a background region: the input image is converted into a gray-scale image, a gray-level histogram is computed from it, a reasonable threshold is selected from the histogram information, the gray value of each pixel in the image is compared with the threshold, and the target image region and contour are obtained from the comparison result;
Let the gray values of an image take levels 0 to k, and let the number of pixels with gray value i be r_i, i = 0, …, k; the total number of pixels N is then
N = Σ_{i=0}^{k} r_i
The probability P(r_i) of gray value r_i is:
P(r_i) = (number of pixels with gray value r_i) / (total number of pixels in the image), i = 0, …, k;
namely:
P(r_i) = r_i / N
Plotting the curve of r_i against P(r_i) gives the gray-level histogram of the image;
calculating the lowest value between two peaks according to the gray level histogram, wherein the value is the threshold value of the image; segmenting the image by using the threshold value, and separating the target image from the sky background image to obtain a target image area and a target image outline;
step 2: calculating the moment of the target image area and the moment of the target image contour
Identifying a target image by utilizing the characteristics of translation, rotation and scale factor invariance of the image moment, and then judging the friend or foe attribute of the target image;
calculating invariant moment of the segmented target image by using the binary image after contour extraction as a processing area;
The moment m_pq is defined as follows:
the (p+q)-order moment m_pq over the points (x, y) in the target image region or on its contour is:
m_pq = Σ_x Σ_y x^p y^q f(x, y)
where p is the order in x and q is the order in y; f(x, y) is a density function whose value is taken in two cases: when calculating the region moments of the target image, f(x, y) = 1 inside the target image region and 0 outside it; when calculating the contour moments of the target image, f(x, y) = 1 on the target image contour and 0 off it;
the center (x_0, y_0) of the target image is defined as:
x_0 = m_10 / m_00
y_0 = m_01 / m_00
the central moment η_pq about the center (x_0, y_0) of the target image is defined as:
η_pq = Σ_x Σ_y (x - x_0)^p (y - y_0)^q f(x, y)
Then M_1, M_2, M_3, M_4, M_5, M_6, M_7 are defined (following the standard Hu invariant moments) as:
M_1 = η_20 + η_02
M_2 = (η_20 - η_02)^2 + 4η_11^2
M_3 = (η_30 - 3η_12)^2 + (3η_21 - η_03)^2
M_4 = (η_30 + η_12)^2 + (η_03 + η_21)^2
M_5 = (η_30 - 3η_12)(η_30 + η_12)[(η_30 + η_12)^2 - 3(η_21 + η_03)^2] + (3η_21 - η_03)(η_21 + η_03)[3(η_30 + η_12)^2 - (η_21 + η_03)^2]
M_6 = (η_20 - η_02)[(η_30 + η_12)^2 - (η_21 + η_03)^2] + 4η_11(η_30 + η_12)(η_21 + η_03)
M_7 = (3η_21 - η_03)(η_30 + η_12)[(η_30 + η_12)^2 - 3(η_21 + η_03)^2] - (η_30 - 3η_12)(η_21 + η_03)[3(η_30 + η_12)^2 - (η_21 + η_03)^2]
Let
r = sqrt(η_20 + η_02)
and convert the invariants into orthogonal projection invariants:
M'_1 = r^2
M'_2 = M_2 / r^4
M'_3 = M_3 / r^6
M'_4 = M_4 / r^7
M'_5 = M_5 / r^12
M'_6 = M_6 / r^8
M'_7 = M_7 / r^12
The six orthogonal projection invariant moments M'_2, M'_3, M'_4, M'_5, M'_6, M'_7 of the target image region and the six orthogonal projection invariant moments M'_2, M'_3, M'_4, M'_5, M'_6, M'_7 of its contour, 12 variables in total, form the feature vector of the target image;
the feature vector of the target image is used as the input for neural network classification and recognition;
and step 3: classifying and identifying target images using neural networks
The basic approach to classifying a recognized object into a category is to determine a decision rule on the basis of sample training, such that classifying recognized objects by this rule gives the minimum error-recognition rate or the minimum loss;
Training is performed with known samples so as to establish a nonlinear model. An expected output value is set for each input sample, and the input is propagated from the input layer through the hidden layer to the output layer (forward propagation of the pattern). The difference between the actual output and the expected output is the error. Following the least-squares rule, the connection weights from the output layer back to the hidden layer are corrected (back-propagation of the error). As forward propagation and error back-propagation alternate repeatedly, the actual output of the network gradually approaches the corresponding expected output and the error falls to an acceptable level;
If the errors between the area- and contour-moment feature input values of the target image and the area- and contour-moment features in the target aircraft database are smaller than the set error value, the target image is the corresponding aircraft in the database. Otherwise learning continues up to the preset number of iterations; if that number is exceeded, or the error remains greater than the set value, the target is judged not to be that type, the network exits learning for that type and moves on to comparison with the next type of target aircraft. In other words, the input data are compared in turn with the moment feature data of the six aircraft types in the target aircraft database, and recognition ends either when the type of the target image is identified or, if none of the six types matches, with a judgment of recognition failure (not recognized).
CN202110195448.3A 2021-02-22 2021-02-22 Method for automatically identifying enemies of target aircraft in air by unmanned aerial vehicle Pending CN112967290A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110195448.3A CN112967290A (en) 2021-02-22 2021-02-22 Method for automatically identifying enemies of target aircraft in air by unmanned aerial vehicle


Publications (1)

Publication Number Publication Date
CN112967290A true CN112967290A (en) 2021-06-15

Family

ID=76285376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110195448.3A Pending CN112967290A (en) 2021-02-22 2021-02-22 Method for automatically identifying enemies of target aircraft in air by unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN112967290A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0628482A (en) * 1992-07-10 1994-02-04 Video Res:Kk Image recognizing device using neural network
CN102930300A (en) * 2012-11-21 2013-02-13 北京航空航天大学 Method and system for identifying airplane target
CN104615987A (en) * 2015-02-02 2015-05-13 北京航空航天大学 Method and system for intelligently recognizing aircraft wreckage based on error back propagation neural network
CN104754340A (en) * 2015-03-09 2015-07-01 南京航空航天大学 Reconnaissance image compression method for unmanned aerial vehicle
CN106571888A (en) * 2016-11-10 2017-04-19 中国人民解放军空军航空大学军事仿真技术研究所 Automatic synchronous reliable communication method for simulation system
CN108052942A (en) * 2017-12-28 2018-05-18 南京理工大学 Visual image recognition method for aircraft flight attitude
CN108614991A (en) * 2018-03-06 2018-10-02 上海数迹智能科技有限公司 Depth-image gesture recognition method based on Hu invariant moments


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Du Yajuan: "Application of a new invariant moment feature in image recognition", Systems Engineering and Electronics, vol. 21, no. 10, pages 71 - 74 *
Yang Yuan: "Research on target image recognition methods based on neural networks", China Master's Theses Full-text Database, Information Science and Technology, no. 3, pages 34 - 46 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116580290A (en) * 2023-07-11 2023-08-11 成都庆龙航空科技有限公司 Unmanned aerial vehicle identification method, unmanned aerial vehicle identification device and storage medium
CN116580290B (en) * 2023-07-11 2023-10-20 成都庆龙航空科技有限公司 Unmanned aerial vehicle identification method, unmanned aerial vehicle identification device and storage medium

Similar Documents

Publication Publication Date Title
US20220197281A1 (en) Intelligent decision-making method and system for unmanned surface vehicle
Bhanu Automatic target recognition: State of the art survey
CN108647573A Military target recognition method based on deep learning
CN104408469A (en) Firework identification method and firework identification system based on deep learning of image
CN103984936A (en) Multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition
CN111079090A Threat assessment method for "low, slow and small" targets
CN112749761A (en) Enemy combat intention identification method and system based on attention mechanism and recurrent neural network
CN104732224B SAR target recognition method based on sparse representation of two-dimensional Zernike moment features
Kechagias-Stamatis et al. 3D automatic target recognition for UAV platforms
CN106600613B Improved LBP infrared target detection method based on embedded GPU
Kechagias-Stamatis et al. Local feature based automatic target recognition for future 3D active homing seeker missiles
CN109919246A Pedestrian re-identification method based on adaptive feature clustering and multi-risk fusion
CN109165698A Image classification and recognition method for intelligent transportation, and storage medium
CN112417931A (en) Method for detecting and classifying water surface objects based on visual saliency
CN112967290A (en) Method for automatically identifying enemies of target aircraft in air by unmanned aerial vehicle
CN108614996A Automatic recognition method for military and civilian ships based on deep learning
CN108257179B (en) Image processing method
CN112560799B (en) Unmanned aerial vehicle intelligent vehicle target detection method based on adaptive target area search and game and application
CN112598032B (en) Multi-task defense model construction method for anti-attack of infrared image
CN109858499A Tank and armored target detection method based on Faster R-CNN
Zhang et al. Research on camouflaged human target detection based on deep learning
CN112464982A (en) Target detection model, method and application based on improved SSD algorithm
CN116824345A (en) Bullet hole detection method and device based on computer vision
Mitsudome et al. Autonomous mobile robot searching for persons with specific clothing on urban walkway
CN112906523B (en) Hardware-accelerated deep learning target machine type identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination