CN112906523B - Hardware-accelerated deep learning target machine type identification method


Info

Publication number
CN112906523B
Authority
CN
China
Prior art keywords
target
network
layer
infrared
airplane
Prior art date
Legal status
Active
Application number
CN202110158349.8A
Other languages
Chinese (zh)
Other versions
CN112906523A (en)
Inventor
邵艳明
钮赛赛
周卫文
朱婧文
史庆杰
Current Assignee
Shanghai Aerospace Control Technology Institute
Original Assignee
Shanghai Aerospace Control Technology Institute
Priority date
Filing date
Publication date
Application filed by Shanghai Aerospace Control Technology Institute
Priority to CN202110158349.8A
Publication of CN112906523A
Application granted
Publication of CN112906523B
Legal status: Active
Anticipated expiration

Classifications

    • G06V 20/41: Scenes; scene-specific elements in video content; higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06F 18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/23213: Pattern recognition; non-hierarchical clustering techniques using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
    • G06N 3/045: Neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N 3/08: Neural networks; learning methods


Abstract

The invention discloses a hardware-accelerated deep learning method for identifying target aircraft types. A simulated flight rig built from models of different aircraft types is used to acquire simulated infrared data of airplanes in flight; a convolutional neural network based on an improved YOLOv3 is accelerated in hardware on an FPGA board card; and an upper computer written in Qt reads the simulation data in real time, calls the hardware acceleration board card, and displays and stores the intelligent recognition results. This hardware-accelerated intelligent machine type identification method verifies the feasibility of an embedded implementation of the machine type identification algorithm and provides fuller verification of the software and hardware design for bringing the product to production.

Description

Hardware-accelerated deep learning target machine type identification method
Technical Field
The invention belongs to the field of image target identification, and particularly relates to a deep learning target machine type identification method based on a YOLO v3 algorithm and FPGA board card hardware acceleration.
Background
An enemy aircraft type identification system is an important functional unit of the 21st-century digital battlefield: it can effectively reduce the probability of friendly-fire incidents in modern information warfare and improve combat efficiency. With the growing application of high-tech means in the military field and the increasing destructive power of modern weaponry, machine type identification systems have become one of the key factors affecting modern warfare and national defense security.
Depending on whether the aircraft target to be identified participates in the identification process, friend-or-foe (IFF) aircraft identification systems are divided into cooperative and non-cooperative systems. A cooperative IFF system consists of an interrogator and a transponder and must work in concert with a radar during combat; it offers a simple identification process, high speed, high accuracy, and small size, but its interrogation/response exchanges travel over encrypted signals that the enemy can decipher, spoof, and jam. A non-cooperative IFF system treats the identified target as part of the system's external environment and does not require the target's participation: sensors acquire features such as the target's radiated signals, engine vibration frequency, and radar modulation signals, and the target's friend-or-foe attribute is obtained through feature extraction, target classification and recognition, and comprehensive analysis. Because there is no interrogation/response exchange, the safety of the system is greatly improved.
Although a non-cooperative IFF system needs no interrogation/response process and can identify aircraft targets with passive detection, active detection represented by radar still carries the inherent shortcomings of radar systems, and existing identification systems based on visible-light, infrared, and ultraviolet imaging suffer from short identification range and low identification accuracy. The target recognition algorithm is what guarantees the reliability of a non-cooperative IFF system. As battlefield environments grow more complex, developing a new aircraft target identification algorithm suited to different combat platforms, one with stronger machine type identification capability and higher identification efficiency that detects and identifies in one pass and wins precious combat time, has become a key technology that current non-cooperative IFF systems urgently need.
Disclosure of Invention
The invention solves the following technical problem: by adopting a non-cooperative enemy aircraft model identification scheme based on infrared imaging, the rapidly developing intelligent deep learning algorithms can be fully exploited on top of an extremely secure system; combined with a hardware-accelerated embedded implementation of the deep learning algorithm, target identification capability and identification efficiency are effectively enhanced, target detection and identification are synchronized around the clock, and real-time, accurate machine type identification is achieved.
The technical scheme of the invention is as follows: a hardware accelerated deep learning target machine type identification method comprises the following steps:
(1) Building an airplane model simulation flight system and an infrared image acquisition system;
(2) Collecting simulated flight infrared data of an airplane target, and constructing an infrared target sample library;
(3) Carrying out airplane target information marking on the pictures in the sample library;
(4) Carrying out improved YOLOv3 network model training;
(5) Configuring a YOLOv3 network to an FPGA acceleration unit;
(6) Testing the real-time identification effect of the network model by the upper computer.
In step (1), the simulated flight system is built as follows: six airplane models of different types are fixed on a support that can rotate around a central shaft; each model is connected to a heating device and heated by a DC power supply, and the central-shaft motor is powered, so that the airplane models emit infrared thermal radiation while rotating and the infrared flight scenes of different aircraft types can be simulated.
In step (2), the infrared target sample library is constructed as follows: an infrared image sequence of the target is acquired with infrared acquisition equipment, the images are numbered in acquisition order, and a target data sample set is constructed.
The target information labelling in step (3) is specifically as follows: the target position and type in each infrared image of the target data sample set are labelled manually, one by one, to obtain the center position, length, width, and type of the target machine type in each infrared image; the collected sample set is expanded with 6 sample augmentation modes: horizontal flipping, rotation, mirror transformation, brightness transformation, scaling, and addition of Gaussian white noise; and the sample set is divided into training samples and test samples at a ratio of 8:2.
The improved YOLO v3 network model in step (4) specifically adopts the following improvements:
modifying the input layer of the network model to 256 × 256 × 1;
reducing the 1024-channel layers in the network to 512 channels;
adding residual units to the network structure;
adding one more output scale;
recalculating the candidate box sizes with the k-means algorithm;
merging the convolution layer and the BN layer;
removing the network upsampling process.
The training in step (4) is specifically as follows:
the YOLOv3 network model is configured for training as follows: the network input is set to 256 × 256 × 1, the number of iterations to 50,000, and the initial learning rate to 0.001; during training the learning rate is reduced to 0.1 times its value after 20,000 iterations and reduced to 0.1 times again after 25,000 iterations. The network model is trained on a server with the YOLO v3 framework, using the manually labelled training samples from the infrared target sample library of step (2), to obtain the network weight parameters of the YOLOv3 model. These weights are then used to perform target identification on the test samples from step (3); the target types and positions in the identification results are compared with the manually labelled target types and positions, with the mAP value as the evaluation index. When the mAP is below 50%, the network parameters are adjusted and the network training of step (4) is repeated; when training reaches the set number of iterations and the mAP meets the standard, the weight parameters of the trained network model are saved.
Configuring the YOLOv3 network onto the FPGA acceleration unit in step (5) is specifically as follows:
first, the YOLO v3 network configuration cfg file and network parameter weight file are converted into prototxt-format and caffemodel-format files, respectively; then the Batch Normalization (Batch Norm) layer and the scale layer are fused into the convolution layer, and the weight parameters are quantized to 8 bits with the Ristretto quantization tool on the caffe platform; finally, the conv, route, res, and upsample layers are implemented on the FPGA side, and the yolo output layer is implemented on the PC side.
The upper computer test in step (6) is specifically as follows:
the upper computer reads the image acquired by the infrared image acquisition equipment through a Camera Link image acquisition board card, converts it to 8-bit format, and calls the FPGA board card through a PCIe interface to run the YOLO v3 network model, obtaining the target position and category information in the image and achieving real-time processing, display, and storage of the target airplane type identification result.
Compared with the prior art, the invention has the advantages that:
1) The invention adopts a non-cooperative enemy aircraft model identification scheme based on infrared imaging. On top of an extremely secure system it fully exploits rapidly developing intelligent deep learning algorithms and, combined with a hardware-accelerated embedded implementation of the deep learning algorithm, effectively enhances target identification capability and identification efficiency, synchronizing target detection and identification around the clock and achieving real-time, accurate machine type identification.
2) The method adopts an improved YOLO v3 algorithm that adds residual units to the original network, improving the network's recognition of small, dim targets; combined with a network weight quantization method, it raises the network's inference speed while preserving recognition accuracy.
3) The embedded network model is implemented on an FPGA-board-card-based acceleration unit, which effectively reduces hardware power consumption while maintaining the network's running speed and recognition accuracy.
Drawings
FIG. 1 is a block diagram of the system of the present invention.
FIG. 2 is a schematic view of a flight simulator.
Fig. 3 is the YOLO v3 network structure with the added output scale.
Fig. 4 is an improved upsampling module.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
As shown in Fig. 1, the hardware-accelerated deep learning target machine type identification system of this embodiment comprises an airplane model simulated flight system, infrared image acquisition equipment, and a hardware-accelerated recognition system based on the YOLOv3 convolutional neural network algorithm. The target machine type identification method specifically comprises the following steps:
(1) Build the airplane model simulated flight system and the infrared image acquisition system. Specifically: six airplane models of different types, as shown in Fig. 2, are fixed on a support that can rotate around a central shaft; each model is connected to a heating device and heated by a DC power supply, and the central-shaft motor is powered, so that the airplane models emit infrared thermal radiation while rotating and the infrared flight scenes of different aircraft types can be simulated.
(2) Collect simulated flight infrared data of the airplane target and construct the infrared target sample library. Specifically: an infrared image sequence of the target is acquired by the infrared acquisition device, and a target data sample set is constructed by numbering the pictures in acquisition order, e.g. 0001-0999.
(3) Label the airplane target information in the pictures of the sample library. Specifically: the target position and type in each infrared image of the target data sample set are labelled manually, one by one, to obtain the center position, length, width, and type of the target machine type in each infrared image; the collected sample set is expanded with 6 sample augmentation modes: horizontal flipping, rotation, mirror transformation, brightness transformation, scaling, and addition of Gaussian white noise; and the sample set is divided into training samples and test samples at a ratio of 8:2. A minimal sketch of this augmentation and split follows.
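As a concrete illustration, the augmentation and split can be sketched in Python as follows; this is a minimal sketch, not the patent's implementation, and the sample directory, rotation angle, brightness gain, scale factor, and noise level are assumed values:

    import glob
    import random

    import cv2
    import numpy as np

    def augment(img):
        """Return the six augmented variants of one infrared frame (uint8)."""
        h, w = img.shape[:2]
        rotated = cv2.warpAffine(
            img, cv2.getRotationMatrix2D((w / 2, h / 2), 15, 1.0), (w, h))
        noisy = img.astype(np.float32) + np.random.normal(0.0, 8.0, img.shape)
        return [
            cv2.flip(img, 1),                                                # horizontal flip
            rotated,                                                         # rotation (15 deg assumed)
            cv2.flip(img, 0),                                                # mirror (vertical) transform
            np.clip(img.astype(np.float32) * 1.2, 0, 255).astype(np.uint8),  # brightness transform
            cv2.resize(img, (w // 2, h // 2)),                               # scaling
            np.clip(noisy, 0, 255).astype(np.uint8),                         # Gaussian white noise
        ]

    # 8:2 split of the numbered sample set into training and test samples.
    paths = sorted(glob.glob("samples/*.png"))  # hypothetical sample directory
    random.seed(0)
    random.shuffle(paths)
    split = int(0.8 * len(paths))
    train_paths, test_paths = paths[:split], paths[split:]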
(4) Carry out training based on the improved YOLOv3 network model. Specifically, the YOLOv3 network model is first improved as follows:
(a) Modify the input layer of the network model to 256 × 256 × 1;
(b) Reduce the 1024-channel layers in the network to 512 channels;
(c) Add residual units to the network structure. Specifically, 2 residual units are added in the second residual block of the Darknet53 backbone of YOLOv3, so that more low-level position information about small targets is obtained and the network's feature extraction capability is improved.
(d) Add one more output scale. As shown in Fig. 3, at long range an airplane appears as a small target in the image, occupying few pixels with inconspicuous features in the image as a whole. It is therefore proposed to upsample by 2x the 8x-downsampled feature map output by the original network and concatenate it with the feature map output by the 2nd residual block, establishing fused features at 4x downsampling. This adds a prediction structure at one more scale, improves recognition of scale changes, and raises the detection probability of small-size targets; an illustrative sketch of this fusion follows.
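As an illustration, the added 4x-downsample prediction branch can be sketched in PyTorch as below; the channel counts are assumptions, and the output depth of 33 assumes 3 anchors x (5 box/objectness terms + 6 aircraft classes), values not given in the patent:

    import torch
    import torch.nn as nn

    class ExtraScaleHead(nn.Module):
        """Fuse the 8x feature map (upsampled 2x) with the 2nd residual block output."""

        def __init__(self, ch_8x=128, ch_res2=64, n_out=33):
            super().__init__()
            self.up = nn.Upsample(scale_factor=2, mode="nearest")
            self.fuse = nn.Conv2d(ch_8x + ch_res2, 128, kernel_size=3, padding=1)
            self.pred = nn.Conv2d(128, n_out, kernel_size=1)  # per-cell predictions

        def forward(self, feat_8x, feat_res2):
            # Concatenate along channels at 4x downsampling, then predict.
            x = torch.cat([self.up(feat_8x), feat_res2], dim=1)
            return self.pred(self.fuse(x))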
(e) Recalculate the candidate box sizes with the k-means algorithm. K-means clustering is applied over the number of target candidate boxes and their aspect-ratio dimensions to obtain candidate box aspect ratios suited to the friend-or-foe identification image samples, reducing the difficulty of convergence during network training. The clustering objective is the average IOU:

AvgIOU = \frac{1}{n} \sum_{j=1}^{k} \sum_{i=1}^{n_k} I_{IOU}(B_i, C_j)

where B denotes a sample, i.e., a ground-truth target box; C denotes a cluster center; n_k denotes the number of samples belonging to the k-th cluster center; n denotes the total number of samples; k denotes the number of clusters; I_IOU(B, C) denotes the intersection-over-union of the sample box and the cluster-center box; i is the sample index; and j is the index of the cluster center. A minimal clustering sketch follows.
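As an illustration, the clustering can be sketched with numpy as below, assigning each box to the cluster center of highest IOU (equivalently, distance 1 - IOU); the example box sizes and cluster count are assumptions:

    import numpy as np

    def iou_wh(boxes, centers):
        """IOU between (n, 2) width/height boxes and (k, 2) cluster centers."""
        inter = (np.minimum(boxes[:, None, 0], centers[None, :, 0]) *
                 np.minimum(boxes[:, None, 1], centers[None, :, 1]))
        union = (boxes[:, 0] * boxes[:, 1])[:, None] \
            + centers[:, 0] * centers[:, 1] - inter
        return inter / union

    def kmeans_anchors(boxes, k, iters=100, seed=0):
        """Cluster box sizes; returns (k, 2) anchor widths/heights."""
        rng = np.random.default_rng(seed)
        centers = boxes[rng.choice(len(boxes), size=k, replace=False)]
        for _ in range(iters):
            assign = iou_wh(boxes, centers).argmax(axis=1)  # nearest = highest IOU
            centers = np.array([boxes[assign == j].mean(axis=0)
                                if np.any(assign == j) else centers[j]
                                for j in range(k)])
        return centers

    # Example with hypothetical ground-truth box sizes (pixels):
    boxes = np.array([[12, 8], [30, 14], [9, 6], [42, 20], [15, 9], [60, 28]],
                     dtype=np.float64)
    anchors = kmeans_anchors(boxes, k=3)
    avg_iou = iou_wh(boxes, anchors).max(axis=1).mean()  # the AvgIOU objective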
(f) Merge the convolution layer and the BN layer. In YOLO v3 a Batch Normalization (BN) layer is placed after each convolution layer; batch-normalizing that layer's data accelerates network convergence and controls overfitting, but it adds one more operation to the forward inference process and occupies more memory or video memory. The BN parameters are therefore merged into the convolution layer to raise the model's forward inference speed.
Before merging, the BN layer computes

x_{out} = \gamma \cdot \frac{x_{conv} - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta

where \gamma is the scaling factor, \mu the mean, \sigma^2 the variance, \beta the offset, \epsilon a small constant for numerical stability, x_{out} the BN result, and x_{conv} the result of the convolution in front of the BN layer:

x_{conv} = \sum_i w_i x_i

where x_i is the convolution layer input and w_i are the weight parameters of the convolution layer.

After merging the convolution and BN layers:

x_{out} = \frac{\gamma}{\sqrt{\sigma^2 + \epsilon}} \sum_i w_i x_i + \left( \beta - \frac{\gamma \mu}{\sqrt{\sigma^2 + \epsilon}} \right)

that is,

x_{out} = \sum_i \hat{w}_i x_i + \hat{b}

After merging, the weight parameters become:

\hat{w}_i = \frac{\gamma}{\sqrt{\sigma^2 + \epsilon}} w_i

the bias becomes:

\hat{b} = \beta - \frac{\gamma \mu}{\sqrt{\sigma^2 + \epsilon}}

and the merged calculation becomes:

x_{out} = \sum_i \hat{w}_i x_i + \hat{b}

A minimal fusion sketch follows.
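As an illustration, the fold can be written in a few lines of numpy following the merged formulas above; this is a minimal sketch, with eps as the usual numerical-stability constant and an optional convolution bias b (zero here, matching the formulas) included for generality:

    import numpy as np

    def fuse_conv_bn(w, gamma, beta, mu, var, b=0.0, eps=1e-5):
        """Fold BN (gamma, beta, mu, var) into conv weights w: (out_ch, in_ch, kh, kw).

        gamma, beta, mu, var are (out_ch,) vectors; b is the conv bias
        (scalar 0 or an (out_ch,) vector). Returns fused weights and bias.
        """
        scale = gamma / np.sqrt(var + eps)        # gamma / sqrt(sigma^2 + eps)
        w_fused = w * scale[:, None, None, None]  # w_hat_i = scale * w_i
        b_fused = scale * (b - mu) + beta         # b_hat = beta - scale * mu when b = 0
        return w_fused, b_fused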
(g) Remove the network upsampling process. As shown in Fig. 4, the upsampling operation in the YOLO v3 network is relatively time-consuming for the benefit it brings, so, on the premise of losing neither network accuracy nor the network's design intent, the upsampling process in the network is removed and downsampling is implemented instead with stride-2 convolutions, so that detection at multiple scales can still be achieved.
The improved YOLOv3 network model structure is then trained. The YOLOv3 network model is configured for training as follows: the network input is set to 256 × 256 × 1, the number of iterations to 50,000, and the initial learning rate to 0.001; during training the learning rate is reduced to 0.1 times its value after 20,000 iterations and reduced to 0.1 times again after 25,000 iterations. The network model is trained on a server with the YOLO v3 framework, using the training samples manually labelled in step (3), to obtain the network weight parameters of the YOLOv3 model. These weights are then used to perform target identification on the test samples; the target types and positions in the identification results are compared with the manually labelled target types and positions, with the mAP value as the evaluation index. When the mAP is below 50%, the network parameters are adjusted and the network training of step (4) is repeated; when training reaches the set number of iterations and the mAP meets the standard, the weight parameters of the trained network model are saved. A configuration sketch follows.
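For reference, a darknet-style cfg [net] section matching the stated training configuration might look as follows; the batch size, momentum, and weight decay values are assumptions not given in the patent:

    [net]
    batch=64
    width=256
    height=256
    channels=1
    momentum=0.9
    decay=0.0005
    learning_rate=0.001
    max_batches=50000
    policy=steps
    steps=20000,25000
    scales=.1,.1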
(5) Configure the YOLOv3 network onto the FPGA acceleration unit. Specifically: first, the YOLO v3 network configuration cfg file and network parameter weight file are converted into prototxt-format and caffemodel-format files, respectively; then the Batch Normalization layer (Batch Norm layer) and the scale layer are fused into the convolution layer, and the weight parameters are quantized to 8 bits with the Ristretto tool on the caffe platform; finally, the conv, route, res, and upsample layers are implemented on the FPGA side, and the yolo output layer is implemented on the PC side. A quantization sketch follows.
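As an illustration of the 8-bit step, a dynamic fixed-point scheme in the spirit of Ristretto (this is not Ristretto's actual API) picks a power-of-two scale covering the layer's weight range, then rounds and clips to int8:

    import numpy as np

    def quantize_8bit(w):
        """Quantize a layer's weights to int8 dynamic fixed point.

        Returns the int8 weights and the fractional bit count fl,
        i.e. real_value ~= q * 2**-fl.
        """
        il = int(np.ceil(np.log2(np.abs(w).max() + 1e-12))) + 1  # integer bits incl. sign
        fl = 8 - il                                              # fractional bits
        q = np.clip(np.round(w * 2.0 ** fl), -128, 127).astype(np.int8)
        return q, fl

    def dequantize(q, fl):
        return q.astype(np.float32) * 2.0 ** -fl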
(6) Test the real-time identification effect of the network model with the upper computer. Specifically, the upper computer reads the image acquired by the infrared image acquisition equipment through a Camera Link image acquisition board card, converts it to 8-bit format, and calls the FPGA board card through a PCIe interface to run the YOLO v3 network model, obtaining the target position and category information in the image and achieving real-time processing, display, and storage of the target airplane type identification result. The specific workflow of the Qt-based upper computer is: read and display the camera image in real time, process the read image in real time by calling the FPGA board card over the PCIe interface, and process, display, and store the target airplane model identification result in real time. A sketch of the 8-bit conversion follows.
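As an illustration, the host-side conversion to the 8-bit format handed to the FPGA might be sketched as below; the raw sensor bit depth is not stated in the patent, so a simple percentile contrast stretch is assumed:

    import numpy as np

    def to_8bit(frame):
        """Map a raw infrared frame (any integer depth) to uint8."""
        lo, hi = np.percentile(frame, (1, 99))        # robust contrast stretch
        scaled = (frame.astype(np.float32) - lo) / max(float(hi - lo), 1.0)
        return np.clip(scaled * 255.0, 0.0, 255.0).astype(np.uint8)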

Claims (6)

1. A hardware accelerated deep learning target machine type identification method is characterized by comprising the following steps:
(1) Building an airplane model simulation flight system and an infrared image acquisition system;
(2) Collecting simulated flight infrared data of an airplane target, and constructing an infrared target sample library;
(3) Carrying out airplane target information marking on the pictures in the sample library;
(4) Carrying out improved YOLOv3 network model training;
(5) Configuring a YOLOv3 network to an FPGA acceleration unit;
(6) Testing the real-time identification effect of the network model by the upper computer;
in step (1), the simulated flight system is built as follows: six airplane models of different types are fixed on a support that can rotate around a central shaft; each model is connected to a heating device and heated by a DC power supply, and the central-shaft motor is powered, so that the airplane models emit infrared thermal radiation while rotating and the infrared flight scenes of different aircraft types can be simulated;
the improved YOLO v3 network model in step (4) specifically adopts the following improvements:
modifying the input layer of the network model to 256 × 256 × 1;
reducing the 1024-channel layers in the network to 512 channels;
adding residual units to the network structure;
adding one more output scale;
recalculating the candidate box sizes with the k-means algorithm;
merging the convolution layer and the BN layer;
removing the network upsampling process.
2. The hardware-accelerated deep learning target machine type identification method according to claim 1, wherein in step (2) the infrared target sample library is constructed as follows: an infrared image sequence of the target is acquired with infrared acquisition equipment, the pictures are numbered in acquisition order, and a target data sample set is constructed.
3. The hardware-accelerated deep learning target machine type identification method according to claim 1, wherein the target information labelling in step (3) is specifically as follows: the target position and type in each infrared image of the target data sample set are labelled manually, one by one, to obtain the center position, length, width, and type of the target machine type in each infrared image; the collected sample set is expanded with 6 sample augmentation modes: horizontal flipping, rotation, mirror transformation, brightness transformation, scaling, and addition of Gaussian white noise; and the sample set is divided into training samples and test samples at a ratio of 8:2.
4. The hardware-accelerated deep learning target machine type identification method according to claim 1, wherein the training in step (4) is specifically as follows:
the YOLOv3 network model is configured for training as follows: the network input is set to 256 × 256 × 1, the number of iterations to 50,000, and the initial learning rate to 0.001; during training the learning rate is reduced to 0.1 times its value after 20,000 iterations and reduced to 0.1 times again after 25,000 iterations; the network model is trained on a server with the YOLO v3 framework, using the manually labelled training samples from the infrared target sample library of step (2), to obtain the network weight parameters of the YOLOv3 model; these weights are then used to perform target identification on the test samples from step (3), and the target types and positions in the identification results are compared with the manually labelled target types and positions, with the mAP value as the evaluation index; when the mAP is below 50%, the network parameters are adjusted and the network training of step (4) is repeated; when training reaches the set number of iterations and the mAP meets the standard, the weight parameters of the trained network model are saved.
5. The hardware-accelerated deep learning target machine type identification method according to claim 1, wherein configuring the YOLOv3 network onto the FPGA acceleration unit in step (5) is specifically as follows:
first, the YOLO v3 network configuration cfg file and network parameter weight file are converted into prototxt-format and caffemodel-format files, respectively; then the Batch Normalization (Batch Norm) layer and the scale layer are fused into the convolution layer, and the weight parameters are quantized to 8 bits with the Ristretto quantization tool on the caffe platform; finally, the conv, route, res, and upsample layers are implemented on the FPGA side, and the yolo output layer is implemented on the PC side.
6. The hardware-accelerated deep learning target machine type identification method according to claim 1, wherein the upper computer test in step (6) is specifically as follows:
the upper computer reads the image acquired by the infrared image acquisition equipment through a Camera Link image acquisition board card, converts it to 8-bit format, and calls the FPGA board card through a PCIe interface to run the YOLO v3 network model, obtaining the target position and category information in the image and achieving real-time processing, display, and storage of the target airplane type identification result.
Application CN202110158349.8A, filed 2021-02-04 (priority date 2021-02-04): Hardware-accelerated deep learning target machine type identification method. Granted as CN112906523B (Active).

Priority Applications (1)

CN202110158349.8A (granted as CN112906523B): Hardware-accelerated deep learning target machine type identification method, priority and filing date 2021-02-04

Publications (2)

Publication Number · Publication Date
CN112906523A (en) · 2021-06-04
CN112906523B (en) · 2022-12-27

Family

ID=76122602

Family Applications (1)

CN202110158349.8A (Active, granted as CN112906523B), filed 2021-02-04: Hardware-accelerated deep learning target machine type identification method

Country Status (1): CN

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114384940B (en) * 2022-03-25 2022-06-07 Beijing Aerospace Chenxin Technology Co., Ltd. Embedded recognition model obtaining method and system applied to civil unmanned aerial vehicle


Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330405A (en) * 2017-06-30 2017-11-07 上海海事大学 Remote sensing images Aircraft Target Recognition based on convolutional neural networks
CN108509986A (en) * 2018-03-16 2018-09-07 上海海事大学 Based on the Aircraft Target Recognition for obscuring constant convolutional neural networks
CN110516560B (en) * 2019-08-05 2022-12-02 西安电子科技大学 Optical remote sensing image target detection method based on FPGA heterogeneous deep learning
CN111401148B (en) * 2020-02-27 2023-06-20 江苏大学 Road multi-target detection method based on improved multi-stage YOLOv3
CN111460968B (en) * 2020-03-27 2024-02-06 上海大学 Unmanned aerial vehicle identification and tracking method and device based on video
CN111563557B (en) * 2020-05-12 2023-01-17 山东科华电力技术有限公司 Method for detecting target in power cable tunnel
CN111783974A (en) * 2020-08-12 2020-10-16 成都佳华物链云科技有限公司 Model construction and image processing method and device, hardware platform and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110018170A (en) * 2019-04-15 2019-07-16 中国民航大学 A kind of small-sized damage positioning method of aircraft skin based on honeycomb moudle
CN111815513A (en) * 2020-06-09 2020-10-23 四川虹美智能科技有限公司 Infrared image acquisition method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Aircraft target recognition method based on a discriminant locality preserving projections algorithm; Zhang Shanwen et al.; Computer Engineering & Science; 2013-06-15 (No. 06); pp. 118-122 *
Research on computer-vision grading of rambutan color quality; Zhang Chenghui et al.; Transactions of the Chinese Society of Agricultural Engineering; 2006-09-30 (No. 11); pp. 108-111 *

Also Published As

Publication number Publication date
CN112906523A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN111325115B (en) Cross-modal countervailing pedestrian re-identification method and system with triple constraint loss
CN111274970A (en) Traffic sign detection method based on improved YOLO v3 algorithm
CN108428255A (en) A kind of real-time three-dimensional method for reconstructing based on unmanned plane
CN109255286A (en) A kind of quick detection recognition method of unmanned plane optics based on YOLO deep learning network frame
CN109344878B (en) Eagle brain-like feature integration small target recognition method based on ResNet
CN111832568A (en) License plate recognition method, and training method and device of license plate recognition model
CN112801230A (en) Intelligent acceptance method for unmanned aerial vehicle of power distribution line
CN110276286B (en) Embedded panoramic video stitching system based on TX2
CN112906523B (en) Hardware-accelerated deep learning target machine type identification method
CN115862055A (en) Pedestrian re-identification method and device based on comparison learning and confrontation training
CN116087880A (en) Radar radiation source signal sorting system based on deep learning
CN113673527B (en) License plate recognition method and system
CN112489089B (en) Airborne ground moving target identification and tracking method for micro fixed wing unmanned aerial vehicle
CN114170565A (en) Image comparison method and device based on unmanned aerial vehicle aerial photography and terminal equipment
CN106897730A (en) SAR target model recognition methods based on fusion classification information with locality preserving projections
CN112052829B (en) Pilot behavior monitoring method based on deep learning
CN104615987B (en) A kind of the wreckage of an plane intelligent identification Method and system based on error-duration model neutral net
CN111104965A (en) Vehicle target identification method and device
CN114463685A (en) Behavior recognition method and device, electronic equipment and storage medium
CN113743251B (en) Target searching method and device based on weak supervision scene
CN113343903B (en) License plate recognition method and system in natural scene
CN110108719A (en) It is a kind of based on intelligent glasses around machine check method and system
CN112967290A (en) Method for automatically identifying enemies of target aircraft in air by unmanned aerial vehicle
CN114627493A (en) Gait feature-based identity recognition method and system
CN110796112A (en) In-vehicle face recognition system based on MATLAB

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant