CN111414997B - Artificial intelligence-based method for battlefield target recognition - Google Patents

Artificial intelligence-based method for battlefield target recognition

Info

Publication number
CN111414997B
CN111414997B
Authority
CN
China
Prior art keywords
target frame
target
battlefield
learning
optimization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010231438.6A
Other languages
Chinese (zh)
Other versions
CN111414997A (en)
Inventor
权文
宋亚飞
路艳丽
王坚
王亚男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Air Force Engineering University of PLA
Original Assignee
Air Force Engineering University of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Air Force Engineering University of PLA
Priority to CN202010231438.6A
Publication of CN111414997A
Application granted
Publication of CN111414997B
Legal status: Active

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Astronomy & Astrophysics (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an artificial intelligence-based method for battlefield target identification, which comprises the following steps. Step 1: image preprocessing optimization. Step 2: learning rate optimization, in which a decay rate is set and the learning rate is reduced after a designated number of training steps to prevent oscillation. Step 3: multi-resolution learning and identification. Step 4: non-maximum suppression. The beneficial effects are as follows: the acquired optical image data are processed for recognition; specifically, a deep convolutional neural network is trained on image data collected by an aerial unmanned aerial vehicle. Addressing three problems of the original model, namely overfitting, redundant recognition, and insufficient recognition accuracy, the algorithm model is upgraded according to the characteristics of the optical images returned by unmanned aerial vehicle reconnaissance, and through image preprocessing optimization, learning rate optimization, and transfer learning, a neural network capable of rapidly recognizing multiple types of battlefield targets is generated.

Description

Artificial intelligence-based method for battlefield target recognition
Technical Field
The invention belongs to the field of artificial intelligence and deep learning, and particularly relates to a target identification method based on artificial intelligence.
Background
Future warfare will be intelligent warfare. To improve battlefield decision-making capability, the first problem to be solved is intelligent target identification in a strong and complex electromagnetic environment. Traditional non-intelligent methods such as machine learning and expert systems lack self-learning capability and therefore have difficulty solving the intelligent target recognition problem.
At present, deep learning algorithms are used to perceive the battlefield environment, and artificial intelligence technology is used to acquire timely and accurate comprehensive battlefield situation information to assist battlefield commanders in rapid command and decision-making.
Most research on target recognition is based on radar information, but in a complex electromagnetic environment such recognition is severely limited, whereas images acquired by optical sensors are far less affected by such interference. However, existing artificial-intelligence-based image recognition algorithms suffer from overfitting, redundant recognition, and insufficient recognition accuracy, and cannot meet the requirements of battlefield target recognition.
Disclosure of Invention
The invention aims to provide a method for battlefield target recognition based on artificial intelligence, which utilizes an improved deep learning intelligent algorithm to recognize targets and can improve the recognition capability of a reconnaissance unmanned aerial vehicle on battlefield targets in a complex environment.
The technical scheme of the invention is as follows: a method for battlefield target identification based on artificial intelligence specifically comprises the following steps:
step 1: image preprocessing optimization;
step 2: optimizing the learning rate;
setting a decline rate, and reducing the original learning rate after training a designated step length so as to prevent oscillation;
step 3: multi-resolution learning and identification;
step 4: non-maximum suppression.
The step 1 comprises the following steps:
(1) Generating a target frame;
(2) Optimizing image transformation;
(3) Gaussian blur.
Step (1) of step 1 comprises generating target frames through the Transfer-Faster-RCNN model. Before a target is identified, a target frame is first generated and represented by a four-dimensional vector (x, y, w, h), where x is the abscissa of the target frame center point, y is the ordinate of the target frame center point, w is the width of the target frame, and h is the height of the target frame. Let
A=(Ax,Ay,Aw,Ah) (1)
G=(Gx,Gy,Gw,Gh) (2)
wherein A is the original target frame data set, and G is the real target frame data set.
The goal is to find a mapping such that the input original window A is mapped to a regression window G′ closer to the real frame G; the translation transformation gives:
G′x=Ax+Aw·dx(A) (3)
G′y=Ay+Ah·dy(A) (4)
wherein dx(A) and dy(A) represent the translation amounts, G′x represents the abscissa of the translated center point, and G′y represents the ordinate of the translated center point;
Step (2) of step 1 comprises finding an image transformation F such that
F(Ax,Ay,Aw,Ah)=(G′x,G′y,G′w,G′h) (5)
The calculation of F can be achieved by translation and scaling:
G′w=Aw·dw(A) (6)
G′h=Ah·dh(A) (7)
where dw(A) and dh(A) represent the scaling amounts, G′w represents the scaled target frame width, and G′h represents the scaled target frame height;
construction of objective functions
d*(A) = w*ᵀ·φ(A) (8)
wherein φ(A) is the feature vector composed of the corresponding feature maps and w* is the parameter vector to be learned; d*(A) is the obtained predicted value (* stands for one of x, y, w, h, i.e. each transformation corresponds to one objective function of the above form). In order to minimize the difference between the predicted values dx(A), dy(A), dw(A), dh(A) and the true values tx, ty, tw, th, the cost function Loss is as follows:
Loss = Σ(i=1..N) (t*ᵢ − w*ᵀ·φ(Aᵢ))² (9)
wherein t*ᵢ represents the true value of the i-th target frame (for * = x, y, its true center point coordinates), N represents the number of feature maps, and Aᵢ represents the target frame of the i-th feature map;
the function optimization objective w is:
Figure GDA0004201704060000033
step (2) in step 1 includes, before loading data into the fast-CNN model, performing gaussian blur and exposure processing to different degrees on the same picture:
Figure GDA0004201704060000034
wherein p and q are pixel point positions in each RGB channel, and sigma is an exposure degree coefficient.
In step 3, the bicubic interpolation algorithm, which incurs the least image quality loss in the processed image, is selected; the interpolation is expressed by the following kernel function W(m):
W(m) = (a+2)|m|³ − (a+3)|m|² + 1, for |m| ≤ 1
W(m) = a|m|³ − 5a|m|² + 8a|m| − 4a, for 1 < |m| < 2
W(m) = 0, otherwise (12)
wherein m is the independent variable and a is the adjustment value.
The step 4 comprises the following steps:
(1) Calculating the area ratio IoU of the overlapping area of each target frame with its adjacent target frames;
(2) Comparing IoU with a threshold and changing the confidence of the adjacent target frame:
sᵢ = sᵢ, if IoU < Nt
sᵢ = sᵢ·(1 − IoU), if IoU ≥ Nt (13)
wherein sᵢ is the confidence of each target frame and Nt is the set threshold.
The invention has the following beneficial effects. The invention uses a deep learning method to recognize the acquired optical image data; specifically, a deep convolutional neural network is trained on image data collected by an aerial unmanned aerial vehicle. Addressing the three problems of the original model, namely overfitting, redundant recognition, and insufficient recognition accuracy, the algorithm model is upgraded according to the characteristics of the optical images returned by unmanned aerial vehicle reconnaissance, and through image preprocessing optimization, learning rate optimization, and transfer learning, a neural network capable of rapidly recognizing multiple types of battlefield targets is generated. Through the optimization and transfer learning strategies, the problems of overfitting and redundant recognition are effectively solved, and target recognition accuracy is remarkably improved.
Drawings
FIG. 1 is a Faster-RCNN model;
FIG. 2 is the YOLO v3 model;
FIG. 3 is the Transfer-Faster-RCNN model;
FIG. 4 is a graph showing the results of the YOLO v3 model recognition;
FIG. 5 is a graph showing the results of the Faster-RCNN model recognition.
Detailed Description
The invention will be described in further detail with reference to the accompanying drawings and specific examples.
Typically, training a convolutional neural network requires a large amount of data, but owing to the complexity and high timeliness of the real battlefield environment, it is very difficult to collect a large amount of image information containing real battlefield targets. According to the actual battlefield environment requirements, the invention introduces a transfer learning algorithm and adapts an already trained model to the new requirement. Since only the last single-layer fully connected network in a trained deep learning model distinguishes the various image classes, the preceding input and convolutional layers can be used to extract a feature vector from any image, and a new classifier can be trained using the extracted feature vectors as input. On the basis of the Faster-RCNN deep learning model shown in FIG. 1, a transfer learning algorithm is therefore introduced, and the Transfer-Faster-RCNN model (shown in FIG. 3) is established by optimizing the trained model to meet the requirements of battlefield target recognition.
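The patent does not name a software framework; as a hedged illustration only, the following sketch expresses the freeze-the-backbone, replace-the-head strategy described above in PyTorch/torchvision terms, using torchvision's stock Faster R-CNN as a stand-in for the Transfer-Faster-RCNN model. The class count (3 target types plus background) is taken from the experiments below.

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a Faster R-CNN pretrained on a large generic dataset (COCO).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# Freeze the pretrained input and convolutional layers so they serve
# purely as a feature extractor, as described above.
for param in model.backbone.parameters():
    param.requires_grad = False

# Replace only the final predictor with a new head for the battlefield
# classes (3 target types + background).
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=4)

# Only the unfrozen parameters are handed to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]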
The invention provides a method for battlefield target identification based on artificial intelligence, which specifically comprises the following steps:
step 1: image preprocessing optimization;
in one embodiment of the present invention, step 1 comprises the following:
(1) Generating a target frame
The target frames are generated through the Transfer-Faster-RCNN model, which supports input pictures of any size. Before a target is identified, a target frame is first generated and represented by a four-dimensional vector (x, y, w, h), where x is the abscissa of the target frame center point, y is the ordinate of the target frame center point, w is the width of the target frame, and h is the height of the target frame. Let
A=(Ax,Ay,Aw,Ah) (1)
G=(Gx,Gy,Gw,Gh) (2)
wherein A is the original target frame data set, and G is the real target frame data set.
The goal is to find a mapping such that the input original window A is mapped to a regression window G′ closer to the real frame G; the translation transformation gives:
G′x=Ax+Aw·dx(A) (3)
G′y=Ay+Ah·dy(A) (4)
wherein dx(A) and dy(A) represent the translation amounts, G′x represents the abscissa of the translated center point, and G′y represents the ordinate of the translated center point.
(2) Image transformation optimization
Because of mechanical uncertainty in the camera and the carrier platform, the direction and size of the acquired image are likely to be offset. The image is therefore scaled and rotated to different degrees so that, after the model is loaded, the system can more sensitively identify the same target at different angles and sizes.
That is, find an image transformation F such that
F(Ax,Ay,Aw,Ah)=(G′x,G′y,G′w,G′h) (5)
The calculation of F can be achieved by translation and scaling:
G′w=Aw·dw(A) (6)
G′h=Ah·dh(A) (7)
where dw(A) and dh(A) represent the scaling amounts, G′w represents the scaled target frame width, and G′h represents the scaled target frame height.
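As an illustration, equations (3) to (7) can be collected into one function; this is a sketch in plain Python under the corrected dw/dh notation above. Note that the standard Faster R-CNN formulation applies the scaling through an exponential, as flagged in the comments.

def apply_regression(A, d):
    # A = (Ax, Ay, Aw, Ah): original target frame (center x, center y, width, height)
    # d = (dx, dy, dw, dh): predicted translation and scaling amounts
    Ax, Ay, Aw, Ah = A
    dx, dy, dw, dh = d
    Gx = Ax + Aw * dx  # eq. (3): translated center abscissa
    Gy = Ay + Ah * dy  # eq. (4): translated center ordinate
    Gw = Aw * dw       # eq. (6); Faster R-CNN conventionally uses Aw * exp(dw)
    Gh = Ah * dh       # eq. (7); Faster R-CNN conventionally uses Ah * exp(dh)
    return (Gx, Gy, Gw, Gh)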
Construction of objective functions
d*(A) = w*ᵀ·φ(A) (8)
wherein φ(A) is the feature vector composed of the corresponding feature maps and w* is the parameter vector to be learned; d*(A) is the obtained predicted value (* stands for one of x, y, w, h, i.e. each transformation corresponds to one objective function of the above form). In order to minimize the difference between the predicted values dx(A), dy(A), dw(A), dh(A) and the true values tx, ty, tw, th, the cost function Loss is as follows:
Loss = Σ(i=1..N) (t*ᵢ − w*ᵀ·φ(Aᵢ))² (9)
wherein t*ᵢ represents the true value of the i-th target frame (for * = x, y, its true center point coordinates), N represents the number of feature maps, and Aᵢ represents the target frame of the i-th feature map.
The optimization objective w* is:
w* = argmin( Σ(i=1..N) (t*ᵢ − w*ᵀ·φ(Aᵢ))² + λ‖w*‖² ) (10)
wherein λ is the regularization coefficient.
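Numerically, the cost function (9) and the regularized objective (10) amount to ridge regression over the stacked feature vectors. The sketch below assumes NumPy and an illustrative regularization weight lam, which the patent does not fix; the closed-form minimizer is included for completeness.

import numpy as np

def regression_loss(phi, t, w, lam=1.0):
    # phi: (N, D) matrix whose rows are the feature vectors phi(A_i)
    # t:   (N,)  true regression targets t*_i
    # w:   (D,)  parameters to be learned
    residual = t - phi @ w                               # t*_i - w^T phi(A_i)
    return np.sum(residual ** 2) + lam * np.sum(w ** 2)  # eq. (9) plus the penalty of eq. (10)

def solve_w(phi, t, lam=1.0):
    # Closed-form minimizer of eq. (10): (phi^T phi + lam*I)^(-1) phi^T t
    D = phi.shape[1]
    return np.linalg.solve(phi.T @ phi + lam * np.eye(D), phi.T @ t)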
(3) Gaussian blur
Meanwhile, considering the complex interference in a real battlefield environment, before loading data into the Faster-RCNN model, Gaussian blur and exposure processing of different degrees are first performed on the same picture:
G(p,q) = (1/(2πσ²))·e^(−(p²+q²)/(2σ²)) (11)
wherein p and q are the pixel point positions in each RGB channel, and σ is the exposure degree coefficient.
After this optimization, the processed photos are loaded into the Faster-RCNN model together with the original photos for training. This increases the amount of acquired target data and improves training accuracy; moreover, by processing the pictures, the method adapts well to the influence of weather conditions and mechanical factors in a real battlefield environment, greatly improving the robustness of the system.
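A sketch of this augmentation step, assuming NumPy and SciPy: scipy.ndimage.gaussian_filter realizes the kernel G(p, q) of equation (11), while the exposure gains shown are illustrative values the patent does not specify.

import numpy as np
from scipy.ndimage import gaussian_filter

def augment(image, sigmas=(1.0, 2.0), gains=(0.8, 1.2)):
    # image: H x W x 3 array with values in [0, 1]
    copies = [image]
    for s in sigmas:
        # Gaussian blur per eq. (11); sigma 0 on the channel axis blurs
        # each RGB channel independently.
        copies.append(gaussian_filter(image, sigma=(s, s, 0)))
    for g in gains:
        # Simple exposure change: scale brightness and clip to the valid range.
        copies.append(np.clip(image * g, 0.0, 1.0))
    return copies  # the processed copies are trained together with the original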
Step 2: optimizing the learning rate;
in one embodiment of the present invention, step 2 comprises the following:
when a model is trained, the training effect of the model can be influenced by the learning rate, the loss rate of training can be oscillated due to the excessive learning rate, and the too slow training speed can be caused by the too small learning rate. In order to solve the problem, the invention adopts the following treatment methods:
firstly, setting a decline rate, and reducing the original learning rate after training a designated step length so as to prevent oscillation.
Specifically, in the invention, the learning rate is reduced to 90% of its current value every ten thousand training steps. After every one hundred thousand training steps, training is interrupted and the learning rate is adjusted according to the loss rate: if the loss rate exceeds 30%, the learning rate is increased by 50%. After adjustment, training continues on the previous training data, yielding a mature model with better performance.
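In plain Python, the schedule just described reduces to two small functions (the 0.0003 initial rate is taken from the experiment section below; the rest follows the stated rule):

def learning_rate(step, base_lr=0.0003, decay=0.90, decay_every=10000):
    # Every ten thousand steps the rate drops to 90% of its current value.
    return base_lr * decay ** (step // decay_every)

def adjust_on_interrupt(lr, loss_rate):
    # At each hundred-thousand-step interruption: if the loss rate
    # exceeds 30%, raise the learning rate by 50%.
    return lr * 1.5 if loss_rate > 0.30 else lr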
Step 3: multi-resolution learning and identification;
in one embodiment of the present invention, step 3 comprises the following:
in the verification of the fast-CNN model, the lack of the training set can cause the problem of higher false recognition rate of the model in the final detection, and the real battlefield environment requirement is difficult to meet. Therefore, the invention provides an optimization method for multi-resolution learning, which comprises the following steps:
in order to recover the lost information in the image, the invention interpolates and generates a high-resolution image from a low-resolution image based on a model frame, obtains more image details and characteristics and provides the image details and characteristics for a neural network to learn.
Several classical image interpolation algorithms were compared, including the nearest-neighbor interpolation algorithm, the bilinear interpolation method, and the bicubic interpolation algorithm. The embodiment of the invention selects the bicubic interpolation algorithm, which incurs the least image quality loss in the processed image; the interpolation is expressed by the following kernel function W(m):
W(m) = (a+2)|m|³ − (a+3)|m|² + 1, for |m| ≤ 1
W(m) = a|m|³ − 5a|m|² + 8a|m| − 4a, for 1 < |m| < 2
W(m) = 0, otherwise (12)
wherein m is the independent variable and a is the adjustment value. The original image is enlarged while retaining as much of its detail as possible, so that the neural network can better extract the features of targets in the image, optimizing the quality of the training set.
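Equation (12) translates directly into code. The sketch below uses a = -0.5, a common choice of adjustment value that the patent leaves unspecified; in practice a library routine such as OpenCV's cv2.resize with interpolation=cv2.INTER_CUBIC performs the same bicubic enlargement.

def W(m, a=-0.5):
    # Bicubic interpolation kernel of eq. (12).
    m = abs(m)
    if m <= 1:
        return (a + 2) * m**3 - (a + 3) * m**2 + 1
    if m < 2:
        return a * m**3 - 5 * a * m**2 + 8 * a * m - 4 * a
    return 0.0

# Each pixel of the enlarged image is the weighted sum of its 4 x 4
# neighborhood in the source image, weighted by W(dx) * W(dy).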
In addition, during identification a secondary identification is performed: the target frame area is enlarged by interpolation, which greatly improves target recognition accuracy and reduces the false recognition rate. In subsequent experiments, the model optimized through this process better identified battlefield camouflage against a real background and showed stronger adaptability to complex environments.
Step 4: Non-maximum suppression (NMS)
In one embodiment of the present invention, step 4 comprises the following:
after the model is optimized, the recognition capability of the model is greatly improved compared with that of the initial state, but another problem is generated, namely, a plurality of recognition target frames can be detected by the same target, and a large amount of redundant information is generated, so that the auxiliary decision making under the battlefield environment is very unfavorable.
Aiming at this phenomenon, the invention adopts a linear non-maximum suppression algorithm to remove the redundant target frames generated by recognizing the same target, retaining the single frame with the best effect, thereby improving recognition accuracy and reducing the false recognition rate.
The specific implementation process comprises the following steps:
(1) Calculating the area ratio IoU (Intersection over Union) of the overlapping area of each target frame with its adjacent target frames;
(2) Comparing IoU to a threshold, changing the confidence of the adjacent target frame:
sᵢ = sᵢ, if IoU < Nt
sᵢ = sᵢ·(1 − IoU), if IoU ≥ Nt (13)
wherein sᵢ is the confidence of each target frame and Nt is the set threshold; in one embodiment of the invention, Nt is set to 0.75. After this processing, redundant identification target frames are filtered out well while identification accuracy is ensured.
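Equation (13) is the linear variant of soft non-maximum suppression. A self-contained sketch follows, assuming axis-aligned boxes given as (x1, y1, x2, y2) and the Nt = 0.75 threshold of this embodiment.

import numpy as np

def iou(a, b):
    # Intersection over Union of two (x1, y1, x2, y2) boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def linear_soft_nms(boxes, scores, N_t=0.75):
    # Eq. (13): a neighbor overlapping the current best frame by
    # IoU >= N_t keeps a confidence scaled by (1 - IoU); other
    # frames keep their confidence unchanged.
    boxes, scores = list(boxes), list(scores)
    kept = []
    while boxes:
        i = int(np.argmax(scores))
        best_box, best_score = boxes.pop(i), scores.pop(i)
        kept.append((best_box, best_score))
        for j, b in enumerate(boxes):
            o = iou(best_box, b)
            if o >= N_t:
                scores[j] *= 1.0 - o
    return kept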
In order to embody the effectiveness of the battlefield target recognition method of the present invention, verification is performed by experiments as follows.
In order to better fit a complex actual battlefield environment, short-duration target reconnaissance was carried out with a reconnaissance unmanned aerial vehicle in the experiment; the specific target identification flow is as follows.
(1) Translation, rotation, scaling, and image preprocessing are carried out on the limited sample set acquired by the unmanned aerial vehicle, and the processed images are added to the original sample set to expand its size.
(2) After the acquired information is preprocessed, the already trained Faster-RCNN model (shown in FIG. 1) and YOLO v3 model (shown in FIG. 2) are loaded for training. During training, the learning rate is adjusted in time according to the real-time training loss rate to prevent the model from falling into a local optimum or overfitting.
When the Faster-RCNN and YOLO v3 models are trained, 3 target types to be identified are set and the parameters in each convolution layer are modified accordingly. The initial learning rate is 0.0003; the learning rate is reduced to 95% of its current value every ten thousand steps, and training is interrupted once every hundred thousand steps so that the learning rate can be adjusted by observing the loss rate curve. Model parameters are exported after two hundred thousand steps of training and tested on random samples; training results are shown in FIG. 4 and FIG. 5.
(3) Image interpolation is carried out on the pictures in the training set, improving their resolution and increasing the picture information. Then, the key regions given higher weight in the primary identification are identified a second time after their resolution is enlarged by interpolation, improving accuracy and reducing false identification.
(4) Based on the weights obtained in the previous step, if the weights of two adjacent recognition areas are both higher than a set threshold, their overlap rate is checked; a sufficiently high overlap rate means the two areas are considered one target. This reduces the false recognitions caused both by multiple recognitions of a single target and by two targets being too close together.
60 photographs were examined under each model; three sets of tests were performed in total, and the algorithm comparison results are shown in Table 1.
Table 1 algorithm comparison
(The comparison data of Table 1 are provided as an image in the original publication.)
As can be seen from observation and analysis of the experimental results, under the same training conditions and with the twice-detection procedure applied to each model, the YOLO v3 model has a higher recognition speed owing to its structural advantages, but its accuracy is slightly worse than that of the other two models. After the Transfer-Faster-RCNN optimizes the original model, the system's recognition accuracy is greatly improved while the original recognition speed is maintained, so the method is better suited to complex and changeable battlefield environments.

Claims (3)

1. The artificial intelligence-based method for battlefield target identification is characterized by comprising the following steps of:
step 1: image preprocessing optimization, comprising:
(1) Generating a target frame, specifically:
a target frame is generated through the Transfer-Faster-RCNN model; before a target is identified, a target frame is first generated and represented by a four-dimensional vector (x, y, w, h), wherein x is the abscissa of the target frame center point, y is the ordinate of the target frame center point, w is the width of the target frame, and h is the height of the target frame; let
A=(Ax,Ay,Aw,Ah) (1)
G=(Gx,Gy,Gw,Gh) (2)
wherein A is an original target frame data set, and G is a real target frame data set;
such that the input original window is mapped to a regression window G 'closer to the real box G, G' represents the translation transformation:
G′x=Ax+Aw·dx(A) (3)
G′y=Ay+Ah·dy(A) (4)
wherein dx(A) and dy(A) represent the translation amounts, G′x represents the abscissa of the translated center point, and G′y represents the ordinate of the translated center point;
(2) Image transformation optimization, specifically:
finding a transformation F of an image causes
F(Ax,Ay,Aw,Ah)=(G′x,G′y,G′w,G′h) (5)
The calculation of F is achieved by translation and scaling:
G′w=Aw·dw(A) (6)
G′h=Ah·dh(A) (7)
where dw(A) and dh(A) represent the scaling amounts, G′w represents the scaled target frame width, and G′h represents the scaled target frame height;
construction of objective functions
d*(A) = w*ᵀ·φ(A) (8)
wherein φ(A) is the feature vector composed of the corresponding feature maps, w* is the parameter vector to be learned, and d*(A) is the predicted value; in order to minimize the difference between the predicted values dx(A), dy(A), dw(A), dh(A) and the true values tx, ty, tw, th, the cost function Loss is as follows:
Loss = Σ(i=1..N) (t*ᵢ − w*ᵀ·φ(Aᵢ))² (9)
wherein t*ᵢ represents the true value of the i-th target frame, N represents the number of feature maps, and Aᵢ represents the target frame of the i-th feature map;
the function optimization objective w is:
Figure FDA0004201704050000024
(3) Gaussian blur, in particular:
before loading data into the fast-CNN model, the same picture is first subjected to different degrees of Gaussian blur and exposure:
Figure FDA0004201704050000025
wherein, p and q are pixel point positions in each RGB channel;
step 2: optimizing the learning rate;
setting a decline rate, and reducing the original learning rate after training a designated step length so as to prevent oscillation;
step 3: multi-resolution learning and identification;
step 4: non-maximum suppression.
2. A method for battlefield target recognition based on artificial intelligence as recited in claim 1, wherein: in step 3, a bicubic interpolation algorithm is selected, and the interpolation is expressed by the following kernel function W(m):
W(m) = (a+2)|m|³ − (a+3)|m|² + 1, for |m| ≤ 1
W(m) = a|m|³ − 5a|m|² + 8a|m| − 4a, for 1 < |m| < 2
W(m) = 0, otherwise (12)
wherein m is the independent variable and a is the adjustment value.
3. The method for battlefield target recognition based on artificial intelligence of claim 1, wherein said step 4 comprises the steps of:
(1) Calculating the area ratio IoU of the overlapping area of each target frame with its adjacent target frames;
(2) Comparing IoU with a threshold and changing the confidence of the adjacent target frame:
sᵢ = sᵢ, if IoU < Nt
sᵢ = sᵢ·(1 − IoU), if IoU ≥ Nt (13)
wherein sᵢ is the confidence of each target frame and Nt is the set threshold.
CN202010231438.6A 2020-03-27 2020-03-27 Artificial intelligence-based method for battlefield target recognition Active CN111414997B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010231438.6A CN111414997B (en) 2020-03-27 2020-03-27 Artificial intelligence-based method for battlefield target recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010231438.6A CN111414997B (en) 2020-03-27 2020-03-27 Artificial intelligence-based method for battlefield target recognition

Publications (2)

Publication Number Publication Date
CN111414997A CN111414997A (en) 2020-07-14
CN111414997B true CN111414997B (en) 2023-06-06

Family

ID=71491576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010231438.6A Active CN111414997B (en) 2020-03-27 2020-03-27 Artificial intelligence-based method for battlefield target recognition

Country Status (1)

Country Link
CN (1) CN111414997B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465057B (en) * 2020-12-08 2023-05-12 中国人民解放军空军工程大学 Target detection and identification method based on deep convolutional neural network
CN112633168B (en) * 2020-12-23 2023-10-31 长沙中联重科环境产业有限公司 Garbage truck and method and device for identifying garbage can overturning action of garbage truck

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018015414A1 (en) * 2016-07-21 2018-01-25 Siemens Healthcare Gmbh Method and system for artificial intelligence based medical image segmentation
ES2967691T3 (en) * 2017-08-08 2024-05-03 Reald Spark Llc Fitting a digital representation of a head region
CN108399362B (en) * 2018-01-24 2022-01-07 中山大学 Rapid pedestrian detection method and device
CN109522938A (en) * 2018-10-26 2019-03-26 华南理工大学 The recognition methods of target in a kind of image based on deep learning

Also Published As

Publication number Publication date
CN111414997A (en) 2020-07-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant