CN117292283A - Target identification method based on unmanned aerial vehicle - Google Patents

Target identification method based on unmanned aerial vehicle

Info

Publication number
CN117292283A
Authority
CN
China
Prior art keywords
parameter
search
model
individuals
target
Prior art date
Legal status
Granted
Application number
CN202311578358.8A
Other languages
Chinese (zh)
Other versions
CN117292283B (en)
Inventors
李国庆 (Li Guoqing)
刘臣 (Liu Chen)
刘兵 (Liu Bing)
陈文枫 (Chen Wenfeng)
Current Assignee
Chengdu Qinglong Aviation Technology Co., Ltd.
Original Assignee
Chengdu Qinglong Aviation Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Chengdu Qinglong Aviation Technology Co., Ltd.
Priority to CN202311578358.8A
Publication of CN117292283A
Application granted
Publication of CN117292283B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/10 — Terrestrial scenes
    • G06V20/17 — Terrestrial scenes taken from planes or by drones
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/20 — Image preprocessing
    • G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/70 — Recognition or understanding using pattern recognition or machine learning
    • G06V10/764 — Recognition using classification, e.g. of video objects
    • G06V10/77 — Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82 — Recognition using neural networks
    • G06V10/40 — Extraction of image or video features
    • G06V10/46 — Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06V10/467 — Encoded features or binary features, e.g. local binary patterns [LBP]
    • G06V10/50 — Feature extraction by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
    • G06V10/54 — Extraction of image or video features relating to texture
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 — Road transport of goods or passengers
    • Y02T10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T10/40 — Engine management systems

Abstract

The invention discloses a target recognition method based on an unmanned aerial vehicle, belonging to the technical field of target recognition and data processing. A target recognition model for the unmanned aerial vehicle is built from deep learning models and realizes recognition of a target region, feature extraction from the target region, and recognition of the extracted features, thereby achieving target recognition.

Description

Target identification method based on unmanned aerial vehicle
Technical Field
The invention belongs to the technical field of target identification and data processing, and particularly relates to a target identification method based on an unmanned aerial vehicle.
Background
Unmanned aerial vehicle target recognition and tracking is built on target detection: the unmanned aerial vehicle collects images containing targets, and recognizing the collected images yields the target recognition result. Traditional target recognition extracts a certain number of hand-crafted features from an image, expresses the image with a mathematical model, and then recognizes the image with a classifier. With the development of artificial intelligence, deep learning has made breakthroughs and achieved great success in speech recognition, natural language processing, computer vision, video analysis, multimedia, and other fields. Deep learning is also gradually being applied to unmanned aerial vehicle target recognition. A deep-learning-based target recognition method generally proceeds as follows: an image is input into a neural network; the loss function is minimized using forward propagation, error back-propagation, and related deep learning algorithms; after the weights are updated, a good recognition model is obtained; and the model is then used to recognize new images. For example, the prior art automatically learns features from image data via convolutional neural network models, which can be quickly retrained on new training data to learn new feature representations. A general pattern recognition system comprises two important parts, feature extraction and a classifier. In traditional methods the two are designed separately, whereas under a neural network framework they are jointly optimized with feedback, so that the performance of their cooperation can be exploited as much as possible.
However, the prior art cannot learn the characteristics in image data well, so the finally trained neural network performs target recognition poorly, and the unmanned aerial vehicle cannot accurately carry out its target recognition task.
Disclosure of Invention
The invention provides a target identification method based on an unmanned aerial vehicle, which is used for solving the problems existing in the prior art.
An unmanned aerial vehicle-based target recognition method comprises the following steps:
acquiring training data for target recognition, wherein the training data comprises training image data, candidate region positions of the training image data and target classification labels of each candidate region;
constructing a candidate region extraction sub-model, and training the candidate region extraction sub-model by adopting a multi-layer hybrid search algorithm according to training image data and the candidate region position of the training image data so as to obtain a trained candidate region extraction sub-model;
constructing a feature extraction sub-model, wherein the feature extraction sub-model is used for extracting candidate areas of training image data according to the positions of the candidate areas of the training image data, scaling the candidate areas of the training image data to a uniform size to obtain processed candidate areas, and carrying out feature extraction on the processed candidate areas to obtain target data features of the candidate areas;
constructing a target recognition sub-model, and training the target recognition sub-model by adopting a multi-layer hybrid search algorithm according to the target data characteristics of the candidate region and the target classification labels of the candidate region, so as to obtain a trained target recognition sub-model;
constructing an unmanned aerial vehicle target recognition model according to the trained candidate region extraction sub-model, the feature extraction sub-model and the trained target recognition sub-model;
and deploying the unmanned aerial vehicle target recognition model on the unmanned aerial vehicle or on a remote terminal wirelessly connected with the unmanned aerial vehicle, and performing target recognition through the deployed unmanned aerial vehicle target recognition model when the unmanned aerial vehicle executes a target recognition task, so as to acquire a target recognition result.
Further, constructing a candidate region extraction sub-model, comprising:
and constructing an R-CNN, Fast R-CNN, or Faster R-CNN deep learning model, and taking the constructed model as the candidate region extraction sub-model.
Further, training the candidate region extraction submodel by using a multi-layer hybrid search algorithm according to the training image data and the candidate region position of the training image data to obtain a trained candidate region extraction submodel, including:
The training image data is used as input data of a candidate region extraction sub-model, actual output data of the candidate region extraction sub-model is obtained, the candidate region position of the training image data is used as expected output data of the candidate region extraction sub-model, and the fitness value of the candidate region extraction sub-model is obtained according to the actual output data and the expected output data of the candidate region extraction sub-model;
and training the candidate region extraction submodel by adopting a multi-layer hybrid search algorithm according to the fitness value of the candidate region extraction submodel so as to obtain the trained candidate region extraction submodel.
Further, a feature extraction sub-model is constructed, and the feature extraction sub-model is used for extracting a candidate region of training image data according to the position of the candidate region of the training image data, scaling the candidate region of the training image data to a uniform size to obtain a processed candidate region, and performing feature extraction on the processed candidate region to obtain target data features of the candidate region, and the feature extraction sub-model comprises the following steps:
constructing a candidate region processing layer, wherein the candidate region processing layer is used for extracting a candidate region of training image data according to the candidate region position of the training image data, and scaling the candidate region of the training image data to a uniform size to obtain a processed candidate region;
And constructing a feature extraction layer, wherein the feature extraction layer is provided with an HOG algorithm, so that feature extraction is carried out on the processed candidate region through the HOG algorithm, and the target data features of the candidate region are obtained.
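A minimal sketch of what the candidate region processing layer does: crop a region by its position and rescale it to a uniform size. The nearest-neighbour resize is an illustrative stand-in; the text does not specify the interpolation method.

```python
import numpy as np

def crop_and_resize(image, box, out_size=(64, 64)):
    """Crop a candidate region given as (x0, y0, x1, y1) pixel coordinates
    and rescale it to a uniform size with nearest-neighbour sampling."""
    x0, y0, x1, y1 = box
    region = image[y0:y1, x0:x1]
    h, w = region.shape[:2]
    out_h, out_w = out_size
    # Index maps from output pixels back to source pixels.
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return region[rows[:, None], cols]

img = np.arange(100 * 100).reshape(100, 100).astype(np.float32)
patch = crop_and_resize(img, (10, 20, 50, 60), out_size=(64, 64))
print(patch.shape)  # (64, 64)
```

Every processed candidate region then has the same shape, so the downstream HOG features, and hence the classifier inputs, have a fixed length.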
Further, constructing a target recognition sub-model comprises: constructing an SVM (support vector machine) model, and adopting the SVM model as the target recognition sub-model.
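As a hedged illustration of the classifier this sub-model wraps, the sketch below trains a tiny linear SVM by hinge-loss subgradient descent on toy data. The patent does not specify the kernel, solver, or hyperparameters; all values here are placeholders.

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, reg=0.01, epochs=500):
    """Tiny hinge-loss (linear SVM) trainer by subgradient descent.
    Labels y must be in {-1, +1}. Illustrative only."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:
                # Hinge-loss subgradient step for a margin-violating sample.
                w += lr * (y[i] * X[i] - reg * w)
                b += lr * y[i]
            else:
                w -= lr * reg * w  # regularization only
    return w, b

# Linearly separable toy data: class +1 lies above the line x0 + x1 = 1.
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.2]])
y = np.array([-1, -1, 1, 1])
w, b = train_linear_svm(X, y)
preds = np.sign(X @ w + b)
print(preds)
```

In the patent's pipeline, `X` would hold the per-region feature vectors produced by the feature extraction sub-model and `y` the target classification labels.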
Further, training the target recognition sub-model by adopting a multi-layer hybrid search algorithm according to the target data characteristics of the candidate region and the target classification labels of the candidate region, to obtain a trained target recognition sub-model, comprises the following steps:
taking the target data characteristics of the candidate region as input data of the target recognition sub-model, acquiring actual output data of the target recognition sub-model, taking the target classification label of the candidate region as expected output data of the target recognition sub-model, and acquiring the fitness value of the target recognition sub-model according to the actual output data and the expected output data of the target recognition sub-model;
and training the target recognition sub-model by adopting a multi-layer hybrid search algorithm according to the fitness value of the target recognition sub-model so as to obtain the trained target recognition sub-model.
Further, the unmanned aerial vehicle target recognition model is deployed on the unmanned aerial vehicle or on a remote terminal wirelessly connected with the unmanned aerial vehicle, when the unmanned aerial vehicle executes a target recognition task, target recognition is performed through the deployed unmanned aerial vehicle target recognition model, so as to obtain a target recognition result, and the method comprises the following steps:
deploying the unmanned aerial vehicle target recognition model on the unmanned aerial vehicle or a remote terminal in wireless connection with the unmanned aerial vehicle;
when the unmanned aerial vehicle executes the target recognition task, image data are collected by the unmanned aerial vehicle in real time, and the deployed unmanned aerial vehicle target recognition model is used to recognize the real-time image data, thereby determining the target in the real-time image data and obtaining the target recognition result.
Further, the multi-layer hybrid search algorithm comprises:
randomly generating parameters between the upper parameter limit and the lower parameter limit of the model to be trained, and forming vectors by the parameters of the model to be trained to obtain parameter individuals corresponding to the model to be trained;
repeatedly obtaining parameter individuals corresponding to the model to be trained, obtaining a plurality of parameter individuals, obtaining the fitness value corresponding to each parameter individual, and obtaining a global optimal individual according to the fitness value corresponding to each parameter individual;
determining a search direction and a search step length, searching the parameter individuals in the solution space according to the search direction and the search step length, acquiring the parameter individuals after basic search, and updating the global optimal individual according to the parameter individuals after basic search;
carrying out local fine search on the parameter individuals after basic search to obtain the parameter individuals after local search, and updating the globally optimal individuals according to the parameter individuals after local search;
carrying out global coarse search on the parameter individuals after the local search by adopting a probability updating strategy, obtaining the parameter individuals after the global search, and updating the global optimal individuals according to the parameter individuals after the global search;
eliminating and updating the parameter individuals after global searching by adopting an elite optimization strategy to acquire final parameter individuals of the round of training, and updating global optimal individuals according to the final parameter individuals;
judging whether the current training times are greater than or equal to the maximum training times, if so, outputting a global optimal individual to obtain model parameters of a model to be trained, otherwise, returning to the basic searching step.
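The steps above can be sketched as one loop. Everything numeric here is an assumption the text does not pin down: a toy fitness function, a step schedule of step0/t, a symmetric local-search offset, and a linearly annealed coarse-search probability.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(ind):
    # Stand-in fitness peaked at the all-threes vector; in the patent the
    # fitness comes from a sub-model's actual vs. expected output.
    return -np.sum((ind - 3.0) ** 2)

DIM, POP, T_MAX = 4, 20, 60
LOW, HIGH = -10.0, 10.0
N_MAX, RADIUS, MU = 5, 0.5, 0.7    # local-search settings (assumed)
P_MAX, P_MIN = 0.9, 0.1            # global coarse-search probabilities
STEP0 = 1.0                        # preset initial search step

pop = rng.uniform(LOW, HIGH, (POP, DIM))   # random parameter individuals
best = max(pop, key=fitness).copy()        # global optimal individual

for t in range(1, T_MAX + 1):
    # 1) basic search along a normalized random direction, shrinking step
    b = rng.uniform(0, 1, (POP, DIM))
    d = b / np.linalg.norm(b, axis=1, keepdims=True)
    pop = pop + (STEP0 / t) * d
    # 2) local fine search inside a shrinking neighbourhood
    for i in range(POP):
        c = rng.uniform(0, 1, DIM)
        for n in range(N_MAX):
            trial = pop[i] + RADIUS * (2 * c - 1)
            if fitness(trial) > fitness(pop[i]):
                pop[i] = trial
            c = MU * c
    # 3) global coarse search: random restarts with decaying probability
    p = P_MAX - (P_MAX - P_MIN) * t / T_MAX
    for i in range(POP):
        if rng.uniform() < p:
            pop[i] = rng.uniform(LOW, HIGH, DIM)
    # 4) elite strategy: the best half replaces the worst half
    order = np.argsort([fitness(ind) for ind in pop])[::-1]
    half = POP // 2
    pop[order[half:]] = pop[order[:half]]
    # track the global optimal individual across rounds
    cand = max(pop, key=fitness)
    if fitness(cand) > fitness(best):
        best = cand.copy()

print(best)
```

When training a sub-model, `fitness` would instead evaluate the sub-model parameterized by `ind` on the training data, and `best` would be returned as the trained model parameters.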
Further, determining a search direction and a search step length, searching the parameter individuals in the solution space according to the search direction and the search step length, obtaining parameter individuals after basic search, and updating global optimal individuals according to the parameter individuals after basic search, wherein the method comprises the following steps:
randomly generating, during the first training round, a first basic parameter vector in (0, 1), the parameter dimension of the first basic parameter vector being the same as the parameter dimension of a parameter individual;
according to the first basic parameter vector, obtaining the search direction as:
d_i^t = (b_i^t)^T / ||b_i^t||
where d_i^t denotes the search direction corresponding to the i-th parameter individual in the t-th training round, i = 1, 2, ..., I_max, I_max denotes the total number of parameter individuals, b_i^t denotes the first basic parameter vector corresponding to the i-th parameter individual in the t-th training round, and T denotes the transpose;
determining the search step length as:
step_i^t = step_i^0 / t
where step_i^0 denotes the preset initial search step of the i-th parameter individual, and step_i^t denotes the search step corresponding to the i-th parameter individual in the t-th training round;
in every round other than the first, judging whether the fitness value increased after searching with the step determined in the (t-1)-th training round; if so, taking the step determined in the (t-1)-th round as the search step of the t-th round, otherwise re-determining the search step;
searching the parameter individuals in the solution space according to the search step, the parameter individual after basic search being:
x̃_i^t = x_i^t + step_i^t * d_i^t
where x_i^t denotes the i-th parameter individual in the t-th training round, and x̃_i^t denotes the parameter individual after basic search;
determining a first target optimal individual among the parameter individuals after basic search, and taking the first target optimal individual as the new global optimal individual; the first target optimal individual denotes the individual with the largest fitness value among the parameter individuals after basic search;
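The basic-search step can be sketched as follows. The normalized random direction matches the description, while the shrinking schedule step0/t is an assumed stand-in for the patent's preset step-size rule.

```python
import numpy as np

rng = np.random.default_rng(7)

def basic_search(pop, t, step0=1.0):
    """One round of basic search: each parameter individual moves along a
    normalized random direction by a step that shrinks with the training
    round t (the schedule step0/t is an assumption)."""
    b = rng.uniform(0, 1, pop.shape)                  # first basic parameter vectors
    d = b / np.linalg.norm(b, axis=1, keepdims=True)  # unit search directions
    return pop + (step0 / t) * d

pop = np.zeros((3, 2))
moved = basic_search(pop, t=1)
print(np.linalg.norm(moved - pop, axis=1))  # each individual moved by exactly step0
```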
carrying out local fine search on the parameter individuals after basic search, obtaining the parameter individuals after local search, and updating the globally optimal individuals according to the parameter individuals after local search, wherein the method comprises the following steps:
setting the search duration as N_max and a search counter n = 1;
randomly generating, for the i-th parameter individual, a second basic parameter vector in (0, 1);
according to the second basic parameter vector, carrying out local fine search on the parameter individual after basic search, the parameter individual after local search being:
x̂_i^n = x̃_i^t + R * (2 c_i^n - 1)
where x̂_i^n denotes the parameter individual after local search, R denotes a preset search radius, c_i^n denotes the second basic parameter vector corresponding to the i-th parameter individual in the n-th local fine search, and when n = 1 the elements of the second basic parameter vector are randomly generated data in (0, 1);
judging whether the value of the search counter n is greater than or equal to the search duration N_max; if so, outputting the parameter individual after local search, otherwise updating the second basic parameter vector as c_i^{n+1} = μ * c_i^n and carrying out local fine search on the current parameter individual according to the updated second basic parameter vector, until the value of the search counter n is greater than or equal to the search duration N_max and the parameter individual after local search is output; where μ denotes the adjustment coefficient;
determining a second target optimal individual among the parameter individuals after local search, and taking the second target optimal individual as the new global optimal individual; the second target optimal individual denotes the individual with the largest fitness value among the parameter individuals after local search.
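A sketch of the local fine search around one parameter individual. The symmetric offset R * (2c - 1) and greedy acceptance of improvements are assumptions about the formula, which is not fully legible in the text; the shrinking of the second basic parameter vector by the adjustment coefficient follows the description.

```python
import numpy as np

def local_fine_search(x, fitness, radius=0.5, mu=0.7, n_max=8, seed=0):
    """Greedy local refinement: sample within a radius scaled by a second
    basic parameter vector that is shrunk by the adjustment coefficient mu
    after each search round."""
    rng = np.random.default_rng(seed)
    c = rng.uniform(0, 1, x.shape)           # second basic parameter vector
    best = x.copy()
    for _ in range(n_max):
        trial = best + radius * (2 * c - 1)  # symmetric offset (assumed)
        if fitness(trial) > fitness(best):   # keep only improvements
            best = trial
        c = mu * c                           # shrink the neighbourhood
    return best

f = lambda v: -np.sum((v - 1.0) ** 2)        # toy fitness peaked at 1.0
refined = local_fine_search(np.zeros(3), f)
print(f(refined) >= f(np.zeros(3)))  # True: never worse than the start
```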
Further, performing global coarse search on the parameter individuals after the local search by adopting a probability updating strategy, obtaining the parameter individuals after the global search, and updating the global optimal individuals according to the parameter individuals after the global search, wherein the method comprises the following steps:
the global coarse search probability is determined as:
p^t = p_max - (p_max - p_min) * t / T_max
where p_max denotes a preset maximum global coarse search probability, p_min denotes a preset minimum global coarse search probability, p^t denotes the global coarse search probability during the t-th training round, and T_max denotes the maximum number of training rounds;
randomly generating a random number in (0, 1) for each parameter individual after local search, and judging whether the global coarse search probability is larger than the random number; if so, randomly generating a target parameter individual and replacing the parameter individual after local search with the target parameter individual, otherwise making no replacement;
performing global search on all the parameter individuals after the local search to obtain the parameter individuals after the global search;
determining a third target optimal individual in the parameter individuals after global searching, and taking the third target optimal individual as a new global optimal individual; the third target optimal individual represents an individual with the largest fitness value in parameter individuals after global searching;
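The probability-driven coarse search can be sketched as below. The linear annealing from p_max to p_min over the training rounds is the assumed schedule; the per-individual replacement test follows the description.

```python
import numpy as np

def coarse_search_probability(t, t_max, p_max=0.9, p_min=0.1):
    """Anneal the global coarse-search probability from p_max down to
    p_min over the training rounds (linear schedule assumed)."""
    return p_max - (p_max - p_min) * t / t_max

def global_coarse_search(pop, t, t_max, low, high, rng):
    p = coarse_search_probability(t, t_max)
    # Replace each individual with a fresh random one with probability p.
    mask = rng.uniform(size=len(pop)) < p
    pop = pop.copy()
    pop[mask] = rng.uniform(low, high, (mask.sum(), pop.shape[1]))
    return pop

rng = np.random.default_rng(1)
pop = np.zeros((1000, 2))
early = global_coarse_search(pop, t=1, t_max=100, low=-1, high=1, rng=rng)
late = global_coarse_search(pop, t=100, t_max=100, low=-1, high=1, rng=rng)
# Early rounds replace far more individuals than late rounds.
print((early != 0).any(axis=1).mean(), (late != 0).any(axis=1).mean())
```

This matches the intent stated in the text: heavy exploration early in training, with the population increasingly left to refine locally as training nears the maximum round count.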
eliminating and updating the parameter individuals after global search by adopting an elite optimization strategy to obtain final parameter individuals of the round of training, and updating global optimal individuals according to the final parameter individuals, wherein the method comprises the following steps of:
copying the half of the parameter individuals with the largest fitness values to obtain copied parameter individuals, and replacing the half of the parameter individuals with the smallest fitness values with the copied parameter individuals in one-to-one correspondence, to obtain the final parameter individuals of this round of training;
Determining a fourth target optimal individual in the final parameter individuals, and taking the fourth target optimal individual as a new global optimal individual; the fourth target optimal individual represents the individual with the largest fitness value in the final parameter individuals.
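The elite elimination step maps directly to code: clone the better half of the population over the worse half.

```python
import numpy as np

def elite_update(pop, fitness_values):
    """Elite elimination: the half of the parameter individuals with the
    highest fitness is copied over the half with the lowest fitness,
    one-to-one, at the end of each training round."""
    order = np.argsort(fitness_values)[::-1]   # best first
    half = len(pop) // 2
    new_pop = pop.copy()
    new_pop[order[half:]] = pop[order[:half]]  # clones replace the worst half
    return new_pop

pop = np.array([[1.0], [2.0], [3.0], [4.0]])
fit = np.array([10.0, 20.0, 30.0, 40.0])       # higher is better
print(elite_update(pop, fit))
```

After the update, every surviving individual came from the top half, which is what lets the global optimal individual only improve from round to round.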
According to the unmanned aerial vehicle-based target recognition method, a target recognition model for the unmanned aerial vehicle is built from deep learning models and realizes recognition of a target region, feature extraction from the target region, and recognition of the extracted features, thereby achieving target recognition. Compared with the prior-art approach of recognizing images directly, recognizing the extracted features gives the unmanned aerial vehicle target recognition model a better recognition effect, and solves the prior-art problem that characteristics in image data cannot be learned well, so that the finally trained neural network cannot perform target recognition well.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart of a target recognition method based on an unmanned aerial vehicle according to an embodiment of the present invention.
Specific embodiments of the present invention have been shown by way of the above drawings and will be described in more detail below. The drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but rather to illustrate the inventive concepts to those skilled in the art by reference to the specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the accompanying claims.
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
As shown in fig. 1, an unmanned aerial vehicle-based target recognition method includes:
s101, training data for target recognition is acquired, wherein the training data comprises training image data, candidate region positions of the training image data and target classification labels of each candidate region.
The training data for target recognition can be data stored in advance in a database, or can be data input by a user through man-machine interaction. In order to ensure the training effect of the unmanned aerial vehicle target recognition model, the quantity of training data should be greater than a preset quantity threshold.
S102, constructing a candidate region extraction sub-model, and training the candidate region extraction sub-model by adopting a multi-layer hybrid search algorithm according to training image data and the candidate region position of the training image data so as to obtain a trained candidate region extraction sub-model.
The candidate region extraction sub-model is mainly used for identifying the candidate region where the target is located, so that the target in the candidate region can be conveniently identified by the subsequent target identification sub-model. In order to improve the recognition effect of the candidate region extraction sub-model and the target recognition sub-model, the embodiment of the invention provides a multi-layer hybrid search algorithm, which can better optimize the candidate region extraction sub-model and the target recognition sub-model compared with the existing training method, can effectively avoid the problems of sinking local optimum and gradient disappearance in the training process, and enables the candidate region extraction sub-model and the target recognition sub-model to better learn the characteristics in data.
And S103, constructing a feature extraction sub-model, wherein the feature extraction sub-model is used for extracting candidate areas of training image data according to the positions of the candidate areas of the training image data, scaling the candidate areas of the training image data to a uniform size to obtain processed candidate areas, and carrying out feature extraction on the processed candidate areas to obtain target data features of the candidate areas.
Scaling the candidate regions of the training image data to a uniform size may enable the target recognition sub-model to better recognize the target. The feature extraction of the candidate region after the processing may be: the candidate region after processing is subjected to feature extraction by HOG (Histogram of Oriented Gradient) algorithm, PHOG (pyramid histogram of oriented gradient, tower-type directional gradient histogram) algorithm, or LBP (Local Binary Pattern ) algorithm.
S104, constructing a target recognition sub-model, and training the target recognition sub-model by adopting a multi-layer hybrid search algorithm according to the target data characteristics of the candidate region and the target classification labels of the candidate region so as to obtain the trained target recognition sub-model.
The main function of the target recognition sub-model is to classify the target data characteristics of the candidate region, so that the classification of targets in the candidate region is realized. In order to ensure the recognition effect of the target recognition sub-model, a multi-layer hybrid search algorithm is adopted to train the target recognition sub-model.
S105, constructing an unmanned aerial vehicle target recognition model according to the trained candidate region extraction sub-model, the feature extraction sub-model and the trained target recognition sub-model.
After the candidate region extraction sub-model extracts the candidate region of the input image, the extracted candidate region is transmitted to the feature extraction sub-model for feature extraction, and the feature extraction sub-model transmits the extracted feature to the target recognition sub-model for target recognition, so that the target recognition of the unmanned aerial vehicle can be realized.
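The composition of the three sub-models can be sketched with hypothetical stand-ins; the function names, the placeholder feature (mean intensity), and the threshold classifier are illustrative, not taken from the patent.

```python
import numpy as np

def extract_candidate_regions(image):
    """Candidate region extraction sub-model: returns (x0, y0, x1, y1) boxes."""
    return [(0, 0, 8, 8), (8, 8, 16, 16)]

def extract_features(image, box):
    """Feature extraction sub-model: crop the region, then describe it
    (mean intensity here as a placeholder feature)."""
    x0, y0, x1, y1 = box
    region = image[y0:y1, x0:x1]
    return np.array([region.mean()])

def classify(features):
    """Target recognition sub-model: map features to a class label."""
    return "target" if features[0] > 0.5 else "background"

def uav_target_recognition(image):
    # Chain the three sub-models as the composed model does:
    # regions -> per-region features -> per-region classification.
    results = []
    for box in extract_candidate_regions(image):
        feats = extract_features(image, box)
        results.append((box, classify(feats)))
    return results

image = np.zeros((16, 16))
image[8:, 8:] = 1.0
print(uav_target_recognition(image))
```

Whether this composed function runs onboard or on a remote terminal is exactly the deployment choice discussed in S106.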
And S106, deploying the unmanned aerial vehicle target recognition model on the unmanned aerial vehicle or on a remote terminal connected with the unmanned aerial vehicle in a wireless mode, and performing target recognition through the deployed unmanned aerial vehicle target recognition model when the unmanned aerial vehicle executes a target recognition task so as to acquire a target recognition result.
When the unmanned aerial vehicle target recognition model is deployed on the unmanned aerial vehicle itself, the unmanned aerial vehicle must have strong data processing capability and its power consumption increases, but little communication bandwidth is occupied. When the model is deployed on the remote terminal, the network requirements on the unmanned aerial vehicle are higher and more bandwidth is occupied, but the demands on the unmanned aerial vehicle's data processing capability are small. The user can therefore deploy the unmanned aerial vehicle target recognition model according to actual requirements.
Alternatively, the remote terminal may be a server side, a computer terminal, or the like, which has data processing capability and data communication capability.
According to the unmanned aerial vehicle-based target recognition method, the target recognition model of the unmanned aerial vehicle is built from deep learning models, realizing extraction of candidate regions, feature extraction of those regions, and recognition of the extracted features, thereby achieving target recognition. Compared with the prior-art approach of recognizing images directly, recognizing the extracted features gives the unmanned aerial vehicle target recognition model a better recognition effect, and solves the problem that the prior art cannot learn the characteristics in image data well, so that the finally trained neural network cannot perform target recognition well.
In this embodiment, constructing the candidate region extraction submodel includes:
an RCNN (Region-based Convolutional Neural Network) deep learning model, a Fast RCNN (Fast Region-based Convolutional Neural Network) deep learning model or a Faster RCNN (Faster Region-based Convolutional Neural Network) deep learning model is constructed, and the constructed RCNN, Fast RCNN or Faster RCNN deep learning model is used as the candidate region extraction sub-model. It should be noted that, in addition to the target region detection models described above, other deep learning models may also be used to construct the candidate region extraction sub-model.
In this embodiment, according to training image data and the candidate region position of the training image data, a multi-layer hybrid search algorithm is used to train the candidate region extraction sub-model to obtain a trained candidate region extraction sub-model, including:
the training image data is used as input data of the candidate region extraction sub-model, actual output data of the candidate region extraction sub-model is obtained, the candidate region position of the training image data is used as expected output data of the candidate region extraction sub-model, and the fitness value of the candidate region extraction sub-model is obtained according to the actual output data and the expected output data of the candidate region extraction sub-model.
And training the candidate region extraction submodel by adopting a multi-layer hybrid search algorithm according to the fitness value of the candidate region extraction submodel so as to obtain the trained candidate region extraction submodel.
In order to simplify the data processing steps, the present embodiment may obtain an error function value from the expected output data and the actual output data of the candidate region extraction sub-model, and employ 1/(error function value + 0.0001) as the fitness value. The methods for obtaining the error function value are relatively conventional (such as the root mean square error), and many exist in the prior art, so this embodiment does not describe them further.
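As a sketch, the fitness computation described above could look like the following, assuming the root mean square error as the error function (the function name and the NumPy-based formulation are illustrative, not the patent's code):

```python
import numpy as np

def fitness_value(expected, actual, eps=1e-4):
    """Fitness of a candidate parameter set: the reciprocal of the
    root-mean-square error, offset by the small constant 0.0001 so the
    value stays finite when the error vanishes."""
    rmse = np.sqrt(np.mean((np.asarray(expected, dtype=float)
                            - np.asarray(actual, dtype=float)) ** 2))
    return 1.0 / (rmse + eps)
```

A smaller error yields a larger fitness; identical expected and actual outputs give the maximum value of roughly 10000.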
In this embodiment, a feature extraction sub-model is constructed, where the feature extraction sub-model is configured to extract a candidate region of training image data according to a candidate region position of the training image data, scale the candidate region of the training image data to a uniform size, obtain a processed candidate region, and perform feature extraction on the processed candidate region to obtain a target data feature of the candidate region, and includes: and constructing a candidate region processing layer, wherein the candidate region processing layer is used for extracting the candidate region of the training image data according to the candidate region position of the training image data, and scaling the candidate region of the training image data to a uniform size to obtain the processed candidate region. And constructing a feature extraction layer, wherein the feature extraction layer is provided with an HOG algorithm, so that feature extraction is carried out on the processed candidate region through the HOG algorithm, and the target data features of the candidate region are obtained.
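The feature extraction layer's HOG step can be illustrated with a simplified, NumPy-only sketch (a production system would more likely use a library implementation such as scikit-image's `hog`; the cell size of 8 pixels and 9 orientation bins are illustrative assumptions, not values from this embodiment):

```python
import numpy as np

def hog_features(region, cell=8, bins=9):
    """Simplified HOG: per-pixel gradient magnitude and unsigned
    orientation, a weighted orientation histogram per cell, and the
    normalised cell histograms concatenated into one feature vector."""
    gy, gx = np.gradient(region.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation in [0, 180)
    h, w = region.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0.0, 180.0), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))  # per-cell normalisation
    return np.concatenate(feats)
```

For a 16x16 processed candidate region this yields 2x2 cells of 9 bins each, i.e. a 36-dimensional target data feature.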
In this embodiment, constructing the target recognition sub-model includes: constructing an SVM (Support Vector Machine) model, and constructing the target recognition sub-model from the SVM model. It should be noted that, in addition to the SVM model, other image recognition models may also be used to construct the target recognition sub-model.
In this embodiment, a target recognition sub-model is constructed, and training is performed on the target recognition sub-model by using a multi-layer hybrid search algorithm according to target data features of a candidate region and target classification labels of the candidate region, so as to obtain a trained target recognition sub-model, including: and taking the target data characteristics of the candidate region as input data of the target recognition sub-model, acquiring actual output data of the target recognition sub-model, taking the target classification labels of the candidate region as expected output data of the target recognition sub-model, and acquiring the fitness value of the target recognition sub-model according to the actual output data and the expected output data of the target recognition sub-model. And training the target recognition sub-model by adopting a multi-layer hybrid search algorithm according to the fitness value of the target recognition sub-model so as to obtain the trained target recognition sub-model.
The fitness value of the target recognition sub-model is similar to the process of acquiring the fitness value of the candidate region extraction sub-model, and details are not repeated here.
In this embodiment, the unmanned aerial vehicle target recognition model is deployed on an unmanned aerial vehicle or on a remote terminal wirelessly connected with the unmanned aerial vehicle, and when the unmanned aerial vehicle performs a target recognition task, target recognition is performed through the deployed unmanned aerial vehicle target recognition model to obtain a target recognition result, including: and deploying the unmanned aerial vehicle target recognition model on the unmanned aerial vehicle or a remote terminal in wireless connection with the unmanned aerial vehicle. When the unmanned aerial vehicle executes the target recognition task, real-time image data are collected through the unmanned aerial vehicle in real time, and the deployed unmanned aerial vehicle target recognition model is adopted to recognize the real-time image data, so that a target in the real-time image data is determined, and a target recognition result is obtained.
In this embodiment, the multi-layer hybrid search algorithm includes:
a1, randomly generating parameters between the upper parameter limit and the lower parameter limit of the model to be trained, and forming the parameters of the model to be trained into vectors to obtain parameter individuals corresponding to the model to be trained.
The upper and lower parameter limits may be different for different models, and even different parameter types, so that the upper and lower parameter limits may be set according to the selected model.
Optionally, since each parameter has an upper limit and a lower limit, the parameter individuals finally searched should lie within these limits. During the search, however, some parameters may exceed their limits. Therefore, out-of-limit processing may be performed after each search, so that each out-of-limit parameter is regenerated randomly between its upper and lower limits, or returned to its lower limit, or returned to its upper limit.
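The out-of-limit handling options described above can be sketched as follows (a minimal illustration; the function name, `mode` switch and NumPy formulation are assumptions for exposition):

```python
import numpy as np

def repair_bounds(x, lower, upper, mode="random", rng=None):
    """Bring out-of-limit parameters back inside [lower, upper].
    mode='random' regenerates each violator uniformly between its bounds;
    mode='clip' returns each violator to the bound it crossed."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float).copy()
    lo = np.broadcast_to(lower, x.shape)
    hi = np.broadcast_to(upper, x.shape)
    bad = (x < lo) | (x > hi)
    if mode == "random":
        x[bad] = rng.uniform(lo[bad], hi[bad])  # regenerate only the violators
    else:
        x = np.clip(x, lo, hi)                  # clamp to the violated bound
    return x
```

Regeneration keeps diversity, while clamping is cheaper and deterministic; either satisfies the requirement that searched individuals stay within the limits.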
A2, repeatedly obtaining parameter individuals corresponding to the model to be trained, obtaining a plurality of parameter individuals, obtaining fitness values corresponding to each parameter individual, and obtaining a global optimal individual according to the fitness values corresponding to each parameter individual.
A3, determining a searching direction and a searching step length, searching the parameter individuals in the solution space according to the searching direction and the searching step length, obtaining the parameter individuals after basic searching, and updating the global optimal individuals according to the parameter individuals after basic searching.
And A4, carrying out local fine search on the parameter individuals after the basic search, obtaining the parameter individuals after the local search, and updating the globally optimal individuals according to the parameter individuals after the local search.
And A5, carrying out global coarse search on the parameter individuals after the local search by adopting a probability updating strategy, obtaining the parameter individuals after the global search, and updating the global optimal individuals according to the parameter individuals after the global search.
And A6, eliminating and updating the parameter individuals after global searching by adopting an elite optimization strategy to obtain final parameter individuals of the round of training, and updating the global optimal individuals according to the final parameter individuals.
And A7, judging whether the current training times are greater than or equal to the maximum training times, if so, outputting a global optimal individual to obtain model parameters of the model to be trained, otherwise, returning to the basic searching step.
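The loop of steps A1 to A7 can be sketched as follows. This is a schematic Python sketch, not the patent's exact formulas: the population size, step sizes and probabilities are illustrative assumptions, and the basic, local, global and elite sub-steps are simplified stand-ins for the detailed formulas the embodiment elaborates below:

```python
import numpy as np

def multilayer_hybrid_search(fitness, dim, lo, hi, pop=20, t_max=50, seed=0):
    """Schematic of steps A1-A7: random initial parameter individuals,
    then per round a directed basic step, a local fine search, a
    probabilistic global coarse search, and an elite replacement."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))                 # A1-A2: parameter individuals
    fit = np.array([fitness(x) for x in X])
    best = X[fit.argmax()].copy()                       # global optimal individual
    for t in range(1, t_max + 1):
        d = rng.random((pop, dim))
        d /= np.linalg.norm(d, axis=1, keepdims=True)   # A3: search direction
        X = np.clip(X + 0.1 * d, lo, hi)                # basic search move
        X = np.clip(X + 0.05 * (2 * rng.random(X.shape) - 1), lo, hi)  # A4: local fine search
        p = 0.9 - (0.9 - 0.1) * t / t_max               # A5: coarse-search probability
        mask = rng.random(pop) < p
        X[mask] = rng.uniform(lo, hi, (mask.sum(), dim))
        fit = np.array([fitness(x) for x in X])
        order = fit.argsort()                           # A6: elite replacement
        X[order[:pop // 2]] = X[order[pop // 2:]]
        fit = np.array([fitness(x) for x in X])
        if fit.max() > fitness(best):                   # update global optimal individual
            best = X[fit.argmax()].copy()
    return best                                         # A7: output model parameters
```

Running it on a toy fitness such as 1/(||x||^2 + 0.0001) drives the individuals toward the origin, illustrating the intended convergence behaviour.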
In this embodiment, determining a search direction and a search step length, searching parameter individuals in a solution space according to the search direction and the search step length, obtaining parameter individuals after basic search, and updating global optimal individuals according to the parameter individuals after basic search, including:
At the time of the first training, a first basic parameter vector whose parameter dimension is the same as that of the parameter individual is randomly generated in (0, 1).
According to the first basic parameter vector, the search direction is obtained as follows:

$$D_i^t = \frac{b_i^t}{\sqrt{(b_i^t)^{T} b_i^t}}$$

wherein $D_i^t$ represents the search direction corresponding to the $i$-th parameter individual in the $t$-th training process, $i = 1, 2, \dots, I_{max}$, $I_{max}$ represents the total number of parameter individuals, $b_i^t$ represents the first basic parameter vector corresponding to the $i$-th parameter individual in the $t$-th training process, and $T$ represents the transpose.
The search step length is determined according to the search direction:

$$s_i^t = s_i^0 \cdot D_i^t$$

wherein $s_i^0$ represents the preset initial search step length of the $i$-th parameter individual, and $s_i^t$ represents the search step length corresponding to the $i$-th parameter individual in the $t$-th training process.
Except for the first training, in each subsequent training process it is judged whether the fitness value became larger after searching with the search step length determined in the (t-1)-th training process; if so, the search step length determined in the (t-1)-th training process is taken as the search step length in the t-th training process, otherwise the search step length is re-determined.
According to the search step length, the parameter individuals are searched in the solution space, and the parameter individuals after basic search are obtained as follows:

$$\tilde{X}_i^t = X_i^t + s_i^t$$

wherein $X_i^t$ represents the $i$-th parameter individual in the $t$-th training process, and $\tilde{X}_i^t$ represents the parameter individual after basic search.
And determining a first target optimal individual in the parameter individuals after basic searching, and taking the first target optimal individual as a new global optimal individual. The first target optimal individual represents an individual with the largest fitness value in parameter individuals after basic searching.
Through the basic search, the parameter individuals can be searched in the solution space with a certain step length, and when a search direction turns out to be poor, the search direction is changed, thereby realizing a coarse search of the solution space.
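One basic-search move, with the keep-or-redraw rule for the step and direction, might be sketched as follows (a hedged illustration; the normalisation $d / \sqrt{d^T d}$ and the improvement test are assumptions consistent with the description above, and the names are illustrative):

```python
import numpy as np

def basic_search(x, step, direction, fitness, lo, hi, rng):
    """One basic-search move: x' = x + step * direction, clipped to the
    parameter limits. The step and direction are kept for the next round
    only if the move improved fitness; otherwise a fresh normalised
    random direction is drawn."""
    x_new = np.clip(x + step * direction, lo, hi)
    if fitness(x_new) > fitness(x):      # improvement: keep step and direction
        return x_new, step, direction
    d = rng.random(x.shape)              # poor direction: re-draw it
    d /= np.sqrt(d @ d)                  # normalise: d / sqrt(d^T d)
    return x, step, d
```

Repeated calls walk each parameter individual through the solution space, changing direction whenever the current one stops paying off.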
Carrying out local fine search on the parameter individuals after basic search, obtaining the parameter individuals after local search, and updating the globally optimal individuals according to the parameter individuals after local search, wherein the method comprises the following steps:
The search duration is set as $N_{max}$ and the search counter $n = 1$.
For the $i$-th parameter individual, a second basic parameter vector is randomly generated in (0, 1).
According to the second basic parameter vector, local fine search is performed on the parameter individuals after basic search, and the parameter individuals after local search are obtained as follows:

$$\hat{X}_i^n = \tilde{X}_i^t + \rho\,(2 b_i^n - 1)$$

wherein $\hat{X}_i^n$ represents the parameter individual after local search, $\rho$ represents a preset search radius, and $b_i^n$ represents the second basic parameter vector corresponding to the $i$-th parameter individual in the $n$-th local fine search; when $n = 1$, the elements of the second basic parameter vector are randomly generated data in (0, 1).
It is judged whether the value of the search counter $n$ is greater than or equal to the search duration $N_{max}$; if so, the parameter individuals after local search are output, otherwise the second basic parameter vector is updated as

$$b_i^{n+1} = \mu\, b_i^n \left(1 - b_i^n\right)$$

and local fine search is performed on the current parameter individuals according to the updated second basic parameter vector, until the value of the search counter $n$ is greater than or equal to the search duration $N_{max}$ and the parameter individuals after local search are output. Wherein $\mu$ represents an adjustment factor, preferably 4 in this embodiment.
And determining a second target optimal individual in the parameter individuals after the local search, and taking the second target optimal individual as a new global optimal individual. The second target optimal individual represents an individual with the largest fitness value among the parameter individuals after the local search.
After the coarse search of the solution space, many regions remain unsearched, so a local search can be performed on the basis of the parameter individuals after basic search. The local search method provided by this embodiment scatters trial points outward around each parameter individual, and can effectively perform a fine search of the local region near the parameter individual, thereby achieving further optimization.
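A sketch of the local fine search follows. The chaotic update with adjustment factor 4 corresponds to the logistic map $b \leftarrow \mu b (1-b)$; the radius and iteration count used here are illustrative assumptions, not values prescribed by this embodiment:

```python
import numpy as np

def local_fine_search(x, fitness, radius=0.05, n_max=10, mu=4.0, rng=None):
    """Scatter trial points within a preset search radius around x,
    driving the offsets with a logistic-map sequence (mu = 4 gives a
    chaotic, well-spread sequence in (0, 1)); keep the best point seen."""
    rng = rng or np.random.default_rng()
    b = rng.random(x.shape)              # n = 1: random vector in (0, 1)
    best, best_fit = x, fitness(x)
    for _ in range(n_max):
        trial = x + radius * (2 * b - 1)  # offset within [-radius, +radius]
        f = fitness(trial)
        if f > best_fit:
            best, best_fit = trial, f
        b = mu * b * (1 - b)              # chaotic update of the base vector
    return best
```

Because the best point seen (including the starting point) is kept, the search never degrades the individual's fitness.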
In this embodiment, a probability update policy is used to perform global coarse search on a parameter individual after local search, obtain a parameter individual after global search, and update a global optimal individual according to the parameter individual after global search, including:
the global coarse search probability is determined as follows:

$$p^t = p_{max} - \left(p_{max} - p_{min}\right)\frac{t}{T_{max}}$$

wherein $p_{max}$ represents a preset maximum global coarse search probability, $p_{min}$ represents a preset minimum global coarse search probability, $p^t$ represents the global coarse search probability during the $t$-th training process, and $T_{max}$ represents the maximum number of trainings.
For each parameter individual after local search, a random number between (0, 1) is generated; it is judged whether the global coarse search probability is larger than the random number, and if so, a target parameter individual is randomly generated and replaces the parameter individual after local search, otherwise no replacement is performed.
And carrying out global search on all the parameter individuals after the local search to obtain the parameter individuals after the global search.
And determining a third target optimal individual in the parameter individuals after global searching, and taking the third target optimal individual as a new global optimal individual. The third target optimal individual represents an individual with the largest fitness value among the parameter individuals after global searching.
After the local search, the parameter individuals may still be trapped in a local optimum, so a global search strategy may be employed. In the early stage of the algorithm, local optima need to be escaped as much as possible, so the probability of performing the global search is large; in the later stage, stability and convergence need to be ensured, so the probability of the global search is small. The global coarse search provided by this embodiment can effectively help escape from local optima, and the basic search also has a certain ability to jump out of local optima, so the global optimum can be found; this dynamic global search strategy effectively balances convergence speed and convergence precision.
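The probability-driven coarse search might be sketched as follows, with the replacement probability decaying linearly from a preset maximum to a preset minimum over training (the parameter values and names are illustrative):

```python
import numpy as np

def global_coarse_search(X, t, t_max, lo, hi, p_max=0.9, p_min=0.1, rng=None):
    """Probability-driven coarse search over a population X (one row per
    parameter individual): the replacement probability decays linearly
    from p_max to p_min, so early rounds favour escaping local optima and
    later rounds favour stable convergence."""
    rng = rng or np.random.default_rng()
    p = p_max - (p_max - p_min) * t / t_max   # current coarse-search probability
    X = np.asarray(X, dtype=float).copy()
    for i in range(len(X)):
        if rng.random() < p:                  # replace with a random target individual
            X[i] = rng.uniform(lo, hi, X.shape[1])
    return X
```

At t = 1 most individuals are resampled; near t = t_max almost none are, which matches the early-exploration / late-convergence behaviour described above.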
Optionally, in order to keep the search moving forward, only part of the parameter individuals may execute the global search strategy, with the M best parameter individuals not participating in the global search. Alternatively, a greedy algorithm may be employed to execute the global search strategy.
Eliminating and updating the parameter individuals after global search by adopting an elite optimization strategy to obtain final parameter individuals of the round of training, and updating global optimal individuals according to the final parameter individuals, wherein the method comprises the following steps of:
The half of the parameter individuals with the largest fitness values is copied to obtain copied parameter individuals, and the copied parameter individuals replace, in one-to-one correspondence, the half of the parameter individuals with the smallest fitness values, so as to obtain the final parameter individuals of the round of training.
And determining a fourth target optimal individual in the final parameter individuals, and taking the fourth target optimal individual as a new global optimal individual. The fourth target optimal individual represents the individual with the largest fitness value in the final parameter individuals. The parameter individual is updated through the elite optimization strategy, so that the convergence rate of the algorithm can be effectively improved.
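The elite elimination step can be sketched as follows (a minimal illustration; an even population size is assumed so the two halves match one-to-one):

```python
import numpy as np

def elite_update(X, fit):
    """Elite elimination: copy the half of the individuals with the
    largest fitness values over the half with the smallest ones,
    in one-to-one correspondence."""
    X = np.asarray(X, dtype=float).copy()
    fit = np.asarray(fit, dtype=float)
    order = np.argsort(fit)             # ascending: worst individuals first
    half = len(X) // 2
    X[order[:half]] = X[order[-half:]]  # worst half replaced by the best half
    return X
```

After the update every surviving individual comes from the better half of the round, which concentrates the population and speeds up convergence.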
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Those of ordinary skill in the art will appreciate that implementing all or part of the above embodiments and methods may be accomplished by a program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, includes the steps of the corresponding method. The storage medium may be a ROM/RAM, a magnetic disk, an optical disk, or the like.
The foregoing description of the embodiments is provided to illustrate the principles of the invention and is not intended to limit the invention to the particular embodiments disclosed; any modifications, equivalents, improvements, etc. made within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (10)

1. The target identification method based on the unmanned aerial vehicle is characterized by comprising the following steps of:
acquiring training data for target recognition, wherein the training data comprises training image data, candidate region positions of the training image data and target classification labels of each candidate region;
constructing a candidate region extraction sub-model, and training the candidate region extraction sub-model by adopting a multi-layer hybrid search algorithm according to training image data and the candidate region position of the training image data so as to obtain a trained candidate region extraction sub-model;
constructing a feature extraction sub-model, wherein the feature extraction sub-model is used for extracting candidate areas of training image data according to the positions of the candidate areas of the training image data, scaling the candidate areas of the training image data to a uniform size to obtain processed candidate areas, and carrying out feature extraction on the processed candidate areas to obtain target data features of the candidate areas;
Constructing a target recognition sub-model, and training the target recognition sub-model by adopting a multi-layer hybrid search algorithm according to the target data characteristics of the candidate region and the target classification labels of the candidate region so as to obtain a trained target recognition sub-model;
constructing an unmanned aerial vehicle target recognition model according to the trained candidate region extraction sub-model, the feature extraction sub-model and the trained target recognition sub-model;
and deploying the unmanned aerial vehicle target recognition model on the unmanned aerial vehicle or a remote terminal connected with the unmanned aerial vehicle in a wireless way, and performing target recognition through the deployed unmanned aerial vehicle target recognition model when the unmanned aerial vehicle executes a target recognition task so as to acquire a target recognition result.
2. The unmanned aerial vehicle-based target recognition method of claim 1, wherein constructing the candidate region extraction submodel comprises:
and constructing an RCNN deep learning model, a FastRCNN deep learning model or a FasterRCNN deep learning model, and taking the constructed RCNN deep learning model, the FastRCNN deep learning model or the FasterRCNN deep learning model as a candidate region extraction sub-model.
3. The unmanned aerial vehicle-based target recognition method of claim 2, wherein training the candidate region extraction sub-model by using a multi-layer hybrid search algorithm according to the training image data and the candidate region position of the training image data to obtain the trained candidate region extraction sub-model comprises:
The training image data is used as input data of a candidate region extraction sub-model, actual output data of the candidate region extraction sub-model is obtained, the candidate region position of the training image data is used as expected output data of the candidate region extraction sub-model, and the fitness value of the candidate region extraction sub-model is obtained according to the actual output data and the expected output data of the candidate region extraction sub-model;
and training the candidate region extraction submodel by adopting a multi-layer hybrid search algorithm according to the fitness value of the candidate region extraction submodel so as to obtain the trained candidate region extraction submodel.
4. The unmanned aerial vehicle-based target recognition method of claim 3, wherein constructing a feature extraction sub-model for extracting candidate regions of training image data according to candidate region positions of the training image data, scaling the candidate regions of the training image data to a uniform size to obtain processed candidate regions, and performing feature extraction on the processed candidate regions to obtain target data features of the candidate regions comprises:
constructing a candidate region processing layer, wherein the candidate region processing layer is used for extracting a candidate region of training image data according to the candidate region position of the training image data, and scaling the candidate region of the training image data to a uniform size to obtain a processed candidate region;
And constructing a feature extraction layer, wherein the feature extraction layer is provided with an HOG algorithm, so that feature extraction is carried out on the processed candidate region through the HOG algorithm, and the target data features of the candidate region are obtained.
5. A method of unmanned aerial vehicle-based object recognition as claimed in claim 3, wherein constructing the object recognition sub-model comprises: and constructing an SVM deep learning model, and constructing a target recognition sub-model by adopting the SVM deep learning model.
6. The unmanned aerial vehicle-based target recognition method of claim 5, wherein constructing a target recognition sub-model, training the target recognition sub-model by using a multi-layer hybrid search algorithm according to target data features of the candidate region and target classification labels of the candidate region to obtain a trained target recognition sub-model, comprises:
taking the target data characteristics of the candidate region as input data of the target recognition sub-model, acquiring actual output data of the target recognition sub-model, taking a target classification label of the candidate region as expected output data of the target recognition sub-model, and acquiring an adaptability value of the target recognition sub-model according to the actual output data and the expected output data of the target recognition sub-model;
And training the target recognition sub-model by adopting a multi-layer hybrid search algorithm according to the fitness value of the target recognition sub-model so as to obtain the trained target recognition sub-model.
7. The unmanned aerial vehicle-based target recognition method of any of claims 1-6, wherein deploying the unmanned aerial vehicle target recognition model on the unmanned aerial vehicle or on a remote terminal wirelessly connected to the unmanned aerial vehicle, when the unmanned aerial vehicle performs the target recognition task, performing target recognition by the deployed unmanned aerial vehicle target recognition model to obtain a target recognition result, comprises:
deploying the unmanned aerial vehicle target recognition model on the unmanned aerial vehicle or a remote terminal in wireless connection with the unmanned aerial vehicle;
when the unmanned aerial vehicle executes the target recognition task, real-time image data are collected through the unmanned aerial vehicle in real time, and the deployed unmanned aerial vehicle target recognition model is adopted to recognize the real-time image data, so that a target in the real-time image data is determined, and a target recognition result is obtained.
8. The unmanned aerial vehicle-based target recognition method of claim 6, wherein the multi-layer hybrid search algorithm comprises:
randomly generating parameters between the upper parameter limit and the lower parameter limit of the model to be trained, and forming vectors by the parameters of the model to be trained to obtain parameter individuals corresponding to the model to be trained;
Repeatedly obtaining parameter individuals corresponding to the model to be trained, obtaining a plurality of parameter individuals, obtaining the fitness value corresponding to each parameter individual, and obtaining a global optimal individual according to the fitness value corresponding to each parameter individual;
determining a searching direction and a searching step length, searching parameter individuals in a solution space according to the searching direction and the searching step length, acquiring parameter individuals after basic searching, and updating global optimal individuals according to the parameter individuals after basic searching;
carrying out local fine search on the parameter individuals after basic search to obtain the parameter individuals after local search, and updating the globally optimal individuals according to the parameter individuals after local search;
carrying out global coarse search on the parameter individuals after the local search by adopting a probability updating strategy, obtaining the parameter individuals after the global search, and updating the global optimal individuals according to the parameter individuals after the global search;
eliminating and updating the parameter individuals after global searching by adopting an elite optimization strategy to acquire final parameter individuals of the round of training, and updating global optimal individuals according to the final parameter individuals;
judging whether the current training times are greater than or equal to the maximum training times, if so, outputting a global optimal individual to obtain model parameters of a model to be trained, otherwise, returning to the basic searching step.
9. The unmanned aerial vehicle-based target recognition method of claim 8, wherein determining the search direction and the search step size, searching the parameter individuals in the solution space according to the search direction and the search step size, obtaining the parameter individuals after the basic search, and updating the globally optimal individuals according to the parameter individuals after the basic search, comprises:
randomly generating a first basic parameter vector in (0, 1) during first training, wherein the parameter dimension of the first basic parameter vector is the same as the parameter dimension of an individual parameter;
according to the first basic parameter vector, the search direction is obtained as follows:

$$D_i^t = \frac{b_i^t}{\sqrt{(b_i^t)^{T} b_i^t}}$$

wherein $D_i^t$ represents the search direction corresponding to the $i$-th parameter individual in the $t$-th training process, $i = 1, 2, \dots, I_{max}$, $I_{max}$ represents the total number of parameter individuals, $b_i^t$ represents the first basic parameter vector corresponding to the $i$-th parameter individual in the $t$-th training process, and $T$ represents the transpose;
determining the search step size according to the search direction, wherein $s_i^{0}$ denotes the preset initial search step size of the $i$-th parameter individual, and $s_i^{t}$ denotes the search step size corresponding to the $i$-th parameter individual during the $t$-th training process;
in each training process other than the first, judging whether the fitness value becomes larger after searching with the search step size determined in the (t-1)-th training process; if so, taking the search step size determined in the (t-1)-th training process as the search step size in the t-th training process; otherwise, re-determining the search step size;
searching the parameter individuals in the solution space according to the search step size, wherein $x_i^t$ denotes the $i$-th parameter individual during the $t$-th training process, and $\hat{x}_i^t$ denotes the parameter individual after the basic search;
determining a first target optimal individual among the parameter individuals after the basic search, and taking the first target optimal individual as the new global optimal individual; the first target optimal individual is the individual with the largest fitness value among the parameter individuals after the basic search;
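A minimal sketch of the basic-search step described above. The claimed direction and update formulas are omitted figures, so three choices below are assumptions: the direction is the first basic parameter vector recentred to [-1, 1] and normalised, the update is additive ($x + s \cdot d$), and a failed step is "re-determined" by halving:

```python
import numpy as np

def basic_search(pop, step, fitness, rng):
    """One basic-search pass over the population (illustrative sketch).

    The direction derived from the first basic parameter vector, the
    additive update, and the step-halving are all assumptions standing in
    for the omitted claimed formulas.
    """
    new_pop = pop.copy()
    for i, x in enumerate(pop):
        r = rng.random(x.shape)                # first basic parameter vector in (0, 1)
        d = 2.0 * r - 1.0                      # assumed search direction from r
        d /= np.linalg.norm(d) + 1e-12
        cand = x + step[i] * d                 # assumed additive search step
        if fitness(cand) > fitness(x):         # fitness grew: keep the move and the step size
            new_pop[i] = cand
        else:                                  # otherwise re-determine the step size
            step[i] *= 0.5
    return new_pop
```

Because a move is kept only when the fitness value grows, no individual's fitness can decrease across one pass.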
carrying out local fine search on the parameter individuals after basic search, obtaining the parameter individuals after local search, and updating the globally optimal individuals according to the parameter individuals after local search, wherein the method comprises the following steps:
setting the search duration as $N_{\max}$ and the search counter as $n = 1$;
for the $i$-th parameter individual, randomly generating a second basic parameter vector in (0, 1);
carrying out local fine search on the parameter individuals after the basic search according to the second basic parameter vector, wherein $\tilde{x}_i$ denotes the parameter individual after the local search, $R$ denotes a preset search radius, $u_i^n$ denotes the second basic parameter vector corresponding to the $i$-th parameter individual in the $n$-th local fine search, and when $n = 1$ the elements of the second basic parameter vector are randomly generated data in (0, 1);
judging whether the value of the search counter $n$ is greater than or equal to the search duration $N_{\max}$; if so, outputting the parameter individuals after the local search; otherwise, updating the second basic parameter vector according to the adjustment coefficient $\lambda$ and carrying out local fine search on the current parameter individuals according to the updated second basic parameter vector, until the value of the search counter $n$ is greater than or equal to the search duration $N_{\max}$, and then outputting the parameter individuals after the local search;
determining a second target optimal individual in the parameter individuals after the local search, and taking the second target optimal individual as a new global optimal individual; the second target optimal individual represents an individual with the largest fitness value among the parameter individuals after the local search.
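The local fine search above might be sketched as follows. The probe formula and the adjustment-coefficient update of the second basic parameter vector are omitted figures, so both forms below (a probe at $x + R(2u - 1)$ and a blend of $u$ toward fresh noise by a coefficient alpha) are assumptions:

```python
import numpy as np

def local_fine_search(x, fitness, radius=0.05, n_max=10, alpha=0.9, rng=None):
    """N_max rounds of refinement around one individual (illustrative).

    `radius` plays the role of the preset search radius and `alpha` the
    adjustment coefficient; the probe and vector-update forms are assumed.
    """
    if rng is None:
        rng = np.random.default_rng()
    best = x.copy()
    u = rng.random(x.shape)                    # second basic parameter vector, n = 1
    for n in range(1, n_max + 1):
        probe = x + radius * (2.0 * u - 1.0)   # probe inside the preset search radius
        if fitness(probe) > fitness(best):
            best = probe.copy()
        u = alpha * u + (1.0 - alpha) * rng.random(x.shape)  # assumed vector update
    return best
```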
10. The unmanned aerial vehicle-based target recognition method of claim 9, wherein performing global coarse search on the parameter individuals after the local search by using a probability update strategy, obtaining the parameter individuals after the global search, and updating the globally optimal individuals according to the parameter individuals after the global search, comprises:
determining the global coarse search probability, wherein $P_{\max}$ denotes a preset maximum global coarse search probability, $P_{\min}$ denotes a preset minimum global coarse search probability, $P^t$ denotes the global coarse search probability during the $t$-th training process, and $T_{\max}$ denotes the maximum number of training iterations;
randomly generating a random number between (0, 1) aiming at any parameter individual after local search, judging whether the global coarse search probability is larger than the random number, if so, randomly generating a target parameter individual, and replacing the parameter individual after local search by the target parameter individual, otherwise, not replacing;
performing global search on all the parameter individuals after the local search to obtain the parameter individuals after the global search;
determining a third target optimal individual among the parameter individuals after the global search, and taking the third target optimal individual as the new global optimal individual; the third target optimal individual is the individual with the largest fitness value among the parameter individuals after the global search;
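A sketch of the probability-update strategy above. The claimed probability schedule is an omitted figure, so a linear decay from the maximum to the minimum probability over the training budget is assumed; the replace-if-probability-exceeds-random-number test follows the claim directly:

```python
import numpy as np

def global_coarse_search(pop, t, t_max, p_max=0.9, p_min=0.1, rng=None):
    """Probabilistic replacement of individuals (illustrative).

    The schedule p_t = p_max - (p_max - p_min) * t / t_max is an assumption
    standing in for the omitted claimed formula.
    """
    if rng is None:
        rng = np.random.default_rng()
    p_t = p_max - (p_max - p_min) * t / t_max      # assumed decay schedule
    new_pop = pop.copy()
    for i in range(len(pop)):
        if p_t > rng.random():                     # claimed test: probability > random number
            new_pop[i] = rng.random(pop.shape[1])  # randomly generated target individual
    return new_pop
```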
eliminating and updating the parameter individuals after global search by adopting an elite optimization strategy to obtain final parameter individuals of the round of training, and updating global optimal individuals according to the final parameter individuals, wherein the method comprises the following steps of:
copying the half of the parameter individuals with the largest fitness values to obtain copied parameter individuals, and replacing, in one-to-one correspondence, the half of the parameter individuals with the smallest fitness values with the copied parameter individuals, to obtain the final parameter individuals of the current round of training;
Determining a fourth target optimal individual in the final parameter individuals, and taking the fourth target optimal individual as a new global optimal individual; the fourth target optimal individual represents the individual with the largest fitness value in the final parameter individuals.
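The elite optimization strategy above is fully specified in prose and can be sketched directly; only the one-to-one pairing order is an assumption, since the claim leaves it open:

```python
import numpy as np

def elite_update(pop, fitness):
    """Elite optimization step as claimed: the half of the individuals with
    the largest fitness values is copied over the half with the smallest,
    one-to-one (the pairing order itself is an assumption).
    """
    order = np.argsort([fitness(x) for x in pop])         # ascending fitness
    half = len(pop) // 2
    new_pop = pop.copy()
    new_pop[order[:half]] = pop[order[len(pop) - half:]]  # worst half <- copies of best half
    return new_pop
```

After the update, the worst surviving fitness is at least the median of the original population, which is what makes the subsequent global-optimum update monotone.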
CN202311578358.8A 2023-11-24 2023-11-24 Target identification method based on unmanned aerial vehicle Active CN117292283B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311578358.8A CN117292283B (en) 2023-11-24 2023-11-24 Target identification method based on unmanned aerial vehicle


Publications (2)

Publication Number Publication Date
CN117292283A true CN117292283A (en) 2023-12-26
CN117292283B CN117292283B (en) 2024-02-13

Family

ID=89239384


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117668671A (en) * 2024-02-01 2024-03-08 成都工业学院 Educational resource management method based on machine learning
CN117668671B (en) * 2024-02-01 2024-04-30 成都工业学院 Educational resource management method based on machine learning

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104217225A (en) * 2014-09-02 2014-12-17 中国科学院自动化研究所 A visual target detection and labeling method
CN106504233A (en) * 2016-10-18 2017-03-15 国网山东省电力公司电力科学研究院 Image electric power widget recognition methodss and system are patrolled and examined based on the unmanned plane of Faster R CNN
CN108009525A (en) * 2017-12-25 2018-05-08 北京航空航天大学 A kind of specific objective recognition methods over the ground of the unmanned plane based on convolutional neural networks
JP2019175063A (en) * 2018-03-28 2019-10-10 Kddi株式会社 Object identification device
CN110942000A (en) * 2019-11-13 2020-03-31 南京理工大学 Unmanned vehicle target detection method based on deep learning
CN111241931A (en) * 2019-12-30 2020-06-05 沈阳理工大学 Aerial unmanned aerial vehicle target identification and tracking method based on YOLOv3
CN111368900A (en) * 2020-02-28 2020-07-03 桂林电子科技大学 Image target object identification method
KR102327060B1 (en) * 2021-07-01 2021-11-16 국방과학연구소 Method, apparatus, computer-readable storage medium and computer program for extracting region of interest for identifying target
CN114511021A (en) * 2022-01-27 2022-05-17 浙江树人学院(浙江树人大学) Extreme learning machine classification algorithm based on improved crow search algorithm
CN114758241A (en) * 2022-04-24 2022-07-15 西安电子科技大学 Remote sensing image situation recognition method based on SVM and step-by-step grid search
CN116128071A (en) * 2023-01-10 2023-05-16 江苏科技大学 SVR model hyper-parameter optimization method based on improved African bald-clearance algorithm
CN117094446A (en) * 2023-09-05 2023-11-21 阿牧网云(北京)科技有限公司 Deep learning-based milk yield prediction method for dairy cows


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FRANK TIAN: "A summary of classical search algorithms" (经典搜索算法总结), Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/187283548> *
SHOULIN YIN et al.: "Region search based on hybrid convolutional neural network in optical remote sensing images", International Journal of Distributed Sensor Networks, vol. 15, no. 5, pp. 1-12 *
DING Yuyang: "Hyper-parameter optimization method based on a hybrid optimization algorithm and its application", Control and Instruments in Chemical Industry (化工自动化及仪表), vol. 50, no. 06, pp. 875-882 *
DENG Huachang; FANG Kangling; LIANG Kai; ZHANG Peng: "Application of a hybrid genetic algorithm in PID parameter optimization", Machinery Design & Manufacture (机械设计与制造), no. 07, pp. 93-95 *



Similar Documents

Publication Publication Date Title
CN107209873B (en) Hyper-parameter selection for deep convolutional networks
CN109840531B (en) Method and device for training multi-label classification model
KR102641116B1 (en) Method and device to recognize image and method and device to train recognition model based on data augmentation
CN110569793B (en) Target tracking method for unsupervised similarity discrimination learning
CN107851191B (en) Context-based priors for object detection in images
US20200279156A1 (en) Feature fusion for multi-modal machine learning analysis
CN110633745A (en) Image classification training method and device based on artificial intelligence and storage medium
CN111052151B (en) Video action positioning based on attention suggestion
KR102517513B1 (en) Artificial intelligence based tree data management system and tree data management method
Mensink et al. Learning structured prediction models for interactive image labeling
CN104036255A (en) Facial expression recognition method
CN111833322B (en) Garbage multi-target detection method based on improved YOLOv3
CN112949647A (en) Three-dimensional scene description method and device, electronic equipment and storage medium
CN117157678A (en) Method and system for graph-based panorama segmentation
JP2003228706A (en) Data classifying device
CN109299668A (en) A kind of hyperspectral image classification method based on Active Learning and clustering
CN114663685B (en) Pedestrian re-recognition model training method, device and equipment
Chen et al. Learning capsules for vehicle logo recognition
JP2003256443A (en) Data classification device
WO2020198173A1 (en) Subject-object interaction recognition model
CN113569895A (en) Image processing model training method, processing method, device, equipment and medium
CN115018039A (en) Neural network distillation method, target detection method and device
CN111428758A (en) Improved remote sensing image scene classification method based on unsupervised characterization learning
KR20230088714A (en) Personalized neural network pruning
CN117292283B (en) Target identification method based on unmanned aerial vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant