CN109816695A - Target detection and tracking method for infrared small unmanned aerial vehicle under complex background - Google Patents
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed).
Abstract
The invention belongs to the field of infrared image processing and relates to a method for detecting and tracking small infrared unmanned aerial vehicle (UAV) targets under complex backgrounds. The method comprises the following steps: (S1) obtaining training samples and training a deep convolutional neural network to serve as the UAV target detection network; (S2) acquiring target images to be detected in real time, feeding them into the UAV target detection network of step (S1), and outputting UAV target detection results; and (S3) tracking the UAV target output by step (S2) with a fast tracking method based on kernelized correlation filtering. The invention extracts features with a residual network that uses batch normalization and random dropout, improving training efficiency and model robustness. By fully combining contextual and semantic features and performing multi-scale discrimination on fine-grained, multi-layer fused feature maps, the method effectively improves detection accuracy for small targets.
Description
Technical field
The invention belongs to the fields of machine vision, security monitoring and infrared image processing, and relates to the design and training of a deep learning model that realizes detection and tracking of small infrared unmanned aerial vehicle (UAV) targets under complex backgrounds.
Background art
With the continuous advance of UAV manufacturing technology worldwide and the steadily falling barrier to UAV use, both the number and the usage frequency of small civilian UAVs have increased dramatically. Because of their low cost, and because the industry lacks effective detection and countermeasure systems, the problem of unauthorized ("black") UAV flights is growing worse. Such small UAVs can carry dangerous payloads such as explosives; the UAV intrusion incidents that have occurred repeatedly around the world in recent years not only seriously endanger citizens' privacy and the safety of lives and property, but also gravely threaten the security of sensitive areas such as military bases, large public assemblies, nuclear power stations and government facilities. These low-altitude, low-speed targets are highly sudden and agile; they can weave and hover among complex urban building clusters, which places new demands on the fixed-area monitoring and security protection of key regions and facilities. Detection and management of UAV targets is therefore of great importance, and fast, effective, real-time UAV detection under complex backgrounds is the foundation and key to solving the problem.
Infrared imaging works passively, resists interference, identifies targets reliably and operates in all weather. It is widely used in military reconnaissance, surveillance and guidance, and related studies show that detection and monitoring based on infrared imaging is a stable and effective technical route.
As convolutional neural networks and deep learning mature, a large body of literature and engineering experience has accumulated, especially in target detection and recognition in complex environments. Thanks to their powerful feature extraction and learning ability, deep convolutional neural networks can extract features from complex images and represent targets hierarchically. Exploiting the structural invariance of targets, they detect deformed, occluded, blurred and multi-scale targets well, even against complex backgrounds.
How to design an effective deep network for extracting infrared image features, how to use the extracted multi-scale features to improve detection accuracy for small UAV targets, and how to raise the running speed of the whole algorithmic framework are the keys to real-time detection of small infrared UAV targets.
Summary of the invention
To solve the technical problems of extracting features from infrared images and improving the accuracy and real-time performance of small infrared UAV target detection, the present invention proposes a technical solution based on deep convolutional neural networks with multi-scale feature-map fusion and sliding candidate windows. The details are as follows.
A method for detecting and tracking small infrared UAV targets under complex backgrounds, comprising the following steps:
(S1) obtaining training samples and training a deep convolutional neural network as the UAV target detection network;
(S2) acquiring target images to be detected in real time, feeding them into the UAV target detection network of step (S1), and outputting UAV target detection results;
(S3) tracking the UAV target output by step (S2) with a fast tracking method based on kernelized correlation filtering.
Further, the detailed process of step (S1) is as follows:
(S11) collecting an infrared image dataset, manually annotating the position and class of each UAV target in the dataset, and using the annotated infrared images as training samples;
(S12) extracting multi-scale feature maps of the infrared images with a deep convolutional neural network;
(S13) sampling and weighted-fusing the multi-scale feature maps to obtain fused feature maps;
(S14) sliding candidate boxes across the fused feature maps, treating the region covered by each candidate box as a region of interest and feeding it into a fully connected network, which outputs a five-dimensional feature vector comprising the probability that the region contains a target, the center-point abscissa deviation, the center-point ordinate deviation, the long-side deviation and the short-side deviation;
(S15) correcting the candidate-box positions according to the center-point abscissa deviation, center-point ordinate deviation, long-side deviation and short-side deviation to obtain target bounding boxes, applying non-maximum suppression to all target bounding boxes, and outputting the target confidences and bounding boxes whose confidence exceeds a predetermined threshold;
(S16) computing the loss value, defined as the sum of the classification error and the localization error; summing the loss over the ten most recent iterations and, if that sum is below a set threshold, ending network training, otherwise returning to step (S11) to continue training the UAV target detection network.
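The stopping rule of (S16) can be sketched as follows. The ten-iteration window comes from the text; the threshold value used below is illustrative only, not the patent's.

```python
def should_stop(loss_history, window=10, threshold=0.5):
    """Stop training when the sum of the last `window` losses is below `threshold`."""
    if len(loss_history) < window:
        return False  # not enough iterations recorded yet
    return sum(loss_history[-window:]) < threshold

# Losses still high: keep training; losses collapsed: stop.
print(should_stop([1.0] * 12))                # False
print(should_stop([1.0, 0.8] + [0.04] * 10))  # True
```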
Further, step (S12) extracts the multi-scale feature maps of the infrared image with a deep convolutional neural network as follows:
(S121) constructing residual layers based on batch normalization and random dropout;
(S122) stacking the residual layers to build a deep residual network;
(S123) feeding sample images into the above deep residual network for feature extraction and taking the feature maps of the last three layers as output.
For a better understanding of the technical solution of the present invention, the relevant theory is briefly described below.
Residual module with batch normalization and random dropout: the residual module is the basic structural unit of the feature extraction network. A random dropout operation in the convolutional layers of the residual block "discards" some parameters, further limiting how closely the network fits the data; the aim is to prevent over-fitting by "freezing" part of the parameters during training and updating only the remainder, thereby optimizing feature extraction. A sufficiently "deep" feature extraction network can then be built from combinations of such residual blocks and convolutional layers.
As a deep neural network gets deeper, the distribution of the activation inputs before each nonlinear transformation gradually shifts, usually drifting toward the saturated ends of the nonlinear function's input range. This makes the gradients of the lower layers vanish during back-propagation, which is the essential reason deep networks converge ever more slowly during training. Through batch normalization, the present invention rescales the input distribution of every neuron in every layer to a standard normal distribution with mean 0 and variance 1, so that the activation inputs fall in the region where the nonlinear function is most sensitive to its input. Small changes of the input then produce large changes of the loss, avoiding the vanishing-gradient problem as far as possible.
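A minimal numpy sketch of the normalization step described above (the learnable scale and shift parameters that a full batch-normalization layer carries are omitted for brevity):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Rescale each feature of the batch to mean 0, variance 1.
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

rng = np.random.default_rng(0)
acts = rng.normal(loc=5.0, scale=3.0, size=(64, 8))  # shifted, widened activations
normed = batch_norm(acts)
# After normalization the per-feature mean is ~0 and variance ~1,
# keeping activation inputs in the sensitive region of the nonlinearity.
```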
Multi-scale feature-map fusion: in a deep convolutional neural network, shallow convolutional layers have small receptive fields and learn local and contextual features, while deeper layers have large receptive fields and learn more abstract semantic features. Deep abstract features are insensitive to the size, position and orientation of objects, which is unfavorable for small-target detection: if the input image is 512*512, the feature map after several stages of downsampling is only 16*16, so a target whose side length is under 32 pixels is compressed into a single pixel. Consequently, in small-UAV detection, relying on high-level features alone makes the predicted spatial position of small targets very inaccurate. The present invention extracts feature maps at three different resolutions from the deep network (in other embodiments, two or more than three resolutions may also be used), upsamples the small feature maps by bilinear interpolation, fuses them with the larger feature maps by weighted addition, and finally performs target detection on the fused feature maps of different granularities, improving the detection algorithm's sensitivity to small targets.
The beneficial effects of the invention are as follows. The invention extracts features with a residual network based on batch normalization and random dropout, improving training efficiency and model robustness. The detection stage with multi-scale feature-map fusion fully combines contextual and semantic features and performs multi-scale discrimination on fine-grained, multi-layer fused feature maps, effectively improving detection accuracy for small targets. Finally, fusing the detector with a correlation-filter-based tracking algorithm raises the overall speed of the detection system, achieving fast and accurate detection and tracking of infrared UAV targets. Tests on surveillance videos recorded by an infrared detector show that the method effectively detects small infrared UAV targets with high accuracy and real-time processing speed.
Description of the drawings
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 is the flow chart of the real-time target detection network steps in the present invention;
Fig. 3 shows the manual annotation results of the training samples in the embodiment (only part of the samples are shown);
Fig. 4 is a schematic diagram of the residual module based on random dropout and batch normalization;
Fig. 5 is a schematic diagram of the target detection network structure in the present invention;
Fig. 6 is a schematic diagram of the sliding-window candidate-box prediction module;
Fig. 7 is the flow chart of the detection and tracking system in the present invention;
Fig. 8 shows the detection and tracking results of the method of the present invention against a mountain background;
Fig. 9 shows the detection and tracking results of the method of the present invention against a building background;
Specific embodiment
The present invention is further explained below with reference to the drawings and an embodiment.
Figs. 1 and 2 show the flow charts of the method for detecting and tracking small infrared UAV targets under complex backgrounds according to the present invention. Each step is illustrated below with the implementation process.
(S1) Obtain training samples and train a deep convolutional neural network as the UAV target detection network.
(S11) Collect an infrared image dataset: record extensively with long-wave and medium-wave infrared lenses to obtain small-UAV target images. To guarantee the reliability and generalization ability of the algorithm, the diversity of the experimental data should be made as rich as possible, covering, for example, UAV targets of different models and flight attitudes, and different backgrounds, temperatures, weather conditions, detection ranges and pitch angles.
Manually annotate the collected UAV images: taking the lower-left corner of the image as the origin and the long and short sides as the x and y axes, establish a coordinate system and mark the position coordinates of each UAV target's center point together with the length and width of its enclosing rectangle. The annotated infrared images serve as training samples, as shown in Fig. 3.
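For illustration, the centre-point/width/height label described above could be derived from two opposite corners of a marked rectangle as below; the function name and the coordinate values are hypothetical, not part of the patent.

```python
def corners_to_label(x1, y1, x2, y2):
    """Convert two opposite rectangle corners (lower-left-origin coordinates)
    into the (center_x, center_y, width, height) annotation format."""
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0, abs(x2 - x1), abs(y2 - y1))

print(corners_to_label(100, 200, 120, 214))  # (110.0, 207.0, 20, 14)
```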
(S12) Extract feature maps of multiple scales from the infrared image with a deep convolutional neural network. In this embodiment, feature maps of three scales are extracted and output.
(1) First construct the residual block based on batch normalization and random dropout, as shown in Fig. 4.
(1.1) The image data entering the residual block first passes through a 3 × 3 convolutional layer that extracts features and increases the number of channels; a subsequent 1 × 1 convolution then compresses the representation produced by the 3 × 3 convolution, reducing its dimensionality and integrating information across channels.
(1.2) A batch normalization operation is added after the 3 × 3 and 1 × 1 convolutional layers (see: S. Ioffe and C. Szegedy. Batch normalization: accelerating deep network training by reducing internal covariate shift [C]. 32nd International Conference on Machine Learning, ICML 2015, 2015, 448-456). It rescales the input distribution of every neuron in every layer to a standard normal distribution with mean 0 and variance 1, so that the activation inputs fall in the region where the nonlinear function is most sensitive to its input.
(1.3) A shortcut path adds the input data to the output of the 1 × 1 convolutional layer after batch normalization, i.e. the objective function is F(x) = H(x) + x, where x is the input data and H(x) is the convolution result. The network can thus construct an identity mapping by learning "0".
(1.4) A random dropout operation (see: A. Krizhevsky, I. Sutskever and G. E. Hinton. ImageNet classification with deep convolutional neural networks [J]. Communications of the ACM, 2017, 60(6): 84-90) is added on the path described in (1.3) to "discard" some parameters and further limit the network's fitting capacity. During training, a certain proportion of randomly chosen parameters are "frozen" and only the remainder are updated, preventing over-fitting and optimizing feature extraction.
(2) Arrange residual layers by the above method and stack them to build the deep residual network ResNet-50 (see: K. He, X. Zhang, S. Ren and J. Sun. Deep residual learning for image recognition [C]. 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, 2016, 770-778), such as residual modules _1 through _7 in Fig. 5.
(3) Feed images into the above deep residual network for feature extraction, and take the feature maps of the last three layers, of sizes 16 × 16, 32 × 32 and 64 × 64 respectively, as output.
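The residual block of (1.1)-(1.4) can be sketched in numpy as below. Dense matrix products stand in for the 3 × 3 and 1 × 1 convolutions so the example stays short, and the shapes, dropout rate and random weights are illustrative, not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def batch_norm(x, eps=1e-5):
    # Per-feature normalization to mean 0, variance 1 (scale/shift omitted).
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def residual_block(x, w_expand, w_compress, drop_rate=0.2, training=True):
    """F(x) = H(x) + x with batch norm after each 'conv' and dropout on the branch."""
    h = batch_norm(np.maximum(x @ w_expand, 0.0))  # "3x3 conv": expand channels, ReLU, BN
    h = batch_norm(h @ w_compress)                 # "1x1 conv": compress channels, BN
    if training and drop_rate > 0:                 # random drop of branch activations
        mask = rng.random(h.shape) >= drop_rate
        h = h * mask / (1.0 - drop_rate)           # inverted-dropout rescaling
    return h + x                                   # identity shortcut: H(x) + x

x = rng.normal(size=(16, 32))
out = residual_block(x, rng.normal(size=(32, 64)) * 0.1, rng.normal(size=(64, 32)) * 0.1)
```

With zero branch weights the block reduces to the identity mapping, which is exactly the "learn 0" behavior described in (1.3).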
(S13) Sample and weighted-fuse the multi-scale feature maps to obtain fused feature maps, as in the feature fusion network of Fig. 5.
(1) From the feature extraction network above, take the feature maps of different scales output by the three deepest residual blocks, Feature_1, Feature_2 and Feature_3, corresponding respectively to residual modules _5, _6 and _7 in Fig. 5; their sizes are 16 × 16, 32 × 32 and 64 × 64 respectively.
Feature_1, the 16 × 16 feature map, serves directly as the first prediction scale:
Scale_I = Feature_1 (Formula 2)
(2) Upsample the 16 × 16 feature map to a 32 × 32 feature map of the same size as Feature_2, then weight-fuse the two feature maps.
(2.1) First upsample Feature_1 by bilinear interpolation to obtain Feature'_1, raising its resolution from 16 × 16 to 32 × 32: Feature'_1 = Up×2(Feature_1), where Up×2 denotes the bilinear-interpolation upsampling operation.
(2.2) Perform pixel-level (element-sum) fusion of the high-resolution low-level feature Feature_2 with the upsampled high-level feature Feature'_1, the low-level term weighted by a coefficient α (preferably α = 1), giving the fused feature of the second scale:
Scale_II = α · Feature_2 + Feature'_1
(3) In the same way, upsample Feature_1 and Feature_2 and fuse them with Feature_3 to obtain the third scale, where Feature''_1 is the result of upsampling Feature'_1 again, Feature'_2 is the result of upsampling Feature_2, and β, γ are weighting coefficients (preferably β = 1, γ = 1):
Scale_III = Feature_3 + β · Feature'_2 + γ · Feature''_1
(4) Pass each of the three scale feature maps obtained above through a rectified linear unit (see: J. Bouvrie. Notes on convolutional neural networks, 2006) and output the fused features.
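A toy numpy version of the fusion in (1)-(4). Nearest-neighbour upsampling stands in here for the bilinear interpolation of the patent, and the constant-valued feature maps exist only to make the fused values easy to check.

```python
import numpy as np

def upsample2x(f):
    # Nearest-neighbour 2x upsampling (a stand-in for bilinear interpolation).
    return f.repeat(2, axis=0).repeat(2, axis=1)

alpha = beta = gamma = 1.0  # preferred weighting coefficients from the text

f1 = np.ones((16, 16))       # Feature_1: deepest 16x16 map
f2 = np.full((32, 32), 2.0)  # Feature_2: 32x32 map
f3 = np.full((64, 64), 3.0)  # Feature_3: 64x64 map

scale1 = np.maximum(f1, 0)                                    # Scale I
scale2 = np.maximum(alpha * f2 + upsample2x(f1), 0)           # Scale II
scale3 = np.maximum(f3 + beta * upsample2x(f2)
                    + gamma * upsample2x(upsample2x(f1)), 0)  # Scale III
```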
(S14) Predict candidate boxes with the sliding-window module.
On each of the three output scale feature maps, uniformly choose equally spaced pixels as anchor points and predict several regions of interest simultaneously at each anchor point; this sliding window is called the anchor box, as in the sliding-window modules of Figs. 5 and 6. The concrete operations are as follows.
(1) On the 512*512 input image, divide a grid every 8 pixels.
(2) The central pixel of each grid cell serves as an anchor point, and 5 candidate boxes are generated at each anchor point.
(3) The sizes of these 5 candidate boxes follow the size statistics of the targets in the training data: k-means clustering of the targets' widths and heights yields the candidate-box sizes that best match the ground-truth boxes. Preferably, the candidate-box sizes are (12, 7), (16, 14), (19, 8), (22, 16) and (27, 21).
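Step (3) can be sketched with plain k-means on (width, height) pairs. The data below are synthetic, and a production version would often use 1 − IoU rather than Euclidean distance; the patent does not specify the metric.

```python
import numpy as np

def kmeans_anchors(sizes, k, iters=50):
    """Cluster (w, h) pairs; centers are seeded from the area-sorted data
    so that small and large boxes initialise different clusters."""
    order = np.argsort(sizes[:, 0] * sizes[:, 1])
    centers = sizes[order[np.linspace(0, len(sizes) - 1, k).astype(int)]].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(sizes[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)            # nearest center per box
        for j in range(k):
            members = sizes[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

rng = np.random.default_rng(1)
sizes = np.vstack([rng.normal((12, 7), 1.0, (40, 2)),    # synthetic small boxes
                   rng.normal((22, 16), 1.5, (40, 2))])  # synthetic larger boxes
anchors = kmeans_anchors(sizes, k=2)
```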
(4) Map the anchor-point positions generated in (2) and the candidate-box sizes set in (3) from the 512*512 image onto the 16 × 16, 32 × 32 and 64 × 64 feature maps respectively.
(5) Apply region-of-interest pooling (ROI pooling; see: S. Ren, K. He, R. Girshick, et al. Faster R-CNN: towards real-time object detection with region proposal networks [C]. International Conference on Neural Information Processing Systems, MIT Press, 2015: 91-99) to the pixels inside each candidate box of each scale on each feature map, obtaining feature vectors of identical size as output.
(S15) Feed the feature vector of each candidate box into the fully connected network to compute its confidence and candidate-box deviations.
(S16) Correct the candidate-box positions according to the deviations to obtain target bounding boxes, apply non-maximum suppression (NMS) to all target bounding boxes, and output the target confidences and bounding boxes whose confidence exceeds a predetermined threshold. In the embodiment, non-maximum suppression is applied to all prediction results, screening out predicted target bounding boxes whose intersection-over-union exceeds a threshold; preferably the threshold is 0.3. The predicted target positions and their confidences are output as the final detection result.
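A minimal greedy NMS in the spirit of (S16). Boxes here are (x1, y1, x2, y2) corners, and the 0.3 overlap threshold is the embodiment's preferred value.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.3):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop every remaining box that overlaps it above the threshold, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order):
        i = order[0]
        keep.append(int(i))
        order = np.array([j for j in order[1:] if iou(boxes[i], boxes[j]) <= iou_thresh])
    return keep

boxes = np.array([[10, 10, 30, 30], [12, 12, 32, 32], [100, 100, 120, 120]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # -> [0, 2]: the two overlapping boxes collapse to one
```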
(S17) Compute the loss value from the detection results and the manual annotations; the loss is the sum of the classification error and the localization error.
Set the loss function. It consists of two parts, the classification error L_cls and the localization error L_loc; the overall formula is:
L = L_cls + L_loc (Formula 10)
(1) The classification error is defined by the log-likelihood loss:
L_cls = -Σ_i [y_i · log(p_i) + (1 - y_i) · log(1 - p_i)]
where p_i denotes the probability that the i-th reference region is judged to contain a target, and y_i indicates whether the i-th reference region actually contains a target (1 if present, 0 otherwise).
(2) The localization error L_loc takes the concrete form:
L_loc = Σ_i Σ_s L1(t_i^s - t*_i^s)
where L1(x) denotes the smooth L1 norm, L1(x) = 0.5x² for |x| < 1 and |x| - 0.5 otherwise; the parametrized coordinates of the candidate box and of the ground-truth target position annotated in the training sample are the four-dimensional vectors t_i and t*_i respectively; i indexes the i-th candidate or ground-truth box; s indexes an element of the four-dimensional vector (x, y, w, h); and t_i^s, t*_i^s are the corresponding elements of t_i and t*_i.
The elements of t_i and t*_i are defined as follows:
t_x = (x - x_a) / w_a, t_y = (y - y_a) / h_a, t_w = log(w / w_a), t_h = log(h / h_a)
Here t_x and t_y are the scale-invariant translations of x and y relative to the candidate box, and t_w and t_h are the width and height relative to the reference region in log space; t*_x, t*_y, t*_w, t*_h are defined analogously. x, y, w, h denote the center coordinates (x, y), width w and height h; x_a and x* denote the center abscissas of the candidate box and the ground-truth box; y_a and y* denote the center ordinates of the candidate box and of the annotated ground-truth position; the remaining parameters are defined in the same manner.
(3) Compute the loss of the output and back-propagate the error with stochastic gradient descent to train the whole network. The learning rate is set to 0.0001 and reduced to 0.00001 after 15000 iterations; training stops when the total loss stabilizes around 0.05 and no longer declines.
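The loss of (S17) can be sketched as below. The box encoding follows the t-vector formulas above; treating the classification term as a binary cross-entropy averaged over reference regions, and restricting the smooth-L1 term to positive regions, are standard readings that the patent does not spell out.

```python
import numpy as np

def smooth_l1(x):
    # Smooth L1 norm: 0.5*x^2 for |x| < 1, |x| - 0.5 otherwise.
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x * x, ax - 0.5)

def encode(box, anchor):
    # Parametrized coordinates t = (tx, ty, tw, th) relative to an anchor:
    # translations scaled by the anchor size, width/height in log space.
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return np.array([(x - xa) / wa, (y - ya) / ha, np.log(w / wa), np.log(h / ha)])

def detection_loss(p, y, t, t_star):
    """L = L_cls + L_loc: log-likelihood classification error plus
    smooth-L1 localization error over the positive regions."""
    eps = 1e-12
    l_cls = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    l_loc = np.sum(y[:, None] * smooth_l1(t - t_star)) / max(y.sum(), 1.0)
    return l_cls + l_loc

anchors = np.array([[10.0, 10.0, 20.0, 14.0], [50.0, 50.0, 20.0, 14.0]])
gt = np.array([12.0, 11.0, 22.0, 15.0])
t_star = np.vstack([encode(gt, a) for a in anchors])
p = np.array([0.99, 0.01])  # predicted objectness per region
y = np.array([1.0, 0.0])    # ground-truth labels
loss = detection_loss(p, y, t_star.copy(), t_star)  # perfect regression: only L_cls remains
```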
(S2) Acquire target images to be detected in real time, feed them into the UAV target detection network of step (S1), and output the UAV target detection results.
(S3) Combine the correlation-filter tracking algorithm to track the UAV target output by step (S2), recursively invoking the detection output to realize detection-while-tracking; Fig. 7 is the flow chart of the detection and tracking system.
(3.1) After system start-up, the video stream images are fed into the trained target detection network above. When a target is detected, the network outputs the UAV target position information (x, y, w, h), where x, y are the coordinates of the target center point and w, h are the width and height of the target detection rectangle.
(3.2) Pass the target position information from (3.1) and the next frame to the kernelized correlation filter (KCF) tracker (see: J. F. Henriques, R. Caseiro, P. Martins, et al. High-speed tracking with kernelized correlation filters [J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2014, 37(3): 583-596) and perform target tracking to obtain the tracking response value k. Compare k with the tracking decision threshold: if the tracker's output response k exceeds the threshold, tracking is deemed successful and the template is updated to the current tracking box; otherwise tracking has failed, i.e. the target is lost. Preferably, the tracking decision threshold is set to 0.3.
(3.3) If tracking fails, feed the next frame into the deep residual network for target detection to recapture the UAV target information.
(3.4) If tracking succeeds, feed the next frame into the KCF tracker and repeat step (3.2); after K consecutive frames, feed frame K+1 into the deep residual network for target detection to re-localize the target. Preferably, K is 5 frames.
(3.5) Repeat the cycle of (3.1)-(3.4) to realize real-time detection-while-tracking of small infrared UAV targets.
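The loop of (3.1)-(3.5) can be sketched as below. The detector and the KCF tracker are stubbed out with simple callables; `detect`, `track` and the frame source are placeholders, not the patent's implementation.

```python
def run_pipeline(frames, detect, track, response_thresh=0.3, K=5):
    """Alternate detection and tracking: re-detect on tracking failure
    (response below threshold) and re-localize every K tracked frames."""
    box, tracked, results = None, 0, []
    for frame in frames:
        if box is None or tracked >= K:
            box, tracked = detect(frame), 0       # (re)detect / re-localize
        else:
            new_box, response = track(frame, box)
            if response > response_thresh:        # tracking success
                box, tracked = new_box, tracked + 1
            else:                                 # target lost: fall back to detection
                box, tracked = detect(frame), 0
        results.append(box)
    return results

# Toy stand-ins: detection returns a fixed box, tracking always succeeds.
frames = list(range(12))
dets = run_pipeline(frames,
                    detect=lambda f: (10, 10, 20, 14),
                    track=lambda f, b: (b, 0.9))
```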
Fig. 8 shows the detection and tracking results of the method of the present invention against a mountain background. The figure gives the results for two targets: the system outputs over four consecutive frames, the leftmost frame being a detection result and the following three frames tracking results (the cross marks the center of the target's true position; the box marks the output detection/tracking result).
Fig. 9 shows the detection and tracking results of the method of the present invention against a building background. The figure gives the results for two targets: the system outputs over four consecutive frames, the leftmost frame being a detection result and the following three frames tracking results (the cross marks the center of the target's true position; the box marks the output detection/tracking result).
It should further be noted that the invention is not limited to the specific embodiment above; those skilled in the art may make any variation or improvement within the scope of the claims.
Claims (3)
1. A method for detecting and tracking small infrared unmanned aerial vehicle (UAV) targets under complex backgrounds, characterized by comprising the following steps:
(S1) obtaining training samples and training a deep convolutional neural network as the UAV target detection network;
(S2) acquiring target images to be detected in real time, feeding them into the UAV target detection network of step (S1), and outputting UAV target detection results;
(S3) tracking the UAV target output by step (S2) with a fast tracking method based on kernelized correlation filtering.
2. The method for detecting and tracking small infrared UAV targets under complex backgrounds of claim 1, characterized in that the detailed process of step (S1) is:
(S11) collecting an infrared image dataset, manually annotating the position and class of each UAV target in the dataset, and using the annotated infrared images as training samples;
(S12) extracting multi-scale feature maps of the infrared images with a deep convolutional neural network;
(S13) sampling and weighted-fusing the multi-scale feature maps to obtain fused feature maps;
(S14) sliding candidate boxes across the fused feature maps, treating the region covered by each candidate box as a region of interest and feeding it into a fully connected network, which outputs a five-dimensional feature vector comprising the probability that the region contains a target, the center-point abscissa deviation, the center-point ordinate deviation, the long-side deviation and the short-side deviation;
(S15) correcting the candidate-box positions according to the deviations to obtain target bounding boxes, applying non-maximum suppression to all target bounding boxes, and outputting the target confidences and bounding boxes whose confidence exceeds a predetermined threshold;
(S16) computing the loss value, defined as the sum of the classification error and the localization error; summing the loss over the ten most recent iterations and, if that sum is below a set threshold, ending training, otherwise returning to step (S11) to continue training the UAV target detection network.
3. The method for detecting and tracking small infrared UAV targets under complex backgrounds of claim 2, characterized in that step (S12) extracts the multi-scale feature maps of the infrared image with a deep convolutional neural network specifically by:
(S121) constructing residual layers based on batch normalization and random dropout;
(S122) stacking the layers to build a deep residual network;
(S123) feeding sample images into the above deep residual network for feature extraction and taking the feature maps of the last three layers as output.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910094311.1A | 2019-01-31 | 2019-01-31 | Target detection and tracking method for infrared small unmanned aerial vehicle under complex background |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109816695A true CN109816695A (en) | 2019-05-28 |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170132468A1 (en) * | 2015-11-06 | 2017-05-11 | The Boeing Company | Systems and methods for object tracking and classification |
CN107016689A (en) * | 2017-02-04 | 2017-08-04 | 中国人民解放军理工大学 | Scale-adaptive correlation filter target tracking method |
CN107229952A (en) * | 2017-06-01 | 2017-10-03 | 雷柏英 | Image recognition method and device |
CN108346159A (en) * | 2018-01-28 | 2018-07-31 | 北京工业大学 | Visual target tracking method based on tracking-learning-detection |
Non-Patent Citations (3)
Title |
---|
MINGJIE LAO ET AL: "Visual Target Detection and Tracking Framework Using Deep Convolutional Neural Networks for Micro Aerial Vehicles", 《2018 IEEE 14TH INTERNATIONAL CONFERENCE ON CONTROL AND AUTOMATION (ICCA)》 * |
SHAOQING REN ET AL: "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", 《CS.CV》 * |
XIN PENG ET AL: "Fast aircraft detection method based on multi-layer feature fusion of fully convolutional networks", 《ACTA OPTICA SINICA》 * |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110472679A (en) * | 2019-08-08 | 2019-11-19 | 桂林电子科技大学 | UAV tracking method and device based on Siamese network |
CN110532914A (en) * | 2019-08-20 | 2019-12-03 | 西安电子科技大学 | Building detection method based on fine-feature learning |
CN110488872A (en) * | 2019-09-04 | 2019-11-22 | 中国人民解放军国防科技大学 | Unmanned aerial vehicle real-time path planning method based on deep reinforcement learning |
CN110488872B (en) * | 2019-09-04 | 2023-03-07 | 中国人民解放军国防科技大学 | Unmanned aerial vehicle real-time path planning method based on deep reinforcement learning |
CN110632941A (en) * | 2019-09-25 | 2019-12-31 | 北京理工大学 | Trajectory generation method for target tracking of unmanned aerial vehicle in complex environment |
CN110796115B (en) * | 2019-11-08 | 2022-12-23 | 厦门美图宜肤科技有限公司 | Image detection method and device, electronic equipment and readable storage medium |
CN110796115A (en) * | 2019-11-08 | 2020-02-14 | 厦门美图之家科技有限公司 | Image detection method and device, electronic equipment and readable storage medium |
CN110955259A (en) * | 2019-11-28 | 2020-04-03 | 上海歌尔泰克机器人有限公司 | Unmanned aerial vehicle, tracking method thereof and computer-readable storage medium |
CN110955259B (en) * | 2019-11-28 | 2023-08-29 | 上海歌尔泰克机器人有限公司 | Unmanned aerial vehicle, tracking method thereof and computer readable storage medium |
CN111369589B (en) * | 2020-02-26 | 2022-04-22 | 桂林电子科技大学 | Unmanned aerial vehicle tracking method based on multi-strategy fusion |
CN111369589A (en) * | 2020-02-26 | 2020-07-03 | 桂林电子科技大学 | Unmanned aerial vehicle tracking method based on multi-strategy fusion |
CN111340850A (en) * | 2020-03-20 | 2020-06-26 | 军事科学院系统工程研究院系统总体研究所 | Ground target tracking method of unmanned aerial vehicle based on twin network and central logic loss |
WO2021211068A1 (en) * | 2020-04-15 | 2021-10-21 | Aselsan Elektroni̇k Sanayi̇ Ve Ti̇caret Anoni̇m Şi̇rketi̇ | A method for training shallow convolutional neural networks for infrared target detection using a two-phase learning strategy |
CN111611925A (en) * | 2020-05-21 | 2020-09-01 | 重庆现代建筑产业发展研究院 | Building detection and identification method and device |
CN111580560B (en) * | 2020-05-29 | 2022-05-13 | 中国科学技术大学 | Unmanned helicopter autonomous stunt flight method based on deep simulation learning |
CN111580560A (en) * | 2020-05-29 | 2020-08-25 | 中国科学技术大学 | Unmanned helicopter autonomous stunt flight method based on deep simulation learning |
CN112101113B (en) * | 2020-08-14 | 2022-05-27 | 北京航空航天大学 | Lightweight unmanned aerial vehicle image small target detection method |
CN112101113A (en) * | 2020-08-14 | 2020-12-18 | 北京航空航天大学 | Lightweight unmanned aerial vehicle image small target detection method |
CN112288657A (en) * | 2020-11-16 | 2021-01-29 | 北京小米松果电子有限公司 | Image processing method, image processing apparatus, and storage medium |
CN112288044A (en) * | 2020-12-24 | 2021-01-29 | 成都索贝数码科技股份有限公司 | News picture attribute identification method of multi-scale residual error network based on tree structure |
CN112767297A (en) * | 2021-02-05 | 2021-05-07 | 中国人民解放军国防科技大学 | Infrared unmanned aerial vehicle group target simulation method based on image derivation under complex background |
CN112767297B (en) * | 2021-02-05 | 2022-09-23 | 中国人民解放军国防科技大学 | Infrared unmanned aerial vehicle group target simulation method based on image derivation under complex background |
CN113240708A (en) * | 2021-04-22 | 2021-08-10 | 中国人民解放军32802部队 | Bilateral flow semantic consistency method for tracking unmanned aerial vehicle |
CN113313733A (en) * | 2021-05-19 | 2021-08-27 | 西华大学 | Hierarchical unmanned aerial vehicle target tracking method based on shared convolution |
CN113223053A (en) * | 2021-05-27 | 2021-08-06 | 广东技术师范大学 | Anchor-free target tracking method based on fusion of twin network and multilayer characteristics |
CN113313201A (en) * | 2021-06-21 | 2021-08-27 | 南京挥戈智能科技有限公司 | Multi-target detection and distance measurement method based on Swin Transformer and ZED camera |
CN114255407A (en) * | 2021-12-13 | 2022-03-29 | 中国电子科技集团公司第三十八研究所 | High-resolution-based anti-unmanned aerial vehicle multi-target identification and tracking video detection method |
CN115562330A (en) * | 2022-11-04 | 2023-01-03 | 哈尔滨工业大学 | Unmanned aerial vehicle control method for restraining wind disturbance of similar field |
CN115562330B (en) * | 2022-11-04 | 2023-08-22 | 哈尔滨工业大学 | Unmanned aerial vehicle control method for inhibiting wind disturbance of quasi-field |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109816695A (en) | Target detection and tracking method for infrared small unmanned aerial vehicle under complex background | |
CN110135267B (en) | Large-scene SAR image fine target detection method | |
CN112288008B (en) | Mosaic multispectral image disguised target detection method based on deep learning | |
CN107818326A (en) | Ship detection method and system based on multidimensional scene features | |
CN111723693B (en) | Crowd counting method based on small sample learning | |
Chen et al. | Research on recognition of fly species based on improved RetinaNet and CBAM | |
Bao et al. | Boosting ship detection in SAR images with complementary pretraining techniques | |
CN109255286A (en) | Fast optical detection and recognition method for unmanned aerial vehicles based on the YOLO deep learning network framework | |
CN115497005A (en) | YOLOV4 remote sensing target detection method integrating feature transfer and attention mechanism | |
Sun et al. | SPAN: Strong scattering point aware network for ship detection and classification in large-scale SAR imagery | |
CN113139489B (en) | Crowd counting method and system based on background extraction and multi-scale fusion network | |
CN111079739A (en) | Multi-scale attention feature detection method | |
Gong et al. | Local distinguishability aggrandizing network for human anomaly detection | |
CN112149591A (en) | SSD-AEFF automatic bridge detection method and system for SAR image | |
Chen et al. | Building area estimation in drone aerial images based on mask R-CNN | |
Ahmed et al. | An IoT‐based human detection system for complex industrial environment with deep learning architectures and transfer learning | |
CN112580480A (en) | Hyperspectral remote sensing image classification method and device | |
CN116563726A (en) | Remote sensing image ship target detection method based on convolutional neural network | |
CN116385873A (en) | SAR small target detection based on coordinate-aware attention and spatial semantic context | |
CN116363748A (en) | Power grid field operation integrated management and control method based on infrared-visible light image fusion | |
Cheng et al. | YOLOv3 Object Detection Algorithm with Feature Pyramid Attention for Remote Sensing Images. | |
Shen et al. | An improved UAV target detection algorithm based on ASFF-YOLOv5s | |
Yao et al. | Substation object detection based on enhance RCNN model | |
Bao et al. | Detecting Fine-Grained Airplanes in SAR Images With Sparse Attention-Guided Pyramid and Class-Balanced Data Augmentation | |
CN115331127A (en) | Unmanned aerial vehicle moving target detection method based on attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2019-05-28