CN107862705A - UAV small-target detection method based on motion features and deep learning features - Google Patents

UAV small-target detection method based on motion features and deep learning features

Info

Publication number
CN107862705A
Authority
CN
China
Prior art keywords
candidate
target
region
unmanned aerial vehicle (UAV)
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711166232.4A
Other languages
Chinese (zh)
Other versions
CN107862705B (en)
Inventor
高陈强
杜莲
王灿
冯琦
汤林
汪澜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN201711166232.4A
Publication of CN107862705A
Application granted
Publication of CN107862705B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a UAV small-target detection method based on motion features and deep learning features, belonging to the technical fields of image processing and computer vision. The input video data set is first processed by a video stabilization algorithm to compensate for camera motion, and motion candidate target regions are detected in the images. The video data set is divided into two parts, and the training data set is used to train an improved region proposal network model; this deep-feature-based region proposal network then generates candidate targets for the test-set video images. The candidate target regions from the two sources are fused. A dual-channel deep neural network model is trained on the training data set and used to classify the fused candidates. Finally, a target tracking method based on multi-layer deep features is applied to the recognition results of the previous step to obtain the final UAV positions. The invention can accurately detect UAVs in video images and supports subsequent research on intelligent UAV surveillance.

Description

UAV small-target detection method based on motion features and deep learning features
Technical field
The invention belongs to the technical fields of image processing and computer vision, and relates to a UAV small-target detection method based on motion features and deep learning features.
Background technology
At present, with the sharp increase in the availability and maturity of commercial UAVs, UAV sales are multiplying, and UAVs flying in public places have become commonplace. UAVs appear not only in the lenses of popular variety shows and at wedding proposals, but also spray pesticides over farmland, perform high-altitude cleaning in place of workers, and serve surveying and mapping, forest fire prevention, military reconnaissance, and so on. However, with the rapid development of UAVs, the dangers they cause are also growing, posing threats to public safety, privacy, and military security.
In recent years, UAV detection techniques can be roughly divided into acoustic detection, radio-frequency (RF) detection, radar detection, and visual detection. Acoustic detection uses a microphone array to sense the rotor noise of a flying UAV, matches the detected noise against a database of recorded UAV sounds, and identifies whether the noise belongs to a UAV, thereby judging whether a UAV is approaching. Acoustic detection is easily disturbed by ambient noise, and building a database of UAV acoustic signatures is extremely time-consuming. RF detection monitors radio frequencies within a certain band with a wireless receiver to detect UAVs; this method easily mistakes unknown RF transmitters for UAVs. Radar detection judges whether a target is a UAV by examining the electromagnetic waves it scatters and reflects; radar equipment is expensive in both cost and energy consumption, is susceptible to environmental influence, and produces blind zones. Visual detection generally uses one or more imaging devices and analyzes the image sequence with some method to determine whether a UAV is present. Vision-based UAV detection is not easily disturbed by ambient noise, can localize the UAV, can identify whether the UAV carries dangerous goods, and can even recover information such as the UAV's flight trajectory and speed. Therefore, vision-based detection has great advantages over the other means and can compensate for their deficiencies.
Research on vision-based UAV detection is currently limited. Clearly, detecting a UAV at a longer distance is more advantageous for evading the danger in advance. A UAV is small compared with targets such as pedestrians, aircraft, and vehicles; in particular, when imaged from far away its apparent size is very small, which makes vision-based UAV detection more difficult. Therefore, a detection algorithm that can effectively detect small UAV targets in video is currently needed.
Summary of the invention
In view of this, the object of the present invention is to provide a UAV small-target detection method based on motion features and deep learning features, which tracks UAVs with a target tracking algorithm to filter out false targets, improves the convolutional neural network structure in view of the small size of UAVs, applies deep learning to the small-target case, effectively detects UAVs in complex scenes, and improves the accuracy of UAV detection.
To achieve the above object, the present invention provides the following technical solution:
A UAV small-target detection method based on motion features and deep learning features, comprising the following steps:
S1: Process the input video data set with a video stabilization algorithm to compensate for camera motion;
S2: Detect motion candidate target regions I from the motion-compensated video images by low-rank matrix analysis, and remove tiny noise points in the motion candidate target regions I with an image post-processing module;
S3: Divide the video data set into a training set and a test set; train an improved region proposal network model on the training set, and generate candidate target regions II for the test-set video images with the improved region proposal network model;
S4: Fuse candidate target regions I and II to obtain candidate target regions III;
S5: Based on candidate target regions III, train a dual-channel deep neural network on the training set, and then apply the dual-channel deep neural network to classify the candidate targets of the test set;
S6: Predict the target position with correlation filtering, track and match stable targets, filter out false targets, and obtain the UAV positions.
Further, in step S1, the video stabilization algorithm comprises:
S11: Extract feature points from each frame image using the SURF algorithm;
S12: Compute the affine transformation model between two frames from the matched feature points of the two frame images;
S13: Compensate the current frame using the obtained affine transformation model.
Further, in step S2, detecting motion candidate target regions I by low-rank matrix analysis comprises the following steps:
S21: Vectorize the input video image sequence {f_1, f_2, ..., f_n} and stack the vectors as the columns of an image matrix C = [vec(f_1), vec(f_2), ..., vec(f_n)], where n is the number of video frames, f_n is the n-th frame image matrix, and vec(f_n) is f_n after vectorization;
S22: Decompose matrix C into a low-rank matrix L and a sparse matrix S by the RPCA algorithm, where the low-rank matrix L represents the background and the sparse matrix S represents the candidate moving targets;
S23: Filter noise from the candidate moving targets with morphological opening and closing operations, removing tiny noise points in the motion candidate regions.
Further, in step S3, the improved region proposal network model comprises five sequentially connected convolutional layers and two fully connected layers, with pooling layers between the first and second convolutional layers, between the second and third convolutional layers, and between the fifth convolutional layer and the first fully connected layer;
Step S3 is specifically:
S31: Divide the video data set into a training set and a test set;
S32: For the training-set data, extract the manually annotated positive samples from the images, then randomly sample some regions as negative samples;
S33: Train the improved region proposal network model with the positive and negative samples of the training set;
S34: Generate candidate target regions II for the test-set video images with the improved region proposal network model.
Further, in step S32 the range of widths and heights of the randomly sampled regions is determined by the widths and heights of the positive samples, and the overlap of a negative sample with any positive sample satisfies:

$$IoU=\frac{area(r_g)\cap area(r_n)}{area(r_g)\cup area(r_n)}<0.5$$

where IoU is the overlap ratio, r_g is a positive sample region, and r_n is a randomly sampled negative sample region.
Further, the fusion in step S4 to obtain candidate target regions III is specifically:
S41: Densely sample around candidate target regions I to obtain dense seed candidate regions;
S42: Compute the similarity between a dense seed candidate region and a candidate target region II, and merge the two candidate regions when

$$0.6<Sim=\frac{A\cap B}{A\cup B}<1$$

is satisfied, where Sim is the similarity of the dense seed candidate region A and the candidate target region II B;
S43: Traverse all candidate target regions I to obtain the final candidate target regions III.
Further, in step S5 the dual-channel deep neural network comprises a front-end module and a back-end module;
The front-end module consists of two parallel deep neural network models: one takes the candidate target region directly as input, passing it through 6 convolutional layers and 1 fully connected layer; the other takes as input an extended region built on the original image and centered on the candidate target region, likewise passing through 6 convolutional layers and 1 fully connected layer;
The back-end module takes the outputs of the two fully connected layers of the front-end module as input and, through 2 fully connected layers and 1 softmax layer, obtains the class of each candidate region as the final classification result;
Step S5 is specifically:
S51: Divide the training-set candidate target regions III obtained in step S4 into positive and negative samples, and feed them into the dual-channel deep neural network to train the optimal weights;
S52: Apply the optimal weights to classify the test-set candidate target regions obtained in step S4, producing the final recognition results.
Further, step S6 specifically comprises:
S61: Given the center (x_{t-1}, y_{t-1}) of the target in the frame preceding the current frame t, take the improved region proposal network model trained in step S5, sparsify the convolutional feature maps produced by its last three convolutional layers, and then extract the deep features of the target with the sparsified feature maps;
S62: Construct a correlation filter for the output features of each of the last three convolutional layers of the improved region proposal network model; from front to back, convolve each layer's features with the corresponding correlation filter and compute the corresponding confidence score f, thereby obtaining the new center (x_t, y_t) of the candidate target in the current frame;
S63: Extract deep features around the new center and update the parameters of the correlation filters;
S64: Considering the stability and continuity of UAV motion, filter out the trajectories of candidate target regions whose tracked frame count is below a threshold; the tracked targets finally retained are the UAV detection results.
Further, the steps of constructing a correlation filter are:
S621: Let the output feature be of size M × N × D with deep feature x, and construct the objective function of the correlation filter:

$$w^{*}=\underset{w}{\arg\min}\sum_{m,n}\left\|w\cdot x_{m,n}-y(m,n)\right\|^{2}+\lambda\|w\|_{2}^{2},\quad(m,n)\in\{0,1,\ldots,M-1\}\times\{0,1,\ldots,N-1\}$$

where w* is the optimal correlation filter, w is the correlation filter, x_{m,n} is the feature at pixel (m, n), λ (λ ≥ 0) is a regularization parameter, and y(m, n) is the label at pixel (m, n);
y(m, n) obeys a two-dimensional Gaussian distribution:

$$y(m,n)=\exp\left(-\frac{(m-M/2)^{2}+(n-N/2)^{2}}{2\sigma^{2}}\right)$$

where σ is the width of the Gaussian kernel;
S622: Transform the objective function into the frequency domain with the fast Fourier transform, obtaining the optimal solution of the objective function:

$$W^{d}=\frac{Y\odot\bar{X}^{d}}{\sum_{i=1}^{D}X^{i}\odot\bar{X}^{i}+\lambda}$$

where Y is the Fourier transform of y, ⊙ denotes the Hadamard product, W^d is the optimal solution, X is the Fourier transform of the deep feature x with the bar denoting complex conjugation, i indexes the i-th channel, and d is the channel index, d ∈ {1, 2, ..., D};
S623: Given the candidate target region of the next frame image, for the deep feature z of the candidate region the response map of the corresponding correlation filter is:

$$f(z)=\mathcal{F}^{-1}\left(\sum_{d=1}^{D}W^{d}\odot\bar{Z}^{d}\right)$$

where F^{-1} denotes the inverse Fourier transform and Z denotes the Fourier transform of the deep feature z.
Further, the correlation filter parameters updated in step S63 satisfy:

$$P_{t}^{d}=(1-\eta)P_{t-1}^{d}+\eta\,Y\odot\bar{X}_{t}^{d},\qquad Q_{t}^{d}=(1-\eta)Q_{t-1}^{d}+\eta\sum_{i=1}^{D}X_{t}^{i}\odot\bar{X}_{t}^{i},\qquad W_{t}^{d}=\frac{P_{t}^{d}}{Q_{t}^{d}+\lambda}$$

where P_t and Q_t are intermediate variables, W_t is the correlation filter for frame t after the update, t is the frame index, and η is the learning rate.
The beneficial effects of the present invention are:
1) The invention proposes a method for detecting UAVs based on UAV motion features and deep learning features. The method can effectively detect targets against complex backgrounds and when the UAV is small.
2) The method improves the traditional deep neural network structure, effectively addressing the unsuitability of existing deep-neural-network object detection algorithms for small targets.
3) The method proposes an online tracking algorithm based on multi-layer deep features and correlation filters, which can better track and predict UAV trajectories and filter out false targets.
Brief description of the drawings
To make the object, technical solution, and beneficial effects of the present invention clearer, the following drawings are provided for illustration:
Fig. 1 is a schematic diagram of the UAV small-target detection method based on motion features and deep learning features according to the invention;
Fig. 2 is a schematic diagram of the video stabilization algorithm;
Fig. 3 is a schematic diagram of the convolutional neural network structure;
Fig. 4 is a schematic diagram of generating candidate targets with the improved region proposal network;
Fig. 5 is a schematic diagram of the dual-channel deep neural network;
Fig. 6 is a schematic diagram of the online tracking algorithm based on deep features.
Embodiment
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
In the present invention, the motion-feature-based candidate target detection module performs video stabilization on the original video and then extracts moving target regions from the video by low-rank matrix analysis;
The deep-feature-based candidate target detection module extracts candidate targets from the video images with the improved region proposal network model;
The improved region proposal network modifies the network structure and the candidate-region scales of the traditional region proposal network, and changes the network layer from which the feature map is taken;
The candidate region fusion module fuses the candidate regions obtained in steps S2 and S3;
The dual-channel deep-neural-network candidate target recognition module improves the traditional deep neural network model for the characteristics of small targets, classifies the candidate regions, and obtains the final recognition results;
The deep-feature-based online tracking algorithm improves on traditional tracking algorithms based on hand-crafted features by using features extracted by a convolutional neural network, which are more robust.
Fig. 1 is a schematic diagram of the UAV small-target detection method based on motion features and deep learning features according to the invention. As shown in the figure, the method specifically comprises the following steps:
Step S1: The input original video data set is first processed by a video stabilization algorithm to compensate for camera motion; the flow is shown in Fig. 2:
S101: Extract key points from the images using the SURF algorithm and construct SURF feature point descriptors.
S102: Compute the Euclidean distances between corresponding feature points of the two frames, take the minimum distance, and set a threshold; when the distance between corresponding feature points is below the threshold, retain the match, otherwise reject it.
S103: Perform bidirectional matching between the two frames by repeating step S102 in the reverse direction; when a matched feature point pair is consistent with the result of step S102, it is kept as a final feature match.
S104: Model the camera motion as an affine transformation; using the feature matches obtained above, compute the affine transformation model between the two frames by least squares.
S105: Register the current frame to the chosen reference frame according to the obtained affine transformation model, save the compensated current frame into a new video, and finally obtain a stabilized video.
S106: Compute the drift between the compensated current frame and the reference frame; if the drift exceeds a threshold, set the current frame as the new reference frame, otherwise continue reading the next frame.
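For concreteness, a minimal OpenCV sketch of this stabilization loop (S101-S106) follows. It is illustrative only: ORB stands in for SURF (SURF requires the non-free opencv-contrib build), the cross-check matcher plays the role of the bidirectional matching of S103, and the distance and drift thresholds are assumed values rather than parameters given in the patent.

```python
import cv2
import numpy as np

def stabilize(frames, match_thresh=40.0, drift_thresh=10.0):
    """Register each frame to a reference frame with an affine model (S101-S106)."""
    orb = cv2.ORB_create(2000)  # stand-in for SURF key points + descriptors (S101)
    # crossCheck=True keeps only matches that agree in both directions (S103)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    ref = frames[0]
    ref_kp, ref_des = orb.detectAndCompute(ref, None)
    stabilized = [ref]
    for cur in frames[1:]:
        kp, des = orb.detectAndCompute(cur, None)
        # S102: keep only matches whose descriptor distance is below a threshold
        matches = [m for m in matcher.match(ref_des, des) if m.distance < match_thresh]
        src = np.float32([kp[m.trainIdx].pt for m in matches])      # current frame
        dst = np.float32([ref_kp[m.queryIdx].pt for m in matches])  # reference frame
        # S104: robust least-squares affine model between the two frames
        A, _ = cv2.estimateAffine2D(src, dst)
        # S105: warp the current frame onto the reference frame
        comp = cv2.warpAffine(cur, A, (cur.shape[1], cur.shape[0]))
        stabilized.append(comp)
        # S106: if the translation drift is too large, re-anchor the reference frame
        if np.hypot(A[0, 2], A[1, 2]) > drift_thresh:
            ref, ref_kp, ref_des = cur, kp, des
    return stabilized
```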
Step S2: Detect motion candidate target regions in the compensated video by low-rank matrix analysis, and remove tiny noise points in the motion candidate regions with an image post-processing module.
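Below is a minimal sketch of the low-rank/sparse decomposition of step S2 (corresponding to S21-S23), using the standard inexact augmented Lagrange multiplier (IALM) solver for RPCA; the solver constants, the binarization threshold, and the 3 × 3 morphological kernel are assumptions, not values fixed by the patent.

```python
import numpy as np
import cv2

def rpca_ialm(C, lam=None, tol=1e-7, max_iter=500):
    """Inexact ALM RPCA: C = L (low-rank background) + S (sparse motion) (S22)."""
    lam = lam or 1.0 / np.sqrt(max(C.shape))
    norm_two = np.linalg.norm(C, 2)
    Y = C / max(norm_two, np.abs(C).max() / lam)  # scaled dual variable
    mu, rho = 1.25 / norm_two, 1.5
    L, S = np.zeros_like(C), np.zeros_like(C)
    for _ in range(max_iter):
        # singular-value thresholding recovers the low-rank background term
        U, sig, Vt = np.linalg.svd(C - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # soft thresholding recovers the sparse moving-target term
        T = C - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Z = C - L - S
        Y += mu * Z
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(Z, 'fro') / np.linalg.norm(C, 'fro') < tol:
            break
    return L, S

def moving_regions(frames, thresh=25):
    """S21: stack vectorized frames as columns; S23: clean masks morphologically."""
    h, w = frames[0].shape
    C = np.stack([f.reshape(-1) for f in frames], axis=1).astype(np.float64)
    _, S = rpca_ialm(C)
    kernel = np.ones((3, 3), np.uint8)
    masks = []
    for j in range(S.shape[1]):
        m = (np.abs(S[:, j]).reshape(h, w) > thresh).astype(np.uint8) * 255
        m = cv2.morphologyEx(m, cv2.MORPH_OPEN, kernel)   # remove tiny noise points
        m = cv2.morphologyEx(m, cv2.MORPH_CLOSE, kernel)  # fill small holes
        masks.append(m)
    return masks
```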
Step S3: Divide the data set into a training set and a test set; train the improved region proposal network model on the training data set, and generate candidate targets for the test-set video images with the trained improved region proposal network. The improved region proposal network structure is shown in Fig. 4:
In step S3, the improved region proposal network model is trained with the positive and negative samples of the data set, and the trained network generates candidate targets for the test-set video images; this specifically includes the following process:
First, the traditional region proposal network structure is improved for the characteristics of UAVs, yielding the improved region proposal network: the proposal scales and the feature extraction are modified, and the network layer from which the feature map is taken is changed. Then the training data set is used to train the optimal weights of the improved region proposal network model. Finally, the optimal weights are applied to the test data set to obtain candidate target bounding boxes.
The improved region proposal network mainly adds two fully convolutional layers on top of a convolutional neural network (CNN): one is a region classification layer that judges whether a candidate region is a foreground target or background, the other is a box regression layer that predicts the position coordinates of the candidate region. The CNN consists of five convolutional layers, three pooling layers, and two fully connected layers, as shown in Fig. 3. A traditional region proposal network usually operates on the feature map produced by the last convolutional layer, but small targets depend more on shallow features, which have higher resolution; this method therefore takes the feature map from the fourth convolutional layer. The region proposal network slides a small network over the feature map output by the fourth convolutional layer; at each position this sliding network is fully connected to windows of 9 different scales on the feature map and mapped to a low-dimensional vector, which is finally fed to two fully connected layers that output the class and position of the candidate target. Unlike the traditional region proposal network, the 9 scales are reduced relative to the original ones, which benefits small-target detection.
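As an illustration of the reduced proposal scales, the following sketch generates the 9 anchor boxes tried at one sliding position. The concrete values (base size 16 with scales 2, 4, 8 instead of the customary 8, 16, 32) are assumptions chosen to show what "smaller scales for small targets" can look like; the patent does not state the exact numbers.

```python
import numpy as np

def anchors_at(cx, cy, base=16, ratios=(0.5, 1.0, 2.0), scales=(2, 4, 8)):
    """Return the 9 anchors (x1, y1, x2, y2) tried at one sliding-window position."""
    boxes = []
    for s in scales:
        for r in ratios:
            # keep the anchor area at (base*s)^2 while varying the aspect ratio r = h/w
            w = base * s / np.sqrt(r)
            h = base * s * np.sqrt(r)
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)

# with these reduced scales the largest square anchor is only 128 px on a side,
# compared with 512 px for the customary scales (8, 16, 32)
print(anchors_at(100, 100).round(1))
```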
Step S3 is specifically:
S31: Divide the video data set into a training set and a test set;
S32: For the training-set data, extract the manually annotated positive samples from the images, then randomly sample some regions as negative samples;
S33: Train the improved region proposal network model with the positive and negative samples of the training set;
S34: Generate candidate target regions II for the test-set video images with the improved region proposal network model.
When negative samples are drawn from an image in step S32, the range of widths and heights of the sampled regions is determined by the maximum (minimum) width and height of the positive samples, and the overlap ratio of a negative sample region with any positive sample region must satisfy:

$$IoU=\frac{area(r_g)\cap area(r_n)}{area(r_g)\cup area(r_n)}<0.5$$

where IoU is the overlap ratio, r_g is a positive sample region, and r_n is a randomly sampled negative sample region.
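A minimal sketch of this negative-sample acceptance test is given below; the helper names and the (x1, y1, x2, y2) box format are illustrative choices, not the patent's notation.

```python
def iou(a, b):
    """Jaccard overlap of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def is_valid_negative(candidate, positives):
    """Accept a randomly sampled region only if IoU < 0.5 with every positive sample."""
    return all(iou(candidate, p) < 0.5 for p in positives)
```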
Step S4: Densely sample around the candidate target regions obtained in step S2 to obtain denser candidate target boxes, then fuse them with the candidate targets obtained in step S3 to obtain the final candidate targets.
The specific fusion procedure includes:
S41: Take the motion candidate regions obtained in step S2 as seed candidate regions and sample densely around them to obtain dense seed candidate regions;
S42: Compute the similarity between each seed candidate region and the candidate regions obtained in step S3; when the similarity exceeds μ (μ ∈ [0.6, 1]), merge the two candidate regions; traverse all seed candidate regions to obtain the final candidate regions. The similarity Sim of region A and region B is computed as:

$$Sim=\frac{A\cap B}{A\cup B}$$
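The sketch below illustrates this fusion step under stated assumptions: dense sampling is reduced to small jittered copies of each motion seed, Sim is computed as the Jaccard overlap via the iou() helper from the sketch above, and "merging" is taken to mean keeping the union box of a matched pair; none of these concrete choices is fixed by the patent.

```python
import itertools

def densify(seed, steps=(-4, 0, 4)):
    """S41: jitter a seed box by a few pixels to obtain dense seed candidates."""
    x1, y1, x2, y2 = seed
    return [(x1 + dx, y1 + dy, x2 + dx, y2 + dy)
            for dx, dy in itertools.product(steps, steps)]

def fuse(motion_seeds, rpn_boxes, mu=0.6):
    """S42-S43: merge a dense seed with a network proposal when mu < Sim < 1."""
    fused = []
    for seed in motion_seeds:
        for dense in densify(seed):
            for box in rpn_boxes:
                sim = iou(dense, box)  # Sim is the Jaccard overlap of the two regions
                if mu < sim < 1.0:
                    fused.append((min(dense[0], box[0]), min(dense[1], box[1]),
                                  max(dense[2], box[2]), max(dense[3], box[3])))
    return fused
```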
Step S5: This method proposes a dual-channel deep neural network model for small-target detection; the model is trained on the training data set and then applied to classify the candidate targets of the test set. The structure of the dual-channel deep neural network model is shown in Fig. 5:
The dual-channel deep neural network model mainly consists of two parts, a front-end module and a back-end module. The front-end module consists of two parallel deep neural network models: one takes the candidate target region directly as input and produces a 4096-dimensional feature through 6 convolutional layers and 1 fully connected layer; the other takes as input an extended region of the original image, 4 times the target area and centered on the candidate target region, and likewise produces a 4096-dimensional feature through 6 convolutional layers and 1 fully connected layer. The back-end module concatenates the two 4096-dimensional features from the front-end module as input and, through 2 fully connected layers and 1 softmax layer, obtains the class of each candidate region as the final classification result.
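A minimal PyTorch sketch of this two-branch architecture follows. The patent fixes the branch depth (6 convolutional layers plus 1 fully connected layer producing a 4096-dimensional feature) and the 2-FC-plus-softmax head; the channel widths, 3 × 3 kernels, and 64 × 64 input crops below are assumptions.

```python
import torch
import torch.nn as nn

def branch():
    """One front-end branch: 6 conv layers + 1 FC layer -> 4096-d feature."""
    chans = [3, 32, 64, 64, 128, 128, 256]
    layers = []
    for cin, cout in zip(chans[:-1], chans[1:]):
        layers += [nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                   nn.MaxPool2d(2)]
    # six 2x poolings shrink a 64x64 crop to 1x1, so the FC input width is 256
    return nn.Sequential(*layers, nn.Flatten(),
                         nn.Linear(256, 4096), nn.ReLU(inplace=True))

class DualChannelNet(nn.Module):
    """Region branch + 4x-context branch, then 2 FC layers + softmax (Fig. 5)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.region_branch = branch()
        self.context_branch = branch()
        self.head = nn.Sequential(nn.Linear(2 * 4096, 1024), nn.ReLU(inplace=True),
                                  nn.Linear(1024, num_classes))

    def forward(self, region, context):
        feat = torch.cat([self.region_branch(region),
                          self.context_branch(context)], dim=1)
        return torch.softmax(self.head(feat), dim=1)  # train on raw logits instead

net = DualChannelNet()
probs = net(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))  # assumed crop size
```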
In step S5, the dual-channel deep neural network proposed for small-target detection is trained on the training data set and then applied to classify the candidate targets of the test set, specifically:
S51: Divide the training-set candidate target regions obtained in step S4 into positive and negative samples, and feed them into the dual-channel deep neural network to train the optimal weights.
S52: Apply the optimal weights to classify the test-set candidate target regions obtained in step S4, producing the final recognition results.
Step S6: The deep-feature-based target tracking method proposed here is applied to the recognition results of step S5: the target position is predicted with correlation filtering and stable targets are tracked and matched, so that false targets are filtered out and the final UAV positions are obtained. The flow of the deep-feature-based target tracking algorithm is shown in Fig. 6:
S601: Input the candidate target region of the frame preceding the current frame; using the neural network model trained in step S5, first sparsify the convolutional feature maps of the model's last three convolutional layers, then extract the deep features of the target with the sparsified feature maps;
S602: Construct a correlation filter for each of the above convolutional layer outputs; from front to back, convolve each layer's features with the corresponding correlation filter and compute the corresponding confidence score, thereby obtaining the new position of the candidate target in the current frame;
S603: Extract deep features around the new center of the candidate target to update the parameters of the correlation filters.
S604: Considering the stability and continuity of UAV motion, filter out the trajectories of candidate target regions whose tracked frame count is below a threshold; the tracked targets finally retained are the UAV detection results.
The threshold mentioned in step S604 takes values in the range 5-20.
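As a trivial sketch of the trajectory filtering in S604, assuming a track is represented as a list of per-frame centers:

```python
def filter_tracks(tracks, min_len=5):
    """S604: keep only targets tracked for at least min_len frames (threshold in 5-20)."""
    return [t for t in tracks if len(t) >= min_len]
```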
In step S6, constructing the corresponding correlation filter for an output feature of size M × N × D specifically includes:
First, let x be the deep feature of size M × N × D; the objective function of the corresponding correlation filter is:

$$w^{*}=\underset{w}{\arg\min}\sum_{m,n}\left\|w\cdot x_{m,n}-y(m,n)\right\|^{2}+\lambda\|w\|_{2}^{2}$$

where λ (λ ≥ 0) is a regularization parameter and y(m, n) is the label at pixel (m, n); the labels obey a two-dimensional Gaussian distribution:

$$y(m,n)=\exp\left(-\frac{(m-M/2)^{2}+(n-N/2)^{2}}{2\sigma^{2}}\right)$$

Then the objective function is transformed into the frequency domain with the fast Fourier transform, from which the optimal solution can be derived as:

$$W^{d}=\frac{Y\odot\bar{X}^{d}}{\sum_{i=1}^{D}X^{i}\odot\bar{X}^{i}+\lambda}$$

where Y is the Fourier transform of y and ⊙ denotes the Hadamard product;
Finally, given the candidate target region of the next frame image, for the deep feature z of the candidate region the response map of the corresponding correlation filter is:

$$f(z)=\mathcal{F}^{-1}\left(\sum_{d=1}^{D}W^{d}\odot\bar{Z}^{d}\right)$$

where F^{-1} denotes the inverse Fourier transform.
Further, in step S6, the update strategy for the correlation filter parameters W^d is:

$$P_{t}^{d}=(1-\eta)P_{t-1}^{d}+\eta\,Y\odot\bar{X}_{t}^{d},\qquad Q_{t}^{d}=(1-\eta)Q_{t-1}^{d}+\eta\sum_{i=1}^{D}X_{t}^{i}\odot\bar{X}_{t}^{i},\qquad W_{t}^{d}=\frac{P_{t}^{d}}{Q_{t}^{d}+\lambda}$$

where t is the frame index and η is the learning rate.
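A minimal NumPy sketch of this frequency-domain filter, its response map, and the running update follows; it mirrors the equations above on a single feature layer, with σ, λ, and η as assumed values.

```python
import numpy as np

def gaussian_label(M, N, sigma=2.0):
    """2-D Gaussian regression target y(m, n) centered at (M/2, N/2)."""
    m, n = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
    return np.exp(-((m - M / 2) ** 2 + (n - N / 2) ** 2) / (2 * sigma ** 2))

class LayerCF:
    """Multi-channel correlation filter for one feature layer: train, respond, update."""
    def __init__(self, lam=1e-4, eta=0.01):
        self.lam, self.eta = lam, eta

    def init(self, x):  # x: M x N x D deep feature of the target region
        M, N, _ = x.shape
        self.Y = np.fft.fft2(gaussian_label(M, N))
        X = np.fft.fft2(x, axes=(0, 1))
        self.P = self.Y[..., None] * np.conj(X)                    # numerator P^d
        self.Q = (X * np.conj(X)).sum(axis=2, keepdims=True).real  # denominator Q

    def respond(self, z):
        """Response map f(z); its peak gives the new target center (S62)."""
        Z = np.fft.fft2(z, axes=(0, 1))
        W = self.P / (self.Q + self.lam)
        resp = np.fft.ifft2((W * np.conj(Z)).sum(axis=2)).real
        return np.unravel_index(resp.argmax(), resp.shape)

    def update(self, x):
        """Running average of numerator and denominator with learning rate eta (S63)."""
        X = np.fft.fft2(x, axes=(0, 1))
        self.P = (1 - self.eta) * self.P + self.eta * self.Y[..., None] * np.conj(X)
        self.Q = (1 - self.eta) * self.Q \
            + self.eta * (X * np.conj(X)).sum(axis=2, keepdims=True).real

# cf = LayerCF(); cf.init(feat_prev); row, col = cf.respond(feat_cur); cf.update(feat_cur)
```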
Finally, it is noted that the above preferred embodiments are intended only to illustrate the technical solution of the invention, not to limit it. Although the invention has been described in detail through the above preferred embodiments, those skilled in the art will understand that various changes may be made to it in form and detail without departing from the scope defined by the claims of the invention.

Claims (10)

1. A UAV small-target detection method based on motion features and deep learning features, characterized in that the method comprises the following steps:
    S1: Process the input video data set with a video stabilization algorithm to compensate for camera motion;
    S2: Detect motion candidate target regions I from the motion-compensated video images by low-rank matrix analysis, and remove tiny noise points in the motion candidate target regions I with an image post-processing module;
    S3: Divide the video data set into a training set and a test set; train an improved region proposal network model on the training set, and generate candidate target regions II for the test-set video images with the improved region proposal network model;
    S4: Fuse candidate target regions I and II to obtain candidate target regions III;
    S5: Based on candidate target regions III, train a dual-channel deep neural network on the training set, and then apply the dual-channel deep neural network to classify the candidate targets of the test set;
    S6: Predict the target position with correlation filtering, track and match stable targets, filter out false targets, and obtain the UAV positions.
2. The UAV small-target detection method based on motion features and deep learning features according to claim 1, characterized in that in step S1 the video stabilization algorithm comprises:
    S11: Extract feature points from each frame image using the SURF algorithm;
    S12: Compute the affine transformation model between two frames from the matched feature points of the two frame images;
    S13: Compensate the current frame using the obtained affine transformation model.
3. The UAV small-target detection method based on motion features and deep learning features according to claim 1, characterized in that in step S2 detecting motion candidate target regions I by low-rank matrix analysis comprises the following steps:
    S21: Vectorize the input video image sequence {f_1, f_2, ..., f_n} and stack the vectors as the columns of an image matrix C = [vec(f_1), vec(f_2), ..., vec(f_n)], where n is the number of video frames, f_n is the n-th frame image matrix, and vec(f_n) is f_n after vectorization;
    S22: Decompose matrix C into a low-rank matrix L and a sparse matrix S by the RPCA algorithm, where the low-rank matrix L represents the background and the sparse matrix S represents the candidate moving targets;
    S23: Filter noise from the candidate moving targets with morphological opening and closing operations, removing tiny noise points in the motion candidate regions.
4. The UAV small-target detection method based on motion features and deep learning features according to claim 1, characterized in that in step S3 the improved region proposal network model comprises five sequentially connected convolutional layers and two fully connected layers, with pooling layers between the first and second convolutional layers, between the second and third convolutional layers, and between the fifth convolutional layer and the first fully connected layer;
    Step S3 is specifically:
    S31: Divide the video data set into a training set and a test set;
    S32: For the training-set data, extract the manually annotated positive samples from the images, then randomly sample some regions as negative samples;
    S33: Train the improved region proposal network model with the positive and negative samples of the training set;
    S34: Generate candidate target regions II for the test-set video images with the improved region proposal network model.
5. The UAV small-target detection method based on motion features and deep learning features according to claim 4, characterized in that in step S32 the range of widths and heights of the randomly sampled regions is determined by the widths and heights of the positive samples, and the overlap of a negative sample with any positive sample satisfies:
    $$IoU=\frac{area(r_g)\cap area(r_n)}{area(r_g)\cup area(r_n)}<0.5$$
    where IoU is the overlap ratio, r_g is a positive sample region, and r_n is a randomly sampled negative sample region.
6. The UAV small-target detection method based on motion features and deep learning features according to claim 4, characterized in that the fusion in step S4 to obtain candidate target regions III is specifically:
    S41: Densely sample around candidate target regions I to obtain dense seed candidate regions;
    S42: Compute the similarity between a dense seed candidate region and a candidate target region II, and when
    $$0.6<Sim=\frac{A\cap B}{A\cup B}<1$$
    is satisfied, merge the two candidate regions, where Sim is the similarity of the dense seed candidate region and the candidate target region II;
    S43: Traverse all candidate target regions I to obtain the final candidate target regions III.
7. The UAV small-target detection method based on motion features and deep learning features according to claim 6, characterized in that in step S5 the dual-channel deep neural network comprises a front-end module and a back-end module;
    The front-end module consists of two parallel deep neural network models: one takes the candidate target region directly as input, passing it through 6 convolutional layers and 1 fully connected layer; the other takes as input an extended region built on the original image and centered on the candidate target region, likewise passing through 6 convolutional layers and 1 fully connected layer;
    The back-end module takes the outputs of the two fully connected layers of the front-end module as input and, through 2 fully connected layers and 1 softmax layer, obtains the class of each candidate region as the final classification result;
    Step S5 is specifically:
    S51: Divide the training-set candidate target regions III obtained in step S4 into positive and negative samples, and feed them into the dual-channel deep neural network to train the optimal weights;
    S52: Apply the optimal weights to classify the test-set candidate target regions obtained in step S4, producing the final recognition results.
8. The UAV small-target detection method based on motion features and deep learning features according to claim 4, characterized in that step S6 specifically comprises:
    S61: Given the center (x_{t-1}, y_{t-1}) of the target in the frame preceding the current frame t, take the improved region proposal network model trained in step S5, sparsify the convolutional feature maps produced by its last three convolutional layers, and then extract the deep features of the target with the sparsified feature maps;
    S62: Construct a correlation filter for the output features of each of the last three convolutional layers of the improved region proposal network model; from front to back, convolve each layer's features with the corresponding correlation filter and compute the corresponding confidence score f, thereby obtaining the new center (x_t, y_t) of the candidate target in the current frame;
    S63: Extract deep features around the new center and update the parameters of the correlation filters;
    S64: Considering the stability and continuity of UAV motion, filter out the trajectories of candidate target regions whose tracked frame count is below a threshold; the tracked targets finally retained are the UAV detection results.
9. The UAV small-target detection method based on motion features and deep learning features according to claim 8, characterized in that the steps of constructing a correlation filter are:
    S621: Let the output feature be of size M × N × D with deep feature x, and construct the objective function of the correlation filter:
    $$w^{*}=\underset{w}{\arg\min}\sum_{m,n}\left\|w\cdot x_{m,n}-y(m,n)\right\|^{2}+\lambda\|w\|_{2}^{2},\quad(m,n)\in\{0,1,\ldots,M-1\}\times\{0,1,\ldots,N-1\}$$
    where w* is the optimal correlation filter, w is the correlation filter, x_{m,n} is the feature at pixel (m, n), λ (λ ≥ 0) is a regularization parameter, and y(m, n) is the label at pixel (m, n);
    y(m, n) obeys a two-dimensional Gaussian distribution:
    $$y(m,n)=\exp\left(-\frac{(m-M/2)^{2}+(n-N/2)^{2}}{2\sigma^{2}}\right)$$
    where σ is the width of the Gaussian kernel;
    S622: Transform the objective function into the frequency domain with the fast Fourier transform, obtaining the optimal solution of the objective function:

    $$W^{d}=\frac{Y\odot\bar{X}^{d}}{\sum_{i=1}^{D}X^{i}\odot\bar{X}^{i}+\lambda}$$

    where Y is the Fourier transform of y, ⊙ denotes the Hadamard product, W^d is the optimal solution, X is the Fourier transform of the deep feature x with the bar denoting complex conjugation, i indexes the i-th channel, and d is the channel index, d ∈ {1, 2, ..., D};
    S623: Given the candidate target region of the next frame image, for the deep feature z of the candidate region the response map of the corresponding correlation filter is:

    $$f(z)=\mathcal{F}^{-1}\left(\sum_{d=1}^{D}W^{d}\odot\bar{Z}^{d}\right)$$

    where F^{-1} denotes the inverse Fourier transform and Z denotes the Fourier transform of the deep feature z.
10. The UAV small-target detection method based on motion features and deep learning features according to claim 9, characterized in that the correlation filter parameters updated in step S63 satisfy:
    $$P_{t}^{d}=(1-\eta)P_{t-1}^{d}+\eta\,Y\odot\bar{X}_{t}^{d},\qquad Q_{t}^{d}=(1-\eta)Q_{t-1}^{d}+\eta\sum_{i=1}^{D}X_{t}^{i}\odot\bar{X}_{t}^{i},\qquad W_{t}^{d}=\frac{P_{t}^{d}}{Q_{t}^{d}+\lambda}$$
    where P_t and Q_t are intermediate variables, W_t is the correlation filter for frame t after the update, t is the frame index, and η is the learning rate.
CN201711166232.4A 2017-11-21 2017-11-21 Unmanned aerial vehicle small target detection method based on motion characteristics and deep learning characteristics Active CN107862705B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711166232.4A CN107862705B (en) 2017-11-21 2017-11-21 Unmanned aerial vehicle small target detection method based on motion characteristics and deep learning characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711166232.4A CN107862705B (en) 2017-11-21 2017-11-21 Unmanned aerial vehicle small target detection method based on motion characteristics and deep learning characteristics

Publications (2)

Publication Number Publication Date
CN107862705A (en) 2018-03-30
CN107862705B CN107862705B (en) 2021-03-30

Family

ID=61702397

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711166232.4A Active CN107862705B (en) 2017-11-21 2017-11-21 Unmanned aerial vehicle small target detection method based on motion characteristics and deep learning characteristics

Country Status (1)

Country Link
CN (1) CN107862705B (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991396A (en) * 2017-04-01 2017-07-28 南京云创大数据科技股份有限公司 A kind of target relay track algorithm based on wisdom street lamp companion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
张炜程 et al., "Smoke detection based on adaptive region growing against forest backgrounds", Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition) *
田超 et al., "Infrared dim small target detection based on singular value decomposition", Chinese Journal of Engineering Mathematics *
黄晓明 et al., "Text region localization in natural scenes", Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition) *

Cited By (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108549876A (en) * 2018-04-20 2018-09-18 重庆邮电大学 The sitting posture detecting method estimated based on target detection and human body attitude
CN110706193A (en) * 2018-06-21 2020-01-17 北京京东尚科信息技术有限公司 Image processing method and device
CN110633597A (en) * 2018-06-21 2019-12-31 北京京东尚科信息技术有限公司 Driving region detection method and device
CN110633597B (en) * 2018-06-21 2022-09-30 北京京东尚科信息技术有限公司 Drivable region detection method and device
CN108846522B (en) * 2018-07-11 2022-02-11 重庆邮电大学 Unmanned aerial vehicle system combined charging station deployment and routing method
CN108846522A (en) * 2018-07-11 2018-11-20 重庆邮电大学 UAV system combines charging station deployment and route selection method
CN109255286A (en) * 2018-07-21 2019-01-22 哈尔滨工业大学 A kind of quick detection recognition method of unmanned plane optics based on YOLO deep learning network frame
CN109255286B (en) * 2018-07-21 2021-08-24 哈尔滨工业大学 Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework
CN108960190A (en) * 2018-07-23 2018-12-07 西安电子科技大学 SAR video object detection method based on FCN Image Sequence Model
CN108960190B (en) * 2018-07-23 2021-11-30 西安电子科技大学 SAR video target detection method based on FCN image sequence model
CN109272530A (en) * 2018-08-08 2019-01-25 北京航空航天大学 Method for tracking target and device towards space base monitoring scene
US10719940B2 (en) 2018-08-08 2020-07-21 Beihang University Target tracking method and device oriented to airborne-based monitoring scenarios
CN109272530B (en) * 2018-08-08 2020-07-21 北京航空航天大学 Target tracking method and device for space-based monitoring scene
CN109325407A (en) * 2018-08-14 2019-02-12 西安电子科技大学 Optical remote sensing video object detection method based on F-SSD network filtering
CN109325407B (en) * 2018-08-14 2020-10-09 西安电子科技大学 Optical remote sensing video target detection method based on F-SSD network filtering
CN109145906A (en) * 2018-08-31 2019-01-04 北京字节跳动网络技术有限公司 The image of target object determines method, apparatus, equipment and storage medium
CN109325967A (en) * 2018-09-14 2019-02-12 腾讯科技(深圳)有限公司 Method for tracking target, device, medium and equipment
CN109325967B (en) * 2018-09-14 2023-04-07 腾讯科技(深圳)有限公司 Target tracking method, device, medium, and apparatus
CN109472191A (en) * 2018-09-17 2019-03-15 西安电子科技大学 A kind of pedestrian based on space-time context identifies again and method for tracing
CN109472191B (en) * 2018-09-17 2020-08-11 西安电子科技大学 Pedestrian re-identification and tracking method based on space-time context
CN109359545B (en) * 2018-09-19 2020-07-21 北京航空航天大学 Cooperative monitoring method and device under complex low-altitude environment
CN109359545A (en) * 2018-09-19 2019-02-19 北京航空航天大学 A kind of collaboration monitoring method and apparatus under complicated low latitude environment
CN109325490A (en) * 2018-09-30 2019-02-12 西安电子科技大学 Terahertz image target identification method based on deep learning and RPCA
CN109325490B (en) * 2018-09-30 2021-04-27 西安电子科技大学 Terahertz image target identification method based on deep learning and RPCA
CN111127509B (en) * 2018-10-31 2023-09-01 杭州海康威视数字技术股份有限公司 Target tracking method, apparatus and computer readable storage medium
CN111127509A (en) * 2018-10-31 2020-05-08 杭州海康威视数字技术股份有限公司 Target tracking method, device and computer readable storage medium
CN109410149B (en) * 2018-11-08 2019-12-31 安徽理工大学 CNN denoising method based on parallel feature extraction
CN109410149A (en) * 2018-11-08 2019-03-01 安徽理工大学 A kind of CNN denoising method extracted based on Concurrent Feature
CN109708659A (en) * 2018-12-25 2019-05-03 四川九洲空管科技有限责任公司 A kind of distributed intelligence photoelectricity low latitude guard system
CN109801317A (en) * 2018-12-29 2019-05-24 天津大学 The image matching method of feature extraction is carried out based on convolutional neural networks
CN109918988A (en) * 2018-12-30 2019-06-21 中国科学院软件研究所 A kind of transplantable unmanned plane detection system of combination imaging emulation technology
CN109859241B (en) * 2019-01-09 2020-09-18 厦门大学 Adaptive feature selection and time consistency robust correlation filtering visual tracking method
CN109859241A (en) * 2019-01-09 2019-06-07 厦门大学 Adaptive features select and time consistency robust correlation filtering visual tracking method
CN110287955A (en) * 2019-06-05 2019-09-27 北京字节跳动网络技术有限公司 Target area determines model training method, device and computer readable storage medium
CN110262529A (en) * 2019-06-13 2019-09-20 桂林电子科技大学 A kind of monitoring unmanned method and system based on convolutional neural networks
CN110262529B (en) * 2019-06-13 2022-06-03 桂林电子科技大学 Unmanned aerial vehicle monitoring method and system based on convolutional neural network
CN110414375A (en) * 2019-07-08 2019-11-05 北京国卫星通科技有限公司 Recognition methods, device, storage medium and the electronic equipment of low target
CN110706252B (en) * 2019-09-09 2020-10-23 西安理工大学 Robot nuclear correlation filtering tracking algorithm under guidance of motion model
CN110706252A (en) * 2019-09-09 2020-01-17 西安理工大学 Robot nuclear correlation filtering tracking algorithm under guidance of motion model
CN110631588A (en) * 2019-09-23 2019-12-31 电子科技大学 Unmanned aerial vehicle visual navigation positioning method based on RBF network
CN111006669A (en) * 2019-12-12 2020-04-14 重庆邮电大学 Unmanned aerial vehicle system task cooperation and path planning method
CN111247526B (en) * 2020-01-02 2023-05-02 香港应用科技研究院有限公司 Method and system for tracking position and direction of target object moving on two-dimensional plane
CN111247526A (en) * 2020-01-02 2020-06-05 香港应用科技研究院有限公司 Target tracking method and system using iterative template matching
CN111242974B (en) * 2020-01-07 2023-04-11 重庆邮电大学 Vehicle real-time tracking method based on twin network and back propagation
CN111242974A (en) * 2020-01-07 2020-06-05 重庆邮电大学 Vehicle real-time tracking method based on twin network and back propagation
CN111508002A (en) * 2020-04-20 2020-08-07 北京理工大学 Small-sized low-flying target visual detection tracking system and method thereof
CN111781599A (en) * 2020-07-16 2020-10-16 哈尔滨工业大学 SAR moving ship target speed estimation method based on CV-EstNet
CN112288655B (en) * 2020-11-09 2022-11-01 南京理工大学 Sea surface image stabilization method based on MSER region matching and low-rank matrix decomposition
CN112288655A (en) * 2020-11-09 2021-01-29 南京理工大学 Sea surface image stabilization method based on MSER region matching and low-rank matrix decomposition
CN112487892B (en) * 2020-11-17 2022-12-02 中国人民解放军军事科学院国防科技创新研究院 Unmanned aerial vehicle ground detection method and system based on confidence
CN114511793A (en) * 2020-11-17 2022-05-17 中国人民解放军军事科学院国防科技创新研究院 Unmanned aerial vehicle ground detection method and system based on synchronous detection and tracking
CN112487892A (en) * 2020-11-17 2021-03-12 中国人民解放军军事科学院国防科技创新研究院 Unmanned aerial vehicle ground detection method and system based on confidence
CN114511793B (en) * 2020-11-17 2024-04-05 中国人民解放军军事科学院国防科技创新研究院 Unmanned aerial vehicle ground detection method and system based on synchronous detection tracking
CN116954264B (en) * 2023-09-08 2024-03-15 杭州牧星科技有限公司 Distributed high subsonic unmanned aerial vehicle cluster control system and method thereof
CN116954264A (en) * 2023-09-08 2023-10-27 杭州牧星科技有限公司 Distributed high subsonic unmanned aerial vehicle cluster control system and method thereof
CN117079196A (en) * 2023-10-16 2023-11-17 长沙北斗产业安全技术研究院股份有限公司 Unmanned aerial vehicle identification method based on deep learning and target motion trail
CN117079196B (en) * 2023-10-16 2023-12-29 长沙北斗产业安全技术研究院股份有限公司 Unmanned aerial vehicle identification method based on deep learning and target motion trail

Also Published As

Publication number Publication date
CN107862705B (en) 2021-03-30

Similar Documents

Publication Publication Date Title
CN107862705A (en) A kind of unmanned plane small target detecting method based on motion feature and deep learning feature
Wu et al. Using popular object detection methods for real time forest fire detection
CN107016357B (en) Video pedestrian detection method based on time domain convolutional neural network
Zhao et al. SVM based forest fire detection using static and dynamic features
CN107292339A (en) The unmanned plane low altitude remote sensing image high score Geomorphological Classification method of feature based fusion
CN107767405A (en) A kind of nuclear phase for merging convolutional neural networks closes filtered target tracking
CN105760849B (en) Target object behavioral data acquisition methods and device based on video
CN106650600A (en) Forest smoke and fire detection method based on video image analysis
CN108346159A (en) A kind of visual target tracking method based on tracking-study-detection
CN111898504B (en) Target tracking method and system based on twin circulating neural network
Li et al. A method of cross-layer fusion multi-object detection and recognition based on improved faster R-CNN model in complex traffic environment
CN110533695A (en) A kind of trajectory predictions device and method based on DS evidence theory
CN108830188A (en) Vehicle checking method based on deep learning
CN108764142A (en) Unmanned plane image forest Smoke Detection based on 3DCNN and sorting technique
CN106845385A (en) The method and apparatus of video frequency object tracking
CN106485245A (en) A kind of round-the-clock object real-time tracking method based on visible ray and infrared image
CN107230267B (en) Intelligence In Baogang Kindergarten based on face recognition algorithms is registered method
KR102160591B1 (en) Fire situation generation system and its optimization method for fire situation detection model
CN110110649A (en) Alternative method for detecting human face based on directional velocity
CN106570490B (en) A kind of pedestrian&#39;s method for real time tracking based on quick clustering
CN108985169A (en) Across the door operation detection method in shop based on deep learning target detection and dynamic background modeling
CN107689052A (en) Visual target tracking method based on multi-model fusion and structuring depth characteristic
CN109255286A (en) A kind of quick detection recognition method of unmanned plane optics based on YOLO deep learning network frame
CN107025420A (en) The method and apparatus of Human bodys&#39; response in video
CN110348437A (en) It is a kind of based on Weakly supervised study with block the object detection method of perception

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant