CN109284704A - Complex background SAR vehicle target detection method based on CNN


Info

Publication number
CN109284704A
Authority
CN
China
Prior art keywords
frame
network model
target detection
layer
cnn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811047934.5A
Other languages
Chinese (zh)
Inventor
常沛
夏勇
吴涛
万红林
李玉景
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 38 Research Institute
Original Assignee
CETC 38 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 38 Research Institute filed Critical CETC 38 Research Institute
Priority to CN201811047934.5A priority Critical patent/CN109284704A/en
Publication of CN109284704A publication Critical patent/CN109284704A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a CNN-based method for detecting vehicle targets in complex-background SAR images, comprising the steps of: S1, acquiring image data and processing it to obtain a sample data set; S2, fusing ResNet with the Faster-RCNN framework to form a fusion framework, and retraining the fusion framework on the basis of pre-trained weights; S3, performing target detection and recognition on the image data using the retrained fusion framework. By fusing ResNet with the Faster-RCNN framework, the present invention realizes an end-to-end target detection process and, with the Faster-RCNN framework, fully automatic target detection, which facilitates engineering application; at the same time, the residual network model solves the degradation problem of deep convolutional network models and avoids the vanishing-gradient phenomenon of deep convolutional network models.

Description

Complex background SAR vehicle target detection method based on CNN
Technical field
The present invention relates to the technical field of vehicle target detection, and in particular to a CNN-based method for detecting vehicle targets in complex-background SAR images.
Background technique
The image characteristics of synthetic aperture radar (Synthetic Aperture Radar, SAR) images change greatly with imaging parameters, imaging attitude, ground environment and so on, which makes target detection and recognition in SAR images very difficult. Traditional algorithms based on the constant false alarm rate (Constant False Alarm Rate, CFAR) and its derivatives can separate targets from the background well with a detection threshold when the target-to-background contrast is high and the scene is simple, but their detection performance usually declines when facing clutter of many kinds and widely different scattering properties.
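For illustration of the CFAR baseline mentioned above, a minimal cell-averaging CFAR sketch in Python follows; the window sizes, the exponential clutter model behind the threshold factor and the false-alarm rate are assumptions made for illustration only, not parameters given in this document:

```python
import numpy as np

def ca_cfar_2d(img, guard=4, train=8, pfa=1e-4):
    """Minimal 2-D cell-averaging CFAR: for every pixel the clutter level is
    estimated from a ring of training cells around a guard window, and the
    pixel is declared a detection if it exceeds a scaled multiple of it."""
    h, w = img.shape
    half = guard + train
    n_train = (2 * half + 1) ** 2 - (2 * guard + 1) ** 2   # number of training cells
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)      # threshold factor (exponential clutter assumed)
    detections = np.zeros_like(img, dtype=bool)
    for r in range(half, h - half):
        for c in range(half, w - half):
            window = img[r - half:r + half + 1, c - half:c + half + 1]
            guard_win = img[r - guard:r + guard + 1, c - guard:c + guard + 1]
            clutter = (window.sum() - guard_win.sum()) / n_train
            detections[r, c] = img[r, c] > alpha * clutter
    return detections
```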
With the continuous development of artificial intelligence, deep learning methods have also been introduced into the field of SAR target detection. The convolutional neural network (Convolutional Neural Network, CNN) is a kind of artificial neural network (Artificial Neural Network, ANN); because its weight-sharing strategy reduces the number of parameters and it is highly invariant to translation, scaling, tilting and other deformations of the image, it is widely used in two-dimensional image target detection and recognition. A CNN can abstract features of different levels at different stages of learning, avoiding the manual design of features and classifiers required in conventional machine-learning algorithms. Its drawback is also obvious: the size of its input image must be fixed, so it cannot realize end-to-end target detection over large scenes.
Summary of the invention
To solve the above technical deficiencies, the technical solution adopted by the present invention is to provide a CNN-based method for detecting vehicle targets in complex-background SAR images, comprising the steps of:
S1: acquiring image data and processing it to obtain a sample data set;
S2: fusing ResNet with the Faster-RCNN framework to form a fusion framework, and retraining the fusion framework on the basis of pre-trained weights;
S3: performing target detection and recognition on the image data using the retrained fusion framework.
Preferably, the same scene is imaged by different radars, and the data are preprocessed and resampled to obtain image data with a sample spacing of 0.3 m in both azimuth and range.
Preferably, vehicle samples are manually extracted from the image data to obtain original samples; the original samples are augmented to form vehicle samples, and the vehicle samples are synthesized with background samples by random pasting to form the sample data set.
Preferably, the randomly initialized parameters in the ZF network model and the VGG-16 network model are pre-trained using the large-scale ILSVRC-2012 data set, and the randomly initialized parameters in the ResNet-50 network model are pre-trained using the ResNet-50 data set.
Preferably, on the basis of the pre-trained feature-extraction-layer parameters of the network model, the feature extraction layer, the candidate extraction layer and the classification and recognition layer in the fusion framework are retrained using the sample data set.
Preferably, in step S2 the Faster-RCNN framework includes a feature extraction layer, a region proposal network, an ROI pooling layer and a classification and recognition layer; the feature extraction layer extracts feature maps from the image data as the input of the classification and recognition layer; the region proposal network extracts candidate boxes by sliding a window over the last feature map output by the feature extraction layer; the ROI pooling layer collects the input feature maps and candidate boxes and extracts candidate-box feature maps by combining the feature maps with the candidate boxes; the classification and recognition layer classifies and recognizes the candidate-box feature maps and performs a second bounding-box regression; the Faster-RCNN framework uses either the ZF network model or the VGG network model.
Preferably, the region proposal network sets, for each pixel of the feature map, nine anchors of different scales and aspect ratios, and preliminarily obtains the candidate boxes in combination with a first bounding-box regression.
Preferably, in step S2 the ResNet-50 network model and the Faster RCNN framework are fused to form the fusion framework; the fusion framework processes the image data; the fusion framework obtains 50 layers of residual blocks from the ResNet-50 network model, the residual blocks comprising feature residual blocks and classification residual blocks, the feature residual blocks being set to 40 layers and the classification residual blocks to 10 layers.
Preferably, the feature extraction layer is set as the feature residual blocks, and the size of the feature map output by the feature extraction layer remains 1/16 of that of the image data.
Preferably, in the classification and recognition layer after the ROI pooling layer, the classification residual blocks replace the two fully connected layers of the ZF network model.
Compared with the prior art, the beneficial effects of the present invention are: 1. by fusing ResNet with the Faster-RCNN framework, the present invention realizes an end-to-end target detection process and, with the Faster-RCNN framework, fully automatic target detection, which facilitates engineering application; at the same time, the residual network model solves the degradation problem of deep convolutional network models and avoids the vanishing-gradient phenomenon of deep convolutional network models; 2. the ResNet-50 network model based on the Faster-RCNN framework not only obtains a good detection effect but, with the support of an NVIDIA Tesla K40m graphics card and CUDA+CUDNN GPU acceleration, the average time for target detection in a SAR image of about 2000*2000 pixels reaches 0.46 s, a very high level of real-time performance.
Brief description of the drawings
Fig. 1 is an exemplary diagram of the original samples;
Fig. 2 is an exemplary diagram of sample pictures and their labeled positions;
Fig. 3 is the framework diagram of Faster-RCNN;
Fig. 4 is a comparison diagram of transfer learning and direct training;
Fig. 5 is a graph of the training loss curves of the fusion framework;
Fig. 6 is a comparison diagram of the target detection results and the true annotation results.
Reference numerals in the figures:
1 - original sample; 2 - background sample; 3 - transfer-learning loss curve; 4 - direct-training loss curve.
Specific embodiment
The foregoing and additional technical features and advantages are described in more detail below in conjunction with the accompanying drawings.
Embodiment one
The CNN-based method of the present invention for detecting vehicle targets in complex-background SAR images comprises the steps of:
S1: acquiring image data and processing it to obtain a sample data set;
S2: fusing ResNet (a deep residual network model) with the Faster-RCNN framework to form a fusion framework, and retraining the fusion framework on the basis of pre-trained weights;
S3: performing target detection and recognition on the image data using the retrained fusion framework.
Specifically, in step S1 the same scene is imaged by an airborne X-band radar on different sorties, and the data are preprocessed and resampled so that the image data have a sample spacing of 0.3 m in both azimuth and range. Vehicle samples are manually extracted from the image data at a fixed size of 128 × 128 pixels, yielding an original-sample slice data set of 500 original samples 1 containing several vehicle types (truck, bus and crane), as shown in Fig. 1, which gives examples of the original samples; a, b and c in Fig. 1 are different original samples 1.
The original samples 1 are augmented by rotation, flipping and adding (multiplicative) noise, to increase the variety of samples in the original-sample slice data set and thereby form a vehicle sample data set containing a number of vehicle samples; several scenes (containing roads, buildings and other interference) are chosen as background samples 2, and the vehicle samples in the vehicle sample data set are synthesized with the background samples by random pasting, finally producing a vehicle sample database, i.e. the sample data set; the vehicle sample database generally comprises 7500 sample pictures of about 2000 × 2000 pixels.
Preferably, during synthesis 5-15 different vehicle samples are pasted at random into each background sample 2, as sketched below.
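A minimal Python sketch of this augmentation and random-pasting synthesis follows; the noise distribution, the absence of blending at the paste boundary and the sampling of paste positions are assumptions, since the document does not specify them:

```python
import numpy as np

def augment_chip(chip, rng):
    """Expand a 128x128 original vehicle slice by rotation, flipping and
    multiplicative noise (step S1); the gamma noise model is an assumption."""
    chip = np.rot90(chip, k=int(rng.integers(0, 4)))                 # rotation
    if rng.random() < 0.5:
        chip = np.fliplr(chip)                                       # flipping
    noise = rng.gamma(shape=4.0, scale=0.25, size=chip.shape)        # mean-1 multiplicative noise
    return chip * noise

def synthesize_scene(background, vehicle_chips, rng, n_min=5, n_max=15):
    """Paste 5-15 augmented vehicle chips into a background scene at random
    positions; returns the composite image and its bounding boxes."""
    scene = background.copy()
    boxes = []
    h, w = scene.shape
    for _ in range(int(rng.integers(n_min, n_max + 1))):
        chip = augment_chip(vehicle_chips[int(rng.integers(len(vehicle_chips)))], rng)
        ch, cw = chip.shape
        y = int(rng.integers(0, h - ch))
        x = int(rng.integers(0, w - cw))
        scene[y:y + ch, x:x + cw] = chip                             # random pasting, no blending
        boxes.append((x, y, x + cw, y + ch))                         # xmin, ymin, xmax, ymax
    return scene, boxes

# rng = np.random.default_rng(0); scene, boxes = synthesize_scene(bg, chips, rng)
```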
The format of the finally obtained vehicle sample database generally follows the VOC data format, i.e. it contains an image (JPEGImages) folder and an annotation (Annotations) folder; the image folder contains the synthesized sample pictures, and the annotation folder contains the xml files with the corresponding labeled bounding-box (rectangular selection box) positions. Fig. 2 is an exemplary diagram of sample pictures and bounding-box labeled positions; a and c in Fig. 2 are different sample pictures, b is an exemplary diagram of the bounding-box labeled positions of a, and d is an exemplary diagram of the bounding-box labeled positions of c. A sketch of writing such an annotation file follows.
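A minimal sketch of writing such a VOC-style annotation file is given below; the single class name "vehicle", the image depth and the xml fields included are assumptions for illustration:

```python
import xml.etree.ElementTree as ET

def write_voc_annotation(xml_path, image_name, width, height, boxes):
    """Write a minimal VOC-style annotation: one <object> per bounding box,
    with boxes given as (xmin, ymin, xmax, ymax) in pixels."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = image_name
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    ET.SubElement(size, "depth").text = "1"          # single-channel SAR amplitude assumed
    for xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = "vehicle"  # class name is a placeholder
        bndbox = ET.SubElement(obj, "bndbox")
        for tag, value in zip(("xmin", "ymin", "xmax", "ymax"), (xmin, ymin, xmax, ymax)):
            ET.SubElement(bndbox, tag).text = str(int(value))
    ET.ElementTree(root).write(xml_path)

# Example: write_voc_annotation("Annotations/scene_0001.xml", "scene_0001.png",
#                               2000, 2000, [(512, 640, 640, 768)])
```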
The training of a CNN network model requires the support of a large number of labeled samples, and in the field of SAR image detection and recognition the data sets used most at present are the MSTAR data set and the MiniSAR data set, owing to the limitation of data sources. Each image in the MSTAR data set is 128 × 128 pixels, and the data set contains targets of three major classes and seven models, with images of different aspect and pitch angles acquired for every target model. Because its image size is small and the background clutter interference is very small, the MSTAR data set is generally used for target recognition research and is not suitable for target detection in large-scene images. The image size of the MiniSAR data set is 2510 × 1638 pixels; its images contain targets of many attitudes and include a variety of interference (buildings, trees, etc.), but the amount of data is very small, so it cannot fully train a network model and can only serve as a verification set to check the prediction accuracy of a network model.
By fusing real vehicle slice samples with SAR background images to expand the samples in step S1, the present invention avoids the shortage of vehicle target samples for large-scene complex SAR images in the prior art and improves the accuracy of the CNN-based complex-background SAR vehicle target detection method of the present invention.
Embodiment two
As shown in Fig. 3, Fig. 3 is the framework diagram of Faster-RCNN. In step S2 the image data need to be processed by the Faster-RCNN framework.
Specifically, the Faster-RCNN (Faster Region CNN) framework mainly includes a feature extraction layer, a region proposal network (Region Proposal Network, RPN), an ROI pooling layer and a classification and recognition layer.
The feature extraction layer mainly includes multiple convolutional layers, activation layers and pooling layers; it extracts feature maps from the image data as the input of the classification and recognition layer. According to the network model depth, the feature extraction layer can use the ZF network model or the VGG network model: the VGG-16 network model includes 13 conv (convolutional) layers, 13 relu (activation) layers and 4 pooling layers, while the ZF network model includes 5 conv layers, 4 relu layers and 2 pooling layers; the size of the feature map output by both the ZF network model and the VGG network model is 1/16 of that of the image data.
The region proposal network extracts candidate boxes by sliding a window over the last feature map output by the feature extraction layer; nine anchors of different scales and aspect ratios are set for every pixel of the feature map, and the candidate boxes of the image data are preliminarily obtained in combination with a bounding-box regression. A sketch of this anchor generation follows.
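A minimal sketch of generating the nine anchors per feature-map pixel is given below; the particular scales and aspect ratios are the common Faster-RCNN defaults and are assumptions, since the document only states that nine anchors of different scales and aspect ratios are used:

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride=16,
                     scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0)):
    """Nine anchors (3 scales x 3 width/height ratios) centred on every
    feature-map pixel; the stride of 16 maps feature-map coordinates back to
    the input image.  Returns a (feat_h*feat_w*9, 4) array of boxes
    (x1, y1, x2, y2) in image coordinates."""
    base = []
    for s in scales:
        for r in ratios:
            w = stride * s * np.sqrt(r)     # ratio r is interpreted as width/height
            h = stride * s / np.sqrt(r)
            base.append([-w / 2.0, -h / 2.0, w / 2.0, h / 2.0])
    base = np.asarray(base)                                   # (9, 4) anchors at the origin
    xs = (np.arange(feat_w) + 0.5) * stride                   # anchor centres in image coords
    ys = (np.arange(feat_h) + 0.5) * stride
    cx, cy = np.meshgrid(xs, ys)
    shifts = np.stack([cx.ravel(), cy.ravel(), cx.ravel(), cy.ravel()], axis=1)
    return (shifts[:, None, :] + base[None, :, :]).reshape(-1, 4)
```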
The ROI pooling layer collects the input feature maps and candidate boxes, extracts candidate-box feature maps by combining the feature maps with the candidate boxes, and sends them to the subsequent classification and recognition layer to determine the target category.
The classification and recognition layer classifies and recognizes the candidate-box feature maps and performs a more accurate bounding-box regression, finally realizing target detection on the image data.
The Faster-RCNN framework unifies the four steps of target detection (candidate-region generation, feature extraction, classification and candidate-region regression) within the framework of one deep network model, which can speed up the detection and improve the detection accuracy of the CNN-based complex-background SAR vehicle target detection method of the present invention.
Embodiment three
Preferably, in this embodiment step S2 specifically fuses the ResNet-50 network model with the Faster RCNN framework to form the fusion framework; the fusion of the ResNet-50 network model and the Faster RCNN framework mainly includes:
obtaining 50 layers of residual blocks from the ResNet-50 network model and setting the feature extraction layer of the fusion framework to 40 layers of these residual blocks, the size of whose output feature map remains 1/16 of that of the image data;
the region proposal network extracts candidate boxes by sliding a window over the last feature map output by the feature extraction layer; nine anchors of different scales and aspect ratios are set for every pixel of the feature map, and the candidate boxes of the image data are preliminarily obtained in combination with a bounding-box regression;
the ROI pooling layer collects the input feature maps and candidate boxes, extracts candidate-box feature maps by combining the feature maps with the candidate boxes, and sends them to the subsequent classification and recognition layer to determine the target category;
in the classification and recognition layer after the ROI pooling layer, the remaining 10 layers of residual blocks replace the two fully connected layers of the ZF network model, finally forming the fusion framework that performs target detection on the image data. A sketch of this split of ResNet-50 follows.
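A minimal sketch of the 40-layer / 10-layer split of ResNet-50, written with torchvision for illustration (the torchvision API and the exact grouping at the conv4/conv5 boundary are assumptions consistent with the 40/10 split described above; in the actual fusion framework the final fully connected layer would be replaced by the detection class and box heads):

```python
import torch
import torch.nn as nn
import torchvision

# torchvision >= 0.13 weight-enum API assumed; ImageNet pre-trained weights
# stand in for the pre-training described in Embodiment four.
resnet = torchvision.models.resnet50(weights="IMAGENET1K_V1")

# Feature residual blocks: conv1 + layer1..layer3 = 1 + 3*3 + 4*3 + 6*3 = 40
# weight layers, overall stride 16, used as the fusion framework's feature
# extraction layer (its output stays 1/16 of the input size).
feature_extractor = nn.Sequential(
    resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
    resnet.layer1, resnet.layer2, resnet.layer3,
)

# Classification residual blocks: layer4 (3*3 = 9 weight layers) plus the final
# fully connected layer (10 layers in total), applied after ROI pooling in
# place of the two fully connected layers of the ZF head.
classification_head = nn.Sequential(resnet.layer4, resnet.avgpool, nn.Flatten(), resnet.fc)

x = torch.randn(1, 3, 512, 512)      # a single-channel SAR image would need the stem
feat = feature_extractor(x)          # adapted or the channel replicated
print(feat.shape)                    # torch.Size([1, 1024, 32, 32]) -> stride 16
```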
By fusing the ResNet-50 network model with the Faster RCNN framework, an end-to-end target detection process is realized, which guarantees the fully automatic target detection of the CNN-based complex-background SAR vehicle target detection method of the present invention and facilitates its engineering application; at the same time, the residual network model solves the degradation problem of deep convolutional network models, avoids their vanishing-gradient phenomenon and improves the detection effect.
Embodiment four
Preferably, training is performed using transfer learning, which mainly includes:
Pre-training: in step S2, the randomly initialized parameters in the ZF network model and the VGG-16 network model are pre-trained using the large-scale ILSVRC-2012 data set (ImageNet Large Scale Visual Recognition Challenge), and the randomly initialized parameters in the ResNet-50 network model are pre-trained using the ResNet-50 data set.
Retraining: on the basis of the pre-trained feature-extraction-layer parameters of the network model, the feature extraction layer, the candidate extraction layer and the classification and recognition layer are retrained using the sample data set.
Finally, the fusion framework after retraining completes target detection and recognition. A sketch of such a retraining setup follows.
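A minimal sketch of retraining on top of pre-trained feature-extraction-layer weights is given below; the optimizer, the learning rates and the parameter-name prefix "feature_extractor" are assumptions for illustration:

```python
import torch
import torch.nn as nn

def build_retraining_optimizer(model: nn.Module, head_lr=1e-3, backbone_lr=1e-4):
    """Retraining sketch: parameters whose names start with 'feature_extractor'
    (the pre-trained feature residual blocks) are fine-tuned with a smaller
    learning rate, while the randomly initialised RPN / classification and
    recognition layers are trained with the full rate."""
    backbone, heads = [], []
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue
        (backbone if name.startswith("feature_extractor") else heads).append(p)
    return torch.optim.SGD(
        [{"params": backbone, "lr": backbone_lr},
         {"params": heads, "lr": head_lr}],
        momentum=0.9, weight_decay=5e-4)
```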
Performing detection and recognition with the network model parameters obtained by the above pre-training and retraining gives a detection and recognition effect far better than random initialization, and greatly reduces the training cost of the network model.
As shown in Fig. 4, Fig. 4 is a comparison of the loss curves of transfer learning and direct training; it includes the transfer-learning loss curve 3 and the direct-training loss curve 4. Comparing the transfer-learning loss curve 3 with the direct-training loss curve 4, it can be seen that transfer learning obviously accelerates the convergence of the network model, and the loss value after convergence is smaller than that of direct training.
Embodiment five
Preferably, the mean average precision (mean average precision, mAP) is used as the evaluation criterion for the detection results of the network models. For a single class the average precision is the area under the precision-recall curve, AP = ∫₀¹ P(R) dR, and mAP is the mean of the AP values over all classes; here P is the precision ratio and R is the recall ratio.
As the integral of the curve drawn from the recall ratio and the precision ratio, mAP overcomes the single-point-value limitation of traditional measures such as the normalized recall rate, the precision ratio and the F-Score (the F score, a combined value of P and R); therefore, as a detection-effect evaluation criterion, mAP can assess the validity and accuracy of an algorithm more effectively and comprehensively. A sketch of computing AP from ranked detections follows.
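A minimal sketch of computing AP by integrating precision over recall is given below; it assumes detections have already been matched to the ground truth:

```python
import numpy as np

def average_precision(scores, is_true_positive, n_ground_truth):
    """AP sketch: rank detections by confidence, trace the precision-recall
    curve and integrate precision over recall.  mAP is the mean of the AP
    values over all classes."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(n_ground_truth, 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-12)
    recall_steps = np.diff(np.concatenate(([0.0], recall)))
    return float(np.sum(precision * recall_steps))   # approximates the integral of P(R) dR
```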
The mAP statistics of the three network models (the ZF network model, the VGG-16 network model and the fusion framework) over a set of 500 test samples are shown in Table 1:
Table 1: indicator statistics of the ZF network model, the VGG-16 network model and the fusion framework
By comparing experiments T1, T2 and T3, it can be found that the detection result of the fusion framework is the best, but its average detection time per image is also the longest. The detection time of the traditional two-parameter CFAR method on an image of about 2000*2000 pixels reaches the order of 10 s, whereas the ResNet-50 detection using GPU (Graphics Processing Unit) parallel computation takes only 0.46 s and has high real-time performance.
As shown in Fig. 5, Fig. 5 is the graph of the training loss curves of the fusion framework under 4 different scenes. It can be seen from Fig. 5 that the training loss basically converges and no longer declines after 2000 iterations, so the iteration is stopped at 3000 iterations and the network model weights at that point are used as the weights for testing. In the actual experiments, the target detection results of the present invention are compared statistically with the true annotation results (in the statistics, a detection box whose overlapping area with the true annotation box is greater than 50% of the target area is regarded as a valid detection), as shown in Fig. 6, the comparison diagram of the target detection results and the true annotation results; a, b, c and d in Fig. 6 are the comparisons of the target detection results and the true annotation results for four different scenes. It can be found from Fig. 6 that the precision ratio of the four scenes reaches 100%, the recall ratio reaches 95%, and the F-Score reaches 97%. A sketch of the overlap criterion used in these statistics follows.
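A minimal sketch of the overlap criterion used in these statistics is given below; measuring the overlap as the intersection area divided by the annotated target-box area is an assumption about how the >50% criterion is computed:

```python
def overlap_with_annotation(det_box, gt_box):
    """Overlap measure: intersection area divided by the annotated (ground
    truth) box area; boxes are (xmin, ymin, xmax, ymax)."""
    x1, y1 = max(det_box[0], gt_box[0]), max(det_box[1], gt_box[1])
    x2, y2 = min(det_box[2], gt_box[2]), min(det_box[3], gt_box[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    gt_area = (gt_box[2] - gt_box[0]) * (gt_box[3] - gt_box[1])
    return inter / gt_area if gt_area > 0 else 0.0

def is_valid_detection(det_box, gt_box, threshold=0.5):
    return overlap_with_annotation(det_box, gt_box) > threshold
```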
The foregoing are merely preferred embodiments of the present invention and are illustrative rather than restrictive. Those skilled in the art will understand that many changes, modifications and even equivalent substitutions may be made to the invention within the spirit and scope defined by its claims, all of which fall within the protection scope of the present invention.

Claims (10)

1. A CNN-based method for detecting vehicle targets in complex-background SAR images, characterized by comprising the steps of:
S1: acquiring image data and processing it to obtain a sample data set;
S2: fusing ResNet with the Faster-RCNN framework to form a fusion framework, and retraining the fusion framework on the basis of pre-trained weights;
S3: performing target detection and recognition on the image data using the retrained fusion framework.
2. The CNN-based complex-background SAR vehicle target detection method according to claim 1, characterized in that the same scene is imaged by different radars, and the data are preprocessed and resampled to obtain the image data with a sample spacing of 0.3 m in both azimuth and range.
3. The CNN-based complex-background SAR vehicle target detection method according to claim 2, characterized in that vehicle samples are manually extracted from the image data to obtain original samples; the original samples are augmented to form vehicle samples, and the vehicle samples are synthesized with background samples by random pasting to form the sample data set.
4. The CNN-based complex-background SAR vehicle target detection method according to claim 1, characterized in that the randomly initialized parameters in the ZF network model and the VGG-16 network model are pre-trained using the large-scale ILSVRC-2012 data set, and the randomly initialized parameters in the ResNet-50 network model are pre-trained using the ResNet-50 data set.
5. The CNN-based complex-background SAR vehicle target detection method according to claim 1, characterized in that, on the basis of the pre-trained feature-extraction-layer parameters of the network model, the feature extraction layer, the candidate extraction layer and the classification and recognition layer in the fusion framework are retrained using the sample data set.
6. The CNN-based complex-background SAR vehicle target detection method according to claim 1, characterized in that, in step S2, the Faster-RCNN framework includes a feature extraction layer, a region proposal network, an ROI pooling layer and a classification and recognition layer; the feature extraction layer extracts feature maps from the image data as the input of the classification and recognition layer; the region proposal network extracts candidate boxes by sliding a window over the last feature map output by the feature extraction layer; the ROI pooling layer collects the input feature maps and candidate boxes and extracts candidate-box feature maps by combining the feature maps with the candidate boxes; the classification and recognition layer classifies and recognizes the candidate-box feature maps and performs a second bounding-box regression; the Faster-RCNN framework uses either the ZF network model or the VGG network model.
7. The CNN-based complex-background SAR vehicle target detection method according to claim 6, characterized in that the region proposal network sets, for each pixel of the feature map, nine anchors of different scales and aspect ratios, and preliminarily obtains the candidate boxes in combination with a first bounding-box regression.
8. The CNN-based complex-background SAR vehicle target detection method according to claim 7, characterized in that, in step S2, the ResNet-50 network model and the Faster RCNN framework are fused to form the fusion framework; the fusion framework processes the image data; the fusion framework obtains 50 layers of residual blocks from the ResNet-50 network model, the residual blocks comprising feature residual blocks and classification residual blocks, the feature residual blocks being set to 40 layers and the classification residual blocks to 10 layers.
9. The CNN-based complex-background SAR vehicle target detection method according to claim 8, characterized in that the feature extraction layer is set as the feature residual blocks, and the size of the feature map output by the feature extraction layer remains 1/16 of that of the image data.
10. The CNN-based complex-background SAR vehicle target detection method according to claim 8, characterized in that, in the classification and recognition layer after the ROI pooling layer, the classification residual blocks replace the two fully connected layers of the ZF network model.
CN201811047934.5A 2018-09-07 2018-09-07 Complex background SAR vehicle target detection method based on CNN Pending CN109284704A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811047934.5A CN109284704A (en) 2018-09-07 2018-09-07 Complex background SAR vehicle target detection method based on CNN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811047934.5A CN109284704A (en) 2018-09-07 2018-09-07 Complex background SAR vehicle target detection method based on CNN

Publications (1)

Publication Number Publication Date
CN109284704A true CN109284704A (en) 2019-01-29

Family

ID=65183914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811047934.5A Pending CN109284704A (en) 2018-09-07 2018-09-07 Complex background SAR vehicle target detection method based on CNN

Country Status (1)

Country Link
CN (1) CN109284704A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108280460A (en) * 2017-12-04 2018-07-13 西安电子科技大学 Based on the SAR vehicle target recognition methods for improving convolutional neural networks
CN107945173A (en) * 2017-12-11 2018-04-20 深圳市宜远智能科技有限公司 A kind of skin disease detection method and system based on deep learning
CN108288271A (en) * 2018-02-06 2018-07-17 上海交通大学 Image detecting system and method based on three-dimensional residual error network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LI, HX et al.: "Learning Deep Appearance Feature for Multi-target Tracking", 2017 International Conference on Virtual Reality and Visualization (ICVRV 2017) *
SHAOQING REN等: "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *
PENG Gang et al.: "Improved target detection method for micro-operation systems based on region convolutional neural networks", Pattern Recognition and Artificial Intelligence *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109901129A (en) * 2019-03-06 2019-06-18 中国人民解放军海军航空大学 Object detection method and system in a kind of sea clutter
CN110136134A (en) * 2019-04-03 2019-08-16 深兰科技(上海)有限公司 A kind of deep learning method, apparatus, equipment and medium for road surface segmentation
CN110059654A (en) * 2019-04-25 2019-07-26 台州智必安科技有限责任公司 A kind of vegetable Automatic-settlement and healthy diet management method based on fine granularity identification
CN110110722A (en) * 2019-04-30 2019-08-09 广州华工邦元信息技术有限公司 A kind of region detection modification method based on deep learning model recognition result
CN110379178A (en) * 2019-07-25 2019-10-25 电子科技大学 Pilotless automobile intelligent parking method based on millimetre-wave radar imaging
CN110503643A (en) * 2019-08-23 2019-11-26 闽江学院 A kind of object detection method and device based on the retrieval of multiple dimensioned rapid scene
CN110503643B (en) * 2019-08-23 2021-10-01 闽江学院 Target detection method and device based on multi-scale rapid scene retrieval
CN112580408B (en) * 2019-09-30 2024-03-12 杭州海康威视数字技术股份有限公司 Deep learning model training method and device and electronic equipment
CN112580408A (en) * 2019-09-30 2021-03-30 杭州海康威视数字技术股份有限公司 Deep learning model training method and device and electronic equipment
CN110826457A (en) * 2019-10-31 2020-02-21 上海融军科技有限公司 Vehicle detection method and device under complex scene
CN110826457B (en) * 2019-10-31 2022-08-19 上海融军科技有限公司 Vehicle detection method and device under complex scene
CN110929632A (en) * 2019-11-19 2020-03-27 复旦大学 Complex scene-oriented vehicle target detection method and device
CN111209975A (en) * 2020-01-13 2020-05-29 北京工业大学 Ship target identification method based on multitask learning
CN111292349A (en) * 2020-01-17 2020-06-16 北京大学深圳研究生院 Data enhancement method for target detection based on fusion of recommendation candidate boxes
CN111292349B (en) * 2020-01-17 2023-04-18 北京大学深圳研究生院 Data enhancement method for target detection based on fusion of recommendation candidate boxes
CN111582339A (en) * 2020-04-28 2020-08-25 江西理工大学 Vehicle detection and identification method based on deep learning
CN111784031A (en) * 2020-06-15 2020-10-16 上海东普信息科技有限公司 Logistics vehicle classification prediction method, device, equipment and storage medium
CN112133100A (en) * 2020-09-16 2020-12-25 北京影谱科技股份有限公司 Vehicle detection method based on R-CNN
CN112133100B (en) * 2020-09-16 2022-04-22 北京影谱科技股份有限公司 Vehicle detection method based on R-CNN
CN112132032A (en) * 2020-09-23 2020-12-25 平安国际智慧城市科技股份有限公司 Traffic sign detection method and device, electronic equipment and storage medium
CN112734641A (en) * 2020-12-31 2021-04-30 百果园技术(新加坡)有限公司 Training method and device of target detection model, computer equipment and medium
CN112734641B (en) * 2020-12-31 2024-05-31 百果园技术(新加坡)有限公司 Training method and device for target detection model, computer equipment and medium
CN114463729A (en) * 2021-12-13 2022-05-10 浙江大华技术股份有限公司 Milk cow identification method and device
CN114973023A (en) * 2022-08-01 2022-08-30 中国科学院空天信息创新研究院 High-resolution SAR image vehicle target key part extraction method based on fast RCNN
CN114973023B (en) * 2022-08-01 2022-10-04 中国科学院空天信息创新研究院 High-resolution SAR image vehicle target key part extraction method based on fast RCNN

Similar Documents

Publication Publication Date Title
CN109284704A (en) Complex background SAR vehicle target detection method based on CNN
CN108510467B (en) SAR image target identification method based on depth deformable convolution neural network
CN106874889B (en) Multiple features fusion SAR target discrimination method based on convolutional neural networks
CN106156744B (en) SAR target detection method based on CFAR detection and deep learning
CN105809198B (en) SAR image target recognition method based on depth confidence network
CN105427314B (en) SAR image object detection method based on Bayes's conspicuousness
CN108009509A (en) Vehicle target detection method
CN108898065B (en) Deep network ship target detection method with candidate area rapid screening and scale self-adaption
CN108334848A (en) A kind of small face identification method based on generation confrontation network
CN103729854B (en) A kind of method for detecting infrared puniness target based on tensor model
CN110163275B (en) SAR image target classification method based on deep convolutional neural network
CN107247930A (en) SAR image object detection method based on CNN and Selective Attention Mechanism
CN113012150A (en) Feature-fused high-density rice field unmanned aerial vehicle image rice ear counting method
CN112257605B (en) Three-dimensional target detection method, system and device based on self-labeling training sample
CN110097575B (en) Target tracking method based on local features and scale pool
CN104573731A (en) Rapid target detection method based on convolutional neural network
CN109389080A (en) Hyperspectral image classification method based on semi-supervised WGAN-GP
CN103824079B (en) Multi-level mode sub block division-based image classification method
CN110647802A (en) Remote sensing image ship target detection method based on deep learning
CN113160062B (en) Infrared image target detection method, device, equipment and storage medium
CN109886147A (en) A kind of more attribute detection methods of vehicle based on the study of single network multiple-task
CN109635789B (en) High-resolution SAR image classification method based on intensity ratio and spatial structure feature extraction
CN105488541A (en) Natural feature point identification method based on machine learning in augmented reality system
CN106023134A (en) Automatic grain boundary extraction method for steel grain
CN104657717A (en) Pedestrian detection method based on layered kernel sparse representation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190129