CN109389101A - SAR image target recognition method based on a denoising autoencoder network - Google Patents

SAR image target recognition method based on a denoising autoencoder network

Info

Publication number
CN109389101A
CN109389101A (application CN201811302162.5A)
Authority
CN
China
Prior art keywords
feature
image
training
sift
intensive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811302162.5A
Other languages
Chinese (zh)
Inventor
漆进
秦金泽
胡顺达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201811302162.5A priority Critical patent/CN109389101A/en
Publication of CN109389101A publication Critical patent/CN109389101A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a SAR image target recognition method based on a denoising autoencoder network. The method comprises: denoising the image with the block-matching 3-D (BM3D) algorithm; extracting dense feature points with a sliding window from the original training images and from the denoised training images, and saving them in pairs by corresponding position as the input for later training of the deep denoising autoencoder; training the deep denoising autoencoder network with the SIFT features extracted from the denoised images as the clean input x and the corresponding SIFT features extracted from the non-denoised images as the noisy input x̃; computing the image feature-vector representation with a spatial pyramid matching (SPM) model and summarizing the local features with max pooling to obtain the final image description vector; and training the deep network on a large number of dense SIFT features, learning a high-level representation of the features with the deep denoising autoencoder network, and finally replacing the initial local features with the deep network features to train a linear support vector machine for classification and recognition.

Description

SAR image target recognition method based on a denoising autoencoder network
Technical field
The invention belongs to the field of synthetic aperture radar (SAR) image applications and relates to a SAR image target recognition method, in particular to a SAR image target recognition method based on a denoising autoencoder network.
Background art
Synthetic aperture radar (SAR), as an important remote sensing imaging sensor, is widely applied in fields such as environmental monitoring, resource surveying, and national defense. Faced with massive SAR data, identifying targets automatically, quickly, and accurately has become an important direction of current SAR image processing research and has attracted increasing attention.
In recent years deep learning has been widely applied in fields such as image processing and speech recognition. Its core idea is to build a deep neural network containing multiple hidden layers so as to extract high-level semantic information from signals such as images and speech, thereby improving the accuracy of later classification or recognition. By constructing a multi-layer network structure, a neural network can represent and classify the original image directly, layer by layer. Each network layer is equivalent to a feature extraction process, so the extraction of high-level features from the image is realized, but a large number of training samples is usually required to keep the model from overfitting.
Current SAR image target recognition methods mainly comprise methods based on template matching, methods based on pattern classification, and methods based on sparse representation. Template matching methods need to store a large number of templates in advance, so their space complexity is high, their performance is easily affected by SAR image quality, and their robustness is insufficient. Methods based on pattern classification require hand-designed features, and the assumptions behind the chosen classifier lead to large performance differences under different application backgrounds, which limits them. Sparse representation methods treat the sparse representation of the original image as a feature extraction process and can classify and recognize directly from that representation, effectively avoiding the difficulty of manual feature extraction; on the other hand, solving the sparse representation is computationally complex, heavy, and quite time-consuming.
The SAR image target recognition algorithms proposed so far are not general enough, and because SAR images contain a large amount of coherent speckle noise, a complicated and time-consuming image preprocessing step is usually required before target recognition; meanwhile, features designed for traditional optical images are not sufficiently stable and robust on SAR images. To solve the above problems of SAR target recognition, the present invention proposes a SAR image target recognition method based on a denoising autoencoder network. An autoencoder network pretrains the network through layer-by-layer unsupervised learning, which reduces overfitting of the model to a certain degree. The denoising autoencoder network extends the traditional autoencoder network by taking the noise of the input image into account and therefore has a certain robustness to noisy inputs. Considering the influence of the speckle noise peculiar to SAR, a deep neural network is constructed from denoising autoencoders, and the deep denoising autoencoder network is used to classify SAR image targets.
Summary of the invention
In view of the above deficiencies of the prior art, the technical problem to be solved by the present invention is how, for high-resolution SAR images, to construct a deep neural network from denoising autoencoders and use the deep denoising autoencoder network to classify SAR image targets.
To achieve the above object, the present invention provides a SAR image target recognition method based on a denoising autoencoder network, characterized by comprising:
(1) SAR image denoising: feature extraction is disturbed by coherent speckle noise, so the original training images are first preprocessed with a denoising algorithm;
(2) Dense scale-invariant local feature (SIFT) extraction: dense feature points are extracted from the original images of (1) and from the denoised training images, and saved in pairs by corresponding position as the input for later training of the deep denoising autoencoder. Test images are not denoised; dense SIFT features are extracted from them directly;
(3) Deep denoising autoencoder feature extraction: the deep denoising autoencoder network is trained with the SIFT features extracted from the denoised images as the clean input x and the corresponding SIFT features extracted from the non-denoised images as the noisy input x̃;
(4) Feature mapping: the image feature-vector representation is computed with a spatial pyramid matching (SPM) model, and the local features are summarized with max pooling to obtain the final image description vector;
(5) Classification and recognition: a linear support vector machine (SVM) is trained on the training-image feature set. For the test set, dense SIFT extraction, deep network feature conversion, and feature mapping are carried out in the same way, and classification is finally predicted with the trained SVM.
Further, in (1) the block-matching 3-D (BM3D) algorithm groups image sub-blocks with similar structure into three-dimensional arrays, filters them jointly in the transform domain, and obtains the denoised image by inverse transform.
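As an illustration only, the denoising step could be invoked as in the following Python sketch. It assumes the third-party `bm3d` package and its bm3d(image, sigma_psd) interface, which are not part of the invention; the noise level is an illustrative value, and any BM3D implementation with an equivalent interface would serve.

import numpy as np
import bm3d  # third-party BM3D implementation (assumed interface)

def denoise_sar(image: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    # Normalise the single-channel SAR chip to [0, 1] before filtering.
    img = image.astype(np.float32)
    img = img / (img.max() + 1e-12)
    # Group similar blocks into 3-D arrays, filter them jointly in the
    # transform domain, and inverse-transform back (all inside bm3d()).
    return bm3d.bm3d(img, sigma_psd=sigma)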
In (2), dense sampling is first carried out with a sliding-window method: a window size and sliding step are chosen, Gaussian blurring and orientation assignment are computed, and a 128-dimensional SIFT descriptor is extracted in each sliding window. For an M × N image, assuming scales S = {0, 1, 2}, the sliding window size and the corresponding sampling step are
win_size(s) = 16 × 2^s,
win_step(s) = 8 × 2^s.
After sliding the windows, the number of dense SIFT features extracted from one image is the total number of window positions over all scales,
N_SIFT = Σ_{s∈S} ((M − win_size(s)) / win_step(s) + 1) × ((N − win_size(s)) / win_step(s) + 1).
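A minimal sketch of this multi-scale dense sampling follows, assuming OpenCV's SIFT implementation (cv2.SIFT_create) as a stand-in for the dense 128-dimensional descriptor computation; the grid follows the window sizes and steps defined above.

import cv2
import numpy as np

def dense_sift(image: np.ndarray, scales=(0, 1, 2)) -> np.ndarray:
    # image: single-channel uint8 SAR chip of size M x N
    sift = cv2.SIFT_create()
    h, w = image.shape[:2]
    keypoints = []
    for s in scales:
        win, step = 16 * 2 ** s, 8 * 2 ** s
        # Place one keypoint at the centre of every sliding-window position.
        for y in range(win // 2, h - win // 2 + 1, step):
            for x in range(win // 2, w - win // 2 + 1, step):
                keypoints.append(cv2.KeyPoint(float(x), float(y), float(win)))
    # compute() returns one 128-D descriptor per supplied keypoint.
    _, descriptors = sift.compute(image, keypoints)
    return descriptors

# For a 64 x 64 chip this yields 7*7 + 3*3 + 1*1 = 59 descriptors,
# matching the feature count stated in the description.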
In (3), the deep denoising autoencoder network is trained with the SIFT features extracted from the denoised images as the clean input x and the SIFT features extracted from the corresponding non-denoised images as the noisy input x̃. The feature extraction algorithm proceeds as follows:
(1) Dense SIFT extraction: by the method of (2), multi-scale 128-dimensional SIFT features are extracted from the training images by dense sampling at equal intervals; for a 64×64 SAR image with scales [0, 1, 2], 59 dense SIFT features are extracted.
(2) Unsupervised pretraining: the network parameters are pretrained layer by layer, with the SIFT features after denoising as the clean input and the non-denoised SIFT features as the noisy input.
(3) Supervised fine-tuning: the last layer of the network serves as the label output layer, and the parameters of the whole network are fine-tuned with the back-propagation algorithm. The number of supervised fine-tuning iterations must not be too large, because a single SIFT feature contains insufficient target category information and too many fine-tuning iterations would drive the network parameters away from representing the original feature information; a better number of fine-tuning iterations is selected by cross-validation and early stopping.
(4) Feature extraction by forward propagation: for each sample image (training or test), the dense SIFT features are extracted and fed to the network as input, and forward propagation computes the response of the second-to-last layer, the one before the output layer. This response serves as the final deep feature representation and replaces the original dense SIFT feature as the input of the later classifier.
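The following PyTorch sketch illustrates steps (2)–(4) under stated assumptions: layer sizes, learning rates and epoch counts are illustrative, the 128-D descriptors are assumed scaled to [0, 1], and pretraining is shown jointly rather than strictly layer by layer for brevity. x_noisy holds dense SIFT features from the original images and x_clean the features from the corresponding denoised images.

import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    # Deep denoising autoencoder over 128-D dense SIFT descriptors.
    def __init__(self, dims=(128, 256, 128, 64)):
        super().__init__()
        enc, dec = [], []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            enc += [nn.Linear(d_in, d_out), nn.Sigmoid()]
        for d_in, d_out in zip(dims[::-1][:-1], dims[::-1][1:]):
            dec += [nn.Linear(d_in, d_out), nn.Sigmoid()]
        self.encoder, self.decoder = nn.Sequential(*enc), nn.Sequential(*dec)

    def forward(self, x_noisy):
        code = self.encoder(x_noisy)          # deep feature representation
        return self.decoder(code), code

def pretrain(model, x_noisy, x_clean, epochs=50, lr=1e-3):
    # Unsupervised pretraining: reconstruct the clean SIFT from the noisy SIFT.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        recon, _ = model(x_noisy)
        loss = nn.functional.mse_loss(recon, x_clean)
        opt.zero_grad(); loss.backward(); opt.step()

def finetune(model, head, x_noisy, labels, epochs=5, lr=1e-4):
    # Supervised fine-tuning with a label output layer, e.g. head = nn.Linear(64, n_classes);
    # kept to few iterations, as the description requires, to preserve the pretrained features.
    opt = torch.optim.Adam(list(model.encoder.parameters()) + list(head.parameters()), lr=lr)
    for _ in range(epochs):
        _, code = model(x_noisy)
        loss = nn.functional.cross_entropy(head(code), labels)
        opt.zero_grad(); loss.backward(); opt.step()

# Feature extraction replaces the raw SIFT with the penultimate-layer response:
#   deep_features = model.encoder(sift_batch)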
In (4), within the image sub-blocks of the same spatial-pyramid scale, the local features are summarized with max pooling, and the summarized features of the sub-blocks of every scale are then concatenated into the final image description vector. Let C be the set of feature codes generated in some sub-block,
C = [c_1, c_2, ..., c_S] ∈ R^(M×S),
where M is the feature vector dimension after coding and S is the number of feature vectors in that sub-block. The M-dimensional feature vector of each sub-block is obtained by max pooling:
f_i = max_j c_{i,j}, i = 1, ..., M, j = 1, ..., S.
After pooling, the multiple coded features of each sub-block are aggregated into one feature vector f, and the pooled feature vectors of all sub-blocks of the image are finally concatenated into one image description vector.
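A short numpy sketch of the pooling just described; C is assumed to be the M x S matrix of the S coded local features falling in one pyramid sub-block.

import numpy as np

def pool_cell(C: np.ndarray) -> np.ndarray:
    # f_i = max_j c_ij: one M-dimensional pooled vector per pyramid sub-block.
    return C.max(axis=1)

def image_descriptor(cells) -> np.ndarray:
    # Concatenate the pooled vectors of all sub-blocks of all pyramid scales.
    return np.concatenate([pool_cell(C) for C in cells])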
In (5), the deep network is trained on a large number of dense SIFT features. Considering the influence of SAR image speckle noise, a high-level representation of the features is learned with the deep denoising autoencoder network, and the initial local features are finally replaced by the deep network features to train a linear support vector machine for classification and recognition. Compared with traditional SAR image target recognition methods, the present invention is more effective and robust in the recognition process, with lower algorithm complexity.
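The classification step can be sketched with scikit-learn's LinearSVC; X_train, y_train and X_test are placeholders for the pooled image description vectors and their labels, and the regularisation value is an illustrative assumption.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
clf.fit(X_train, y_train)          # train on the deep-feature image descriptors
predictions = clf.predict(X_test)  # predicted target class of each test SAR image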
The concept, the concrete scheme, and the technical effects of the invention are further described below with reference to the accompanying drawings, so that the purpose, features, and effects of the present invention can be fully understood.
Description of the drawings
Fig. 1 is a framework diagram of the SAR image target recognition algorithm based on the denoising autoencoder network;
Fig. 2 is a flowchart of the denoising-autoencoder-network SAR image feature extraction algorithm.
Detailed description of the embodiments
Embodiments of the present invention will now be described with reference to the accompanying drawings.
As shown in Fig. 1, the principle of the SAR image target recognition algorithm of the invention is as follows:
(1) First, the block-matching 3-D (BM3D) algorithm groups image sub-blocks with similar structure into three-dimensional arrays, filters them jointly in the transform domain, and obtains the denoised image by inverse transform.
(2) Dense feature points are extracted with a sliding window from the original training images and from the denoised training images and saved in pairs by corresponding position, as the input for later training of the deep denoising autoencoder. Test images are not denoised; dense SIFT features are extracted from them directly. Assuming scales S = {0, 1, 2}, the sliding window size and the corresponding sampling step are
win_size(s) = 16 × 2^s,
win_step(s) = 8 × 2^s,
and after sliding the windows the number of dense SIFT features extracted from one image is
N_SIFT = Σ_{s∈S} ((M − win_size(s)) / win_step(s) + 1) × ((N − win_size(s)) / win_step(s) + 1).
(3) The deep denoising autoencoder network is trained with the SIFT features extracted from the denoised images as the clean input x and the corresponding SIFT features extracted from the non-denoised images as the noisy input x̃. The feature extraction algorithm proceeds as follows:
(1) Dense SIFT extraction: by the method of (2), multi-scale 128-dimensional SIFT features are extracted from the training images by dense sampling at equal intervals; for a 64×64 SAR image with scales [0, 1, 2], 59 dense SIFT features are extracted.
(2) Unsupervised pretraining: the network parameters are pretrained layer by layer, with the SIFT features after denoising as the clean input and the non-denoised SIFT features as the noisy input.
(3) Supervised fine-tuning: the last layer of the network serves as the label output layer, and the parameters of the whole network are fine-tuned with the back-propagation algorithm. The number of supervised fine-tuning iterations must not be too large, because a single SIFT feature contains insufficient target category information and too many fine-tuning iterations would drive the network parameters away from representing the original feature information; a better number of fine-tuning iterations is selected by cross-validation and early stopping.
(4) Feature extraction by forward propagation: for each sample image (training or test), the dense SIFT features are extracted and fed to the network as input, and forward propagation computes the response of the second-to-last layer, the one before the output layer. This response serves as the final deep feature representation and replaces the original dense SIFT feature as the input of the later classifier.
(4) Within the image sub-blocks of the same spatial-pyramid scale, the local features are summarized with max pooling, and the summarized features of the sub-blocks of every scale are then concatenated into the final image description vector.
(5) The deep network is trained on a large number of dense SIFT features. Considering the influence of SAR image speckle noise, a high-level representation of the features is learned with the deep denoising autoencoder network, and the initial local features are finally replaced by the deep network features to train a linear support vector machine for classification and recognition.
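Putting the embodiment together, a hedged end-to-end sketch might look as follows. denoise_sar, dense_sift, pool_cell and image_descriptor refer to the earlier sketches, assign_to_cells is a hypothetical helper that groups feature indices by spatial-pyramid sub-block, and encoder is the pretrained denoising-autoencoder encoder; none of these names come from the patent itself.

import numpy as np
import torch

def describe_image(image, encoder, assign_to_cells):
    feats = dense_sift(image)                                     # step (2): S x 128 dense SIFT
    with torch.no_grad():
        deep = encoder(torch.from_numpy(feats).float()).numpy()   # step (3): S x d deep features
    # step (4): max-pool the deep features inside each spatial-pyramid sub-block
    cells = [deep[idx].T for idx in assign_to_cells(image, feats)]
    return image_descriptor(cells)

# Training images are denoised first (step (1)); the resulting description
# vectors train the linear SVM of step (5), which then labels the test images.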
The preferred embodiments of the present invention have been described in detail above. It should be appreciated that those of ordinary skill in the art can make many modifications and variations according to the concept of the present invention without creative labour. Therefore, any technical scheme that a person skilled in the art can obtain from the prior art by logical analysis, reasoning, or limited experiment under the concept of the present invention shall fall within the scope of protection determined by the claims.

Claims (6)

1. A SAR image target recognition method based on multi-scale sparse representation, characterized by comprising:
Step (1), SAR image denoising: dense SIFT feature extraction is still disturbed by coherent speckle noise, so the original images are first preprocessed with a denoising algorithm;
Step (2), dense scale-invariant local feature (SIFT) extraction: dense feature points are extracted from each of the training images preprocessed in (1), and a training feature subset is composed by random sampling; for test images, dense SIFT features are likewise extracted and all features are retained for sparse coding;
Step (3), sparse representation of the features: a global multi-scale dictionary is learned from the training feature subset obtained in (2) with a multi-scale dictionary learning method; all dense SIFT features extracted from the images in (2) (training images and test images) are then sparsely coded to obtain the sparse representation of the features, and the SIFT features extracted at the corresponding positions of the original images are replaced by their sparse-representation features;
Step (4), feature mapping: after the feature extraction of (2) and the sparse representation of (3), each image yields one sparse feature vector at each location; the feature vectors are clustered, the image feature-vector representation is then computed with a spatial pyramid matching (SPM) model, and the local features are summarized with max pooling to obtain the final image description vector;
Step (5), linear support vector machine classification: after the final image description vectors of the training images are obtained in (4), a classifier is trained to carry out SAR image target recognition.
2. The method according to claim 1, wherein the block-matching 3-D (BM3D) algorithm groups image sub-blocks with similar structure into three-dimensional arrays, filters them jointly in the transform domain, and obtains the denoised image by inverse transform.
3. The method according to claim 1, wherein dense sampling is first carried out with a sliding-window method, different window sizes and sliding steps being selected so that dense SIFT features at multiple scales are extracted from the same image; Gaussian blurring and orientation assignment are then computed, and a 128-dimensional SIFT descriptor is extracted in each sliding window; for an M × N image, assuming scales S = {0, 1, 2}, the sliding window size and the corresponding sampling step are
win_size(s) = 16 × 2^s,
win_step(s) = 8 × 2^s,
and after sliding the windows the number of dense SIFT features extracted from one image is
N_SIFT = Σ_{s∈S} ((M − win_size(s)) / win_step(s) + 1) × ((N − win_size(s)) / win_step(s) + 1).
4. The method according to claim 1, wherein, so that the sparse representation likewise has a multi-scale property, multi-scale features are added during local feature extraction and the multi-scale features are then used as the input for training the dictionary, so that the multi-scale characteristics are learned;
the steps of the algorithm are as follows:
(1) Dense SIFT extraction: by the method of (2), multi-scale 128-dimensional SIFT features are extracted from the training images by dense sampling at equal intervals; for a 64×64 SAR image with scales [0, 1, 2], 59 dense SIFT features are extracted;
(2) Random down-sampling: the large number of SIFT descriptors extracted by dense sampling contains considerable redundancy, so they are randomly sampled according to a training-set sampling ratio to obtain a multi-scale training feature subset;
(3) Multi-scale dictionary learning: with the multi-scale dense SIFT feature set as input, a global multi-scale dictionary is learned with the RLS-DLA algorithm, yielding the multi-scale dictionary D;
(4) Sparse representation: for the training set, the original dense SIFT features are converted to their multi-scale sparse representations for later classifier training; for the test-set images, the dense SIFT features of every picture are likewise extracted and their sparse representations are solved with the multi-scale dictionary as the input of the later classifier.
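As an illustration of steps (3) and (4) of this claim, the sketch below uses scikit-learn's MiniBatchDictionaryLearning and sparse_encode in place of the RLS-DLA algorithm named above (a substitution, not the claimed algorithm); the number of atoms and the sparsity level are illustrative assumptions.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

def learn_dictionary(sift_subset: np.ndarray, n_atoms: int = 512) -> np.ndarray:
    # sift_subset: randomly down-sampled multi-scale 128-D dense SIFT features.
    dl = MiniBatchDictionaryLearning(n_components=n_atoms)
    dl.fit(sift_subset)
    return dl.components_                      # global multi-scale dictionary D

def sparse_represent(features: np.ndarray, D: np.ndarray, k: int = 10) -> np.ndarray:
    # Replace every dense SIFT feature by its k-sparse code over the dictionary D.
    return sparse_encode(features, D, algorithm="omp", n_nonzero_coefs=k)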
5. The method according to claim 1, wherein, within the image sub-blocks of the same spatial-pyramid scale, the local features are summarized with max pooling and the summarized features of the sub-blocks of every scale are then concatenated into the final image description vector; let C = [c_1, c_2, ..., c_S] ∈ R^(M×S) be the set of feature codes generated in some sub-block, where M is the sparse feature vector dimension after sparse coding and S is the number of sparse feature vectors in that sub-block; the M-dimensional feature vector of each sub-block is obtained by max pooling, f_i = max_j c_{i,j}, i = 1, ..., M; after pooling, the multiple sparse features of each sub-block are aggregated into one feature vector f, and the pooled feature vectors of all sub-blocks of the image are finally concatenated into one image description.
6. The method according to claim 1, wherein, because the local features have stronger expressive power after multi-scale sparse representation, only a simple linear support vector machine is needed in the later stage; the test set likewise goes through feature extraction, multi-scale sparse representation, and feature mapping, and prediction is then made with the trained support vector machine, realizing the recognition of the target.
CN201811302162.5A 2018-11-02 2018-11-02 SAR image target recognition method based on a denoising autoencoder network Pending CN109389101A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811302162.5A CN109389101A (en) 2018-11-02 2018-11-02 SAR image target recognition method based on a denoising autoencoder network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811302162.5A CN109389101A (en) 2018-11-02 2018-11-02 SAR image target recognition method based on a denoising autoencoder network

Publications (1)

Publication Number Publication Date
CN109389101A true CN109389101A (en) 2019-02-26

Family

ID=65427247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811302162.5A Pending CN109389101A (en) 2018-11-02 2018-11-02 SAR image target recognition method based on a denoising autoencoder network

Country Status (1)

Country Link
CN (1) CN109389101A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160078600A1 (en) * 2013-04-25 2016-03-17 Thomson Licensing Method and device for performing super-resolution on an input image
CN103886333A (en) * 2014-04-04 2014-06-25 武汉大学 Method for active spectral clustering of remote sensing images
CN104346630A (en) * 2014-10-27 2015-02-11 华南理工大学 Cloud flower identifying method based on heterogeneous feature fusion
CN104778476A (en) * 2015-04-10 2015-07-15 电子科技大学 Image classification method
CN106778768A (en) * 2016-11-22 2017-05-31 广西师范大学 Image scene classification method based on multi-feature fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
阮怀玉 (Ruan Huaiyu): "Research on SAR image target recognition based on sparse representation and deep learning", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919215A (en) * 2019-02-27 2019-06-21 中国电子科技集团公司第二十八研究所 The object detection method of feature pyramid network is improved based on clustering algorithm
CN109919215B (en) * 2019-02-27 2021-03-12 中国电子科技集团公司第二十八研究所 Target detection method for improving characteristic pyramid network based on clustering algorithm
CN109919870A (en) * 2019-03-05 2019-06-21 西安电子科技大学 A kind of SAR image speckle suppression method based on BM3D
CN116310399A (en) * 2023-03-22 2023-06-23 中南大学 AE-CNN-based high-dimensional feature map target identification method and system
CN116310399B (en) * 2023-03-22 2024-04-09 中南大学 AE-CNN-based high-dimensional feature map target identification method and system

Similar Documents

Publication Publication Date Title
CN109522857B (en) People number estimation method based on generation type confrontation network model
He et al. DABNet: Deformable contextual and boundary-weighted network for cloud detection in remote sensing images
Zhu et al. Intelligent logging lithological interpretation with convolution neural networks
Tang et al. Compressed-domain ship detection on spaceborne optical image using deep neural network and extreme learning machine
CN110135267A (en) A kind of subtle object detection method of large scene SAR image
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN109086773A (en) Fault plane recognition methods based on full convolutional neural networks
CN105550678A (en) Human body motion feature extraction method based on global remarkable edge area
CN105574820A (en) Deep learning-based adaptive ultrasound image enhancement method
CN110097075A (en) Ocean mesoscale eddy classifying identification method based on deep learning
CN105809198A (en) SAR image target recognition method based on deep belief network
CN109492570A (en) A kind of SAR image target recognition method based on multiple dimensioned rarefaction representation
CN109002848B (en) Weak and small target detection method based on feature mapping neural network
CN101303764A (en) Method for self-adaption amalgamation of multi-sensor image based on non-lower sampling profile wave
CN112950780B (en) Intelligent network map generation method and system based on remote sensing image
CN109389101A (en) A kind of SAR image target recognition method based on denoising autoencoder network
CN107909109A (en) SAR image sorting technique based on conspicuousness and multiple dimensioned depth network model
CN107452022A (en) A kind of video target tracking method
CN109635726B (en) Landslide identification method based on combination of symmetric deep network and multi-scale pooling
CN107977661A (en) The region of interest area detecting method decomposed based on full convolutional neural networks and low-rank sparse
CN108122221A (en) The dividing method and device of diffusion-weighted imaging image midbrain ischemic area
CN103985143A (en) Discriminative online target tracking method based on videos in dictionary learning
CN116206185A (en) Lightweight small target detection method based on improved YOLOv7
CN114241422A (en) Student classroom behavior detection method based on ESRGAN and improved YOLOv5s
CN106251375A (en) A kind of degree of depth study stacking-type automatic coding of general steganalysis

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190226