CN109492570A - SAR image target recognition method based on multi-scale sparse representation - Google Patents
SAR image target recognition method based on multi-scale sparse representation Download PDF Info
- Publication number
- CN109492570A, CN201811303259.8A
- Authority
- CN
- China
- Prior art keywords
- feature
- image
- scale
- sparse representation
- dense
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/28—Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/513—Sparse representations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The invention discloses a SAR image target recognition method based on multi-scale sparse representation. The method comprises: denoising the image with the block-matching 3-D (BM3D) algorithm; extracting dense SIFT features at multiple scales with sliding windows; learning a global multi-scale dictionary D with the RLS-DLA algorithm and converting the original dense SIFT features into multi-scale sparse representations; computing the image feature vector expression with a spatial pyramid matching (SPM) model and aggregating the local features by max pooling to obtain the final image description vector; and training a classifier to perform SAR image target recognition. Compared with traditional SAR image target recognition methods, the present invention achieves higher effectiveness and robustness in recognition with lower algorithmic complexity.
Description
Technical field
The invention belongs to the field of synthetic aperture radar (SAR) image applications and relates to a SAR image target recognition method, in particular to a SAR image target recognition method based on multi-scale sparse representation.
Background art
Synthetic aperture radar (SAR), as an important remote sensing imaging sensor, is widely applied in fields such as environmental monitoring, resource survey, and national defense. Faced with massive SAR data, identifying targets automatically, quickly, and accurately has become an important direction of current SAR image processing research and has attracted increasing attention.
Current SAR image target recognition methods fall broadly into three classes: methods based on template matching, methods based on pattern classification, and methods based on sparse representation.
Template matching methods construct template images from labeled training images and store the templates in a database. To predict a new image, the given test image is matched against all templates in the database under a defined similarity criterion, and the label of the most similar template is taken as the class of the test image. This is simple and convenient, but because a large number of templates must be stored in advance, the space complexity of the algorithm is high; performance is also easily affected by SAR image quality, so the algorithm is not robust enough.
Methods based on pattern classification generally first perform feature extraction and selection on the image and then train a classifier on the features. The quality of the features greatly influences classification. However, hand-designing well-performing features is very difficult, and feature extraction is vulnerable to noise, azimuth variation, and the like, generally requiring complicated preprocessing. Meanwhile, training a classifier generally requires a sufficiently rich data set, and the performance of different classifiers varies considerably across application backgrounds. The difficulty of feature design and the conditional assumptions behind classifier selection limit traditional pattern-classification-based SAR image target recognition methods.
Sparse representation methods generally comprise two steps: dictionary learning and sparse coding. First, for a specific data set, an over-complete dictionary is obtained by learning; this process is called dictionary learning. Given the over-complete dictionary, the sparse representation of a signal is solved with, for example, the orthogonal matching pursuit (OMP) algorithm; this step is called sparse coding. By learning, sparse representation adaptively seeks an expression of the original input, which is equivalent to a linear transformation of the original signal. In image recognition, sparsely coding the original image is itself a feature extraction process: classification and recognition can be carried out directly on the sparse representation, effectively avoiding the difficulty of manual feature extraction. On the other hand, solving for the sparse representation is computationally complex and quite time-consuming.
Existing SAR image target recognition algorithms are not general enough, and because SAR images contain a large amount of coherent speckle noise, a complicated and time-consuming preprocessing stage is usually necessary before target recognition. Meanwhile, feature extraction developed for traditional optical images is not sufficiently stable and robust on SAR images. To address these problems of SAR target recognition, the present invention proposes a SAR image target recognition method based on multi-scale sparse representation: multi-scale features are extracted first, sparse image features are obtained at the feature level by sparse representation, and SAR image target classification is finally carried out with the multi-scale sparse representation features.
Summary of the invention
In view of the above deficiencies of the prior art, the technical problem to be solved by the present invention is how to perform multi-scale feature extraction on high-resolution SAR images, obtain sparse image features by sparse representation, and then carry out SAR image target classification with the multi-scale sparse representation features.
To achieve the above object, the present invention provides a SAR image target recognition method based on multi-scale sparse representation, characterized by comprising:
(1) SAR image denoising: dense SIFT feature extraction is still disturbed by coherent speckle noise, so the original image is first preprocessed with a denoising algorithm;
(2) Extracting dense scale-invariant feature transform (SIFT) features: for each training image preprocessed in (1), dense feature points are extracted and a training feature subset is formed by random sampling; for test images, dense SIFT features are likewise extracted and all features are retained for sparse coding;
(3) Sparse representation of features: a global multi-scale dictionary is learned from the training feature subset obtained in (2) with a multi-scale dictionary learning method; then all dense SIFT features extracted in (2) (training and test images) are sparsely coded to obtain their sparse representations, and the sparse representation features replace the SIFT features extracted at the corresponding image positions;
(4) Feature mapping: after the feature extraction of (2) and the sparse representation of (3), each image has one sparse feature vector at each location; the feature vectors are clustered, the image feature vector expression is then computed with a spatial pyramid matching (SPM) model, and the local features are aggregated by max pooling to obtain the final image description vector;
(5) Linear support vector machine classification: after the final image description vectors of the training images are obtained in (4), a classifier is trained to perform SAR image target recognition.
Further, in (1), image sub-blocks with similar structure are grouped into a three-dimensional array with the block-matching 3-D (BM3D) algorithm, the image is filtered in the transform domain by collaborative filtering, and the denoised image is finally obtained by the inverse transform.
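A full BM3D implementation (patch grouping, 3-D transform, collaborative filtering, inverse transform) is beyond a short sketch. As a hedged illustration of this preprocessing stage only, the following stand-in applies a plain 3 × 3 mean filter to attenuate speckle; the function name and interface are ours, not the patent's, and a real system would use an actual BM3D implementation.

```python
def mean_filter(img, k=3):
    """Tiny k x k mean filter used here only as a stand-in for BM3D.

    BM3D instead groups similar image sub-blocks into a 3-D array,
    filters them jointly in a transform domain (collaborative
    filtering), and inverts the transform.
    """
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Average the valid neighbours inside the k x k window.
            vals = [img[ii][jj]
                    for ii in range(max(0, i - r), min(h, i + r + 1))
                    for jj in range(max(0, j - r), min(w, j + r + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

# A lone speckle spike of 9.0 in a flat image is averaged over its
# 3 x 3 neighbourhood and strongly attenuated.
noisy = [[0.0] * 5 for _ in range(5)]
noisy[2][2] = 9.0
print(mean_filter(noisy)[2][2])  # 1.0
```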
In (2), dense sampling is first carried out with sliding windows: different window sizes and sliding steps are selected so that dense SIFT features at multiple scales are extracted within the same image. Gaussian blurring and orientation assignment are then performed, ensuring that a 128-dimensional SIFT descriptor is extracted in each sliding window. For an M × N image with scales S = {0, 1, 2}, the sliding window size and corresponding sampling step are:
winsize(s) = 16 × 2^s,
winstep(s) = 8 × 2^s.
After sliding the windows, the number of dense SIFT features extracted from one image is:
N_feat = Σ_{s ∈ S} (⌊(M − winsize(s)) / winstep(s)⌋ + 1) · (⌊(N − winsize(s)) / winstep(s)⌋ + 1).
The different window sizes can be regarded as a kind of spatial pyramid: expanding gradually from small local regions extracts the local information of the image at different scales and strengthens the descriptive power of the scale information.
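The window geometry above can be sketched as follows (function names are ours; for a 64 × 64 image and scales {0, 1, 2}, this reproduces the count of 59 features given later in the description):

```python
def window_size(s):
    # Window side length at scale s: winsize(s) = 16 * 2^s
    return 16 * 2 ** s

def window_step(s):
    # Sampling stride at scale s: winstep(s) = 8 * 2^s
    return 8 * 2 ** s

def num_dense_features(m, n, scales=(0, 1, 2)):
    # Number of dense SIFT descriptors extracted from an M x N image:
    # at each scale the window tiles the image with the given stride.
    total = 0
    for s in scales:
        w, step = window_size(s), window_step(s)
        if m >= w and n >= w:
            total += ((m - w) // step + 1) * ((n - w) // step + 1)
    return total

# 64 x 64 image, scales {0, 1, 2}: 49 + 9 + 1 = 59 descriptors.
print(num_dense_features(64, 64))  # 59
```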
In (3), so that the sparse representation likewise has multi-scale characteristics, multi-scale features are included in local feature extraction, and the dictionary is then trained with the multi-scale features as input so that it learns the multi-scale characteristics.
The steps of the algorithm are as follows:
(1) Dense SIFT feature extraction: by the method of (2), multi-scale 128-dimensional SIFT features are extracted from the training images by dense sampling at equal intervals; for a 64 × 64 SAR image, dense SIFT features are extracted at the three scales [0, 1, 2], giving 59 features.
(2) Random down-sampling: dense sampling produces a large number of SIFT descriptors with considerable redundancy; they are randomly sampled at a ratio chosen according to the training set to obtain the multi-scale training feature subset.
(3) Multi-scale dictionary learning: with the multi-scale dense SIFT feature set as input, a global multi-scale dictionary D is learned with the RLS-DLA algorithm.
(4) Sparse representation: for the training set, the original dense SIFT features are converted into multi-scale sparse representations for later classifier training; for each test image, the dense SIFT features are likewise extracted and their sparse representations are solved with the multi-scale dictionary as input to the later classifier.
Learning a global dictionary from the multi-scale features requires less computation than training a separate dictionary for each scale and is therefore more efficient; compared with single-scale dictionary learning, the later sparse representations also contain multi-scale information.
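The dictionary learning itself (RLS-DLA) is more involved than a short sketch allows, but the sparse-coding step the patent names earlier, orthogonal matching pursuit (OMP), can be sketched directly. The following is a minimal OMP under the assumption of a dictionary with unit-norm columns; all names are ours, not the patent's.

```python
import numpy as np

def omp(D, x, k):
    """Sparse-code x over dictionary D with orthogonal matching pursuit.

    D: (d, n_atoms) array with (assumed) unit-norm columns; k: target
    sparsity.  Greedily selects the atom most correlated with the
    residual, then re-fits all selected atoms by least squares.
    """
    residual = x.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of x on the currently selected atoms.
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef

# With an orthonormal dictionary, OMP recovers a 2-sparse signal exactly.
D = np.eye(4)
x = np.array([0.0, 3.0, 0.0, 1.0])
print(omp(D, x, 2))
```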
In (4), within the image sub-blocks of the same scale of the spatial pyramid, the local features are aggregated by max pooling, and the aggregated features of all scale sub-blocks are then concatenated to obtain the final image description vector. Let C = [c_1, c_2, …, c_S] ∈ R^{M×S} be the set of feature codes generated by some sub-block, where M is the dimension of the sparse feature vectors after sparse coding and S is the number of sparse feature vectors in the sub-block. The M-dimensional feature vector f of each sub-block is obtained by max pooling:
f_i = max_j c_{i,j},  i = 1, …, M,  j = 1, …, S.
After pooling, the multiple sparse features of each sub-block are aggregated into one feature vector f, and the pooled feature vectors of all sub-blocks of the image are finally concatenated into one image description vector.
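The max-pooling and concatenation of sub-block codes can be sketched as follows (a minimal illustration with plain lists standing in for the code matrix C; function names are ours):

```python
def max_pool(codes):
    # codes: S sparse-code vectors (each of length M) from one
    # spatial-pyramid sub-block; returns f with f_i = max_j c_{i,j}.
    return [max(col) for col in zip(*codes)]

def image_descriptor(blocks):
    # Concatenate the pooled M-dimensional vectors of all sub-blocks
    # into the final image description vector.
    desc = []
    for codes in blocks:
        desc.extend(max_pool(codes))
    return desc

# Two sub-blocks, each holding two 2-dimensional sparse codes.
blocks = [[[1, 0], [0, 2]],   # pooled -> [1, 2]
          [[3, 1], [2, 5]]]   # pooled -> [3, 5]
print(image_descriptor(blocks))  # [1, 2, 3, 5]
```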
In (5), the local features have strong expressive power after multi-scale sparse representation, so only a simple linear support vector machine is needed afterwards. For the test set, after feature extraction, multi-scale sparse representation, and feature mapping, prediction is performed with the trained support vector machine to recognize the target. Compared with traditional SAR image target recognition methods, the present invention achieves higher effectiveness and robustness in recognition with lower algorithmic complexity.
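As a hedged stand-in for the linear SVM (a real SVM optimises a max-margin objective with a dedicated solver), the following minimal perceptron-style trainer shows the shape of the final classification stage on image description vectors; all names are ours and this is not the patent's classifier.

```python
def train_linear(X, y, epochs=50, lr=0.1):
    """Minimal perceptron-style trainer standing in for the linear SVM.

    X: list of image description vectors; y: labels in {-1, +1}.
    """
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            score = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * score <= 0:  # misclassified: nudge the hyperplane
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
    return w, b

def predict(w, b, x):
    # Sign of the decision function gives the predicted class.
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else -1

# Toy linearly separable "description vectors".
X = [[2.0, 0.0], [1.0, 1.0], [-2.0, 0.0], [-1.0, -1.0]]
y = [1, 1, -1, -1]
w, b = train_linear(X, y)
print([predict(w, b, xi) for xi in X])  # [1, 1, -1, -1]
```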
The concept, concrete scheme, and resulting technical effects of the present invention are further described below with reference to the accompanying drawings, so that the purpose, features, and effects of the present invention can be fully understood.
Brief description of the drawings
Fig. 1 is the framework diagram of the SAR image target recognition algorithm of the present invention;
Fig. 2 is the extraction process of sparse features through SPM and pooling.
Specific embodiment
The embodiments of the present invention will now be explained with reference to the accompanying drawings.
As shown in Fig. 1, the principle of the SAR image target recognition algorithm of the present invention is:
(1) First, image sub-blocks with similar structure are grouped into a three-dimensional array with the block-matching 3-D (BM3D) algorithm; the image is filtered in the transform domain by collaborative filtering, and the denoised image is finally obtained by the inverse transform;
(2) Different window sizes and sliding steps are selected, and dense SIFT features at multiple scales are extracted within the same image. Gaussian blurring and orientation assignment are then performed, ensuring that a 128-dimensional SIFT descriptor is extracted in each sliding window. For an M × N image with scales S = {0, 1, 2} (59 features for a 64 × 64 image), the sliding window size and corresponding sampling step are:
winsize(s) = 16 × 2^s,
winstep(s) = 8 × 2^s.
After sliding the windows, the number of dense SIFT features extracted from one image is:
N_feat = Σ_{s ∈ S} (⌊(M − winsize(s)) / winstep(s)⌋ + 1) · (⌊(N − winsize(s)) / winstep(s)⌋ + 1);
(3) With the multi-scale dense SIFT feature set as input, a global multi-scale dictionary D is learned with the RLS-DLA algorithm. All dense SIFT features extracted in (2) (training and test images) are then sparsely coded to obtain their sparse representations, and the sparse representation features replace the SIFT feature points extracted at the corresponding image positions;
(4) The feature vectors are clustered, the image feature vector expression is then computed with the spatial pyramid matching (SPM) model, the local features are aggregated by max pooling, and the aggregated features of the scale sub-blocks are concatenated to obtain the final image description vector. Let C = [c_1, c_2, …, c_S] ∈ R^{M×S} be the set of feature codes generated by some sub-block, where M is the dimension of the sparse feature vectors after sparse coding and S is the number of sparse feature vectors in the sub-block. The M-dimensional feature vector f of each sub-block is obtained by max pooling:
f_i = max_j c_{i,j},  i = 1, …, M,  j = 1, …, S.
After pooling, the multiple sparse features of each sub-block are aggregated into one feature vector f, and the pooled feature vectors of all sub-blocks of the image are finally concatenated into one image description vector;
(5) After the final image description vectors of the training images are obtained, a classifier is trained to perform SAR image target recognition.
The preferred embodiments of the present invention have been described in detail above. It should be appreciated that those of ordinary skill in the art can, without creative labour, make many modifications and variations according to the concept of the present invention. Therefore, any technical scheme that a person skilled in the art can obtain on the basis of the prior art through logical analysis, reasoning, or limited experiments under the concept of the present invention shall fall within the scope of protection determined by the claims.
Claims (6)
1. A SAR image target recognition method based on multi-scale sparse representation, characterized by comprising:
Step (1), SAR image denoising: dense SIFT feature extraction is still disturbed by coherent speckle noise, so the original image is first preprocessed with a denoising algorithm;
Step (2), extracting dense scale-invariant feature transform (SIFT) features: for each training image preprocessed in (1), dense feature points are extracted and a training feature subset is formed by random sampling; for test images, dense SIFT features are likewise extracted and all features are retained for sparse coding;
Step (3), sparse representation of features: a global multi-scale dictionary is learned from the training feature subset obtained in (2) with a multi-scale dictionary learning method; then all dense SIFT features extracted in (2) (training and test images) are sparsely coded to obtain their sparse representations, and the sparse representation features replace the SIFT features extracted at the corresponding image positions;
Step (4), feature mapping: after the feature extraction of (2) and the sparse representation of (3), each image has one sparse feature vector at each location; the feature vectors are clustered, the image feature vector expression is then computed with a spatial pyramid matching (SPM) model, and the local features are aggregated by max pooling to obtain the final image description vector;
Step (5), linear support vector machine classification: after the final image description vectors of the training images are obtained in (4), a classifier is trained to perform SAR image target recognition.
2. The method of claim 1, wherein image sub-blocks with similar structure are grouped into a three-dimensional array with the block-matching 3-D (BM3D) algorithm, the image is filtered in the transform domain by collaborative filtering, and the denoised image is finally obtained by the inverse transform.
3. The method of claim 1, wherein dense sampling is first carried out with sliding windows: different window sizes and sliding steps are selected so that dense SIFT features at multiple scales are extracted within the same image; Gaussian blurring and orientation assignment are then performed, ensuring that a 128-dimensional SIFT descriptor is extracted in each sliding window; for an M × N image with scales S = {0, 1, 2}, the sliding window size and corresponding sampling step are:
winsize(s) = 16 × 2^s,
winstep(s) = 8 × 2^s,
and after sliding the windows, the number of dense SIFT features extracted from one image is:
N_feat = Σ_{s ∈ S} (⌊(M − winsize(s)) / winstep(s)⌋ + 1) · (⌊(N − winsize(s)) / winstep(s)⌋ + 1).
4. The method of claim 1, wherein, so that the sparse representation likewise has multi-scale characteristics, multi-scale features are included in local feature extraction and the dictionary is then trained with the multi-scale features as input so that it learns the multi-scale characteristics, the steps of the algorithm being as follows:
(1) dense SIFT feature extraction: by the method of step (2), multi-scale 128-dimensional SIFT features are extracted from the training images by dense sampling at equal intervals; for a 64 × 64 SAR image, dense SIFT features are extracted at the three scales [0, 1, 2], giving 59 features;
(2) random down-sampling: dense sampling produces a large number of SIFT descriptors with considerable redundancy; they are randomly sampled at a ratio chosen according to the training set to obtain the multi-scale training feature subset;
(3) multi-scale dictionary learning: with the multi-scale dense SIFT feature set as input, a global multi-scale dictionary D is learned with the RLS-DLA algorithm;
(4) sparse representation: for the training set, the original dense SIFT features are converted into multi-scale sparse representations for later classifier training; for each test image, the dense SIFT features are likewise extracted and their sparse representations are solved with the multi-scale dictionary as input to the later classifier;
wherein learning a global dictionary from the multi-scale features requires less computation than training a separate dictionary for each scale and is therefore more efficient, and, compared with single-scale dictionary learning, the later sparse representations contain multi-scale information.
5. The method of claim 1, wherein, within the image sub-blocks of the same scale of the spatial pyramid, the local features are aggregated by max pooling, and the aggregated features of all scale sub-blocks are then concatenated to obtain the final image description vector; letting C = [c_1, c_2, …, c_S] ∈ R^{M×S} be the set of feature codes generated by some sub-block, where M is the dimension of the sparse feature vectors after sparse coding and S is the number of sparse feature vectors in the sub-block, the M-dimensional feature vector f of each sub-block is obtained by max pooling:
f_i = max_j c_{i,j},  i = 1, …, M,  j = 1, …, S;
after pooling, the multiple sparse features of each sub-block are aggregated into one feature vector f, and the pooled feature vectors of all sub-blocks of the image are finally concatenated into one image description vector.
6. The method of claim 1, wherein, since the local features have strong expressive power after multi-scale sparse representation, only a simple linear support vector machine is needed afterwards; for the test set, after feature extraction, multi-scale sparse representation, and feature mapping, prediction is performed with the trained support vector machine to recognize the target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811303259.8A CN109492570A (en) | 2018-11-02 | 2018-11-02 | SAR image target recognition method based on multi-scale sparse representation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109492570A true CN109492570A (en) | 2019-03-19 |
Family
ID=65693762
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811303259.8A Pending CN109492570A (en) | SAR image target recognition method based on multi-scale sparse representation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109492570A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110263687A (en) * | 2019-06-06 | 2019-09-20 | 深圳职业技术学院 | A kind of multi-angle of view pedestrian target detection method based on rarefaction representation |
CN110309793A (en) * | 2019-07-04 | 2019-10-08 | 电子科技大学 | A kind of SAR target identification method based on video bits layering interpretation |
CN110400300A (en) * | 2019-07-24 | 2019-11-01 | 哈尔滨工业大学(威海) | Lesion vessels accurate detecting method based on Block- matching adaptive weighting rarefaction representation |
CN111950443A (en) * | 2020-08-10 | 2020-11-17 | 北京师范大学珠海分校 | Dense crowd counting method of multi-scale convolutional neural network |
CN113627288A (en) * | 2021-07-27 | 2021-11-09 | 武汉大学 | Intelligent information label obtaining method for massive images |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103886333A (en) * | 2014-04-04 | 2014-06-25 | 武汉大学 | Method for active spectral clustering of remote sensing images |
CN104346630A (en) * | 2014-10-27 | 2015-02-11 | 华南理工大学 | Cloud flower identifying method based on heterogeneous feature fusion |
CN104778476A (en) * | 2015-04-10 | 2015-07-15 | 电子科技大学 | Image classification method |
US20160078600A1 (en) * | 2013-04-25 | 2016-03-17 | Thomson Licensing | Method and device for performing super-resolution on an input image |
CN106778768A (en) * | 2016-11-22 | 2017-05-31 | 广西师范大学 | Image scene classification method based on multi-feature fusion |
Non-Patent Citations (1)
Title |
---|
阮怀玉 (Ruan Huaiyu): "Research on SAR Image Target Recognition Based on Sparse Representation and Deep Learning" (基于稀疏表示和深度学习的SAR图像目标识别研究), China Masters' Theses Full-text Database, Information Science and Technology Series *
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190319 |