CN109165682A - Remote sensing image scene classification method fusing deep features and saliency features - Google Patents

Remote sensing image scene classification method fusing deep features and saliency features

Info

Publication number
CN109165682A
CN109165682A
Authority
CN
China
Prior art keywords
saliency map
feature
data set
deep feature
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810911856.2A
Other languages
Chinese (zh)
Other versions
CN109165682B (en)
Inventor
薛伟
戴向阳
张斌
罗严
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Geosciences
Original Assignee
China University of Geosciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Geosciences
Priority to CN201810911856.2A
Publication of CN109165682A
Application granted
Publication of CN109165682B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour

Abstract

The invention discloses a remote sensing image scene classification method that fuses deep features and saliency features. First, a convolutional neural network pre-trained on the ImageNet data set is fine-tuned with the original image data set, and the fine-tuned network is used to extract the deep features of the original images. Then, the original images are processed by a saliency detection method to obtain saliency maps, and a network fine-tuned on the saliency map data set is used to extract the saliency features from those maps. Next, the two kinds of features are merged through a parallel feature fusion strategy, and the fused features are used to train a support vector machine. Finally, the trained support vector machine classifies the target data set images. The method captures the global information and the salient information of the original image simultaneously, effectively improves the discriminability of the feature vectors, and provides better feature representation ability and classification accuracy for remote sensing image scenes.

Description

Remote sensing image scene classification method fusing deep features and saliency features
Technical field
The present invention relates to the field of digital image processing, and more particularly to a remote sensing image scene classification method fusing deep features and saliency features.
Background art
Remote sensing image scene classification is an active and challenging task driven by many applications; its purpose is to identify the land cover category of a remote sensing image region. It is a fundamental task widely used in various practical remote sensing applications, such as land resource management and urban planning. Learning an effective image representation is at the core of the remote sensing image scene classification task. Because of the high intra-class diversity and high inter-class similarity of scene images, scene classification methods based on feature coding with low-level hand-crafted features, or on unsupervised feature learning, can only generate mid-level image features with limited representation ability, which fundamentally limits the performance of the scene classification task.
Recently, with the development of deep learning, and of convolutional neural networks (CNN) in particular, convolutional neural networks have shown remarkable performance in object recognition and detection. At present, convolutional neural networks are mostly applied to small-image classification tasks such as face recognition, handwritten digit recognition, and timber defect recognition, while research on remote sensing image classification remains scarce. Moreover, convolutional neural networks need a large amount of training data and training time, which constrains their practical application to some extent. How to apply convolutional neural networks effectively to algorithmic models for remote sensing image classification and thereby improve classification accuracy is therefore a main problem to be studied and solved.
Summary of the invention
In view of this, embodiments of the present invention provide a remote sensing image scene classification method fusing deep features and saliency features that can capture the global information and the salient information of the original image simultaneously, effectively improve the discriminability of the feature vectors, and achieve better feature representation ability and classification accuracy for remote sensing image scenes.
An embodiment of the present invention provides a remote sensing image scene classification method fusing deep features and saliency features, comprising the following steps:
(1) Select an original image data set for scene classification and divide it into an original image training set and an original image test set;
(2) Process the original image data set with a saliency detection method to obtain a saliency map data set, and divide it into a saliency map training set and a saliency map test set;
(3) Fine-tune a convolutional neural network pre-trained on the ImageNet data set using the original image training set to obtain the original image network model VGG-1; fine-tune a convolutional neural network pre-trained on the ImageNet data set using the saliency map training set to obtain the saliency map network model VGG-2;
(4) Process the original image data set with VGG-1 to obtain the deep features of the original images; process the saliency map data set with VGG-2 to obtain the saliency features of the original images;
(5) Merge the deep features and the saliency features through a parallel feature fusion strategy to obtain the final fused features, which serve as the final feature representation of the original images;
(6) Classify the original images according to the final fused features using a support vector machine classifier to obtain the final classification results.
Further, in step (2), the steps of processing the original image data set by the saliency detection method are as follows:
(2-1) Convert the original image to a grayscale image and apply a Canny edge detector to obtain the edge saliency map $I_t$ after filtering;
(2-2) Convert the color space of the original image to the uniform LAB color space, apply Gaussian low-pass filtering to the image, and finally compute the Euclidean distance between the original image and the filtered image as the color saliency map $I_c$:

$$I_c = \lVert I_u - I_w \rVert$$

where $I_u$ is the arithmetic mean of the pixel values and $I_w$ is the image obtained by Gaussian low-pass filtering the original image;
(2-3) Linearly superimpose the edge saliency map $I_t$ and the color saliency map $I_c$ to obtain the final saliency map $I_s$.
Further, in step (3), the specific steps of fine-tuning the convolutional neural network with the original image training set and with the saliency map training set respectively are as follows:
(3-1) Set the number of fine-tuning iterations to N;
(3-2) Forward-propagation training: compute the actual classification performance of the network under the current coefficients. The iterative process is:

$$x_{i+1} = f_i(u_i), \qquad u_i = W_i x_i + b_i$$

where $x_{i+1}$ is the input of layer $i+1$; $x_i$ is the input of layer $i$; $W_i$ is the weight vector of layer $i$, acting on its input data; $b_i$ is the additive bias vector of layer $i$; $f_i(\cdot)$ denotes the activation function of layer $i$; and $u_i$ is the result of the convolution operation on the input;
(3-3) Back-propagation training: compare the network output with the true labels and iteratively update the coefficients so that the output approaches the desired value. The iterative process is the gradient-descent update:

$$W_i \leftarrow W_i - \alpha \frac{\partial L(W, b)}{\partial W_i}, \qquad b_i \leftarrow b_i - \alpha \frac{\partial L(W, b)}{\partial b_i}$$

where the learning rate $\alpha$ is the control factor of the back-propagation strength, and $L(W, b)$ is the loss function;
(3-4) Repeat steps (3-2) and (3-3) N times, according to the number of iterations N set in step (3-1).
Further, in step (4), the features extracted from the last fully connected layer of VGG-1 serve as the deep features $F_1$ of the original image, and the features extracted from the last fully connected layer of VGG-2 serve as the saliency features $F_2$ of the original image; the feature vector dimensions of the deep features $F_1$ and the saliency features $F_2$ are identical.
Further, in step (5), the deep features and the saliency features are merged by the parallel feature fusion strategy, and the final fused feature of the original image $I$ is expressed as:

$$F_f(I) = F_1(I) + i\,F_2(I)$$

where $i$ is the imaginary unit, and the dimension of the fused feature vector remains unchanged.
Further, the specific steps of step (6) are as follows:
(6-1) Divide the fused features into a training feature set and a test feature set according to the division of the original image training set and the original image test set;
(6-2) Train the support vector machine classifier with the training feature set;
(6-3) Classify the test feature set with the trained support vector machine classifier;
(6-4) Compute the final classification accuracy.
Compared with the prior art, the present invention has the following advantages:
1. Addressing the difficulty of saliency detection in complex scenes, the present invention takes the saliency of targets in edge features into account and extracts the saliency map of a scene image by fusing color features and edge features.
2. The present invention merges the deep features and the saliency features extracted by deep convolutional neural networks through a parallel fusion strategy. The dimension of the fused features is unchanged; they capture the global information and the salient information of the original image simultaneously, effectively improve the discriminability of the feature vectors, and provide better feature representation ability and classification accuracy for remote sensing image scenes.
Brief description of the drawings
Fig. 1 is the flow chart of the remote sensing image scene classification method fusing deep features and saliency features of the present invention.
Fig. 2 shows example images from the original image data set in one embodiment of the invention.
Fig. 3 is a randomly selected original image from the original image data set in one embodiment of the invention.
Fig. 4 is the result of performing edge detection on Fig. 3.
Fig. 5 is the result of color feature extraction on Fig. 3.
Fig. 6 is the final saliency map corresponding to Fig. 3.
Fig. 7 is the confusion matrix of classifying the original image data set in one embodiment of the invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are further described below with reference to the accompanying drawings.
Referring to Fig. 1, an embodiment of the present invention provides a remote sensing image scene classification method fusing deep features and saliency features, comprising the following steps:
(1) Select an original image data set for scene classification and divide it into an original image training set and an original image test set;
The selected original scene data set is the UC Merced data set, which contains 21 land use scene classes: Agricultural, Airplane, Baseball Diamond, Beach, Buildings, Chaparral, Dense Residential, Forest, Freeway, Golf Course, Harbor, Intersection, Medium Density Residential, Mobile Home Park, Overpass, Parking Lot, River, Runway, Sparse Residential, Storage Tanks and Tennis Courts; Fig. 2 shows an example image of each class. Each class contains 100 images of 256 × 256 pixels with a spatial resolution of 0.3 m. The data set is divided by randomly selecting 30 images from each class as the original image training set; the remaining 70 images of each class form the original image test set.
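A minimal Python sketch of this per-class 30/70 split; the folder layout (one directory per class) and the random seed are illustrative assumptions, not specified by the patent:

```python
import os
import random

def split_ucmerced(root, n_train=30, seed=42):
    """Randomly pick n_train images per class for training; the rest form the test set."""
    random.seed(seed)
    train, test = [], []
    for cls in sorted(os.listdir(root)):            # one folder per scene class (assumed)
        images = sorted(os.listdir(os.path.join(root, cls)))
        random.shuffle(images)
        train += [(os.path.join(root, cls, f), cls) for f in images[:n_train]]
        test += [(os.path.join(root, cls, f), cls) for f in images[n_train:]]
    return train, test  # 21 classes x 30 = 630 training images, 21 x 70 = 1470 test images
```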
(2) Process the original image data set with the saliency detection method to obtain the saliency map data set, and divide it into a saliency map training set and a saliency map test set;
Taking an image of the Airplane class in the original image data set as an example (the original image is shown in Fig. 3), the steps of processing the original image data set by the saliency detection method are as follows:
(2-1) Convert the original image to a grayscale image and apply a Canny edge detector to obtain the edge saliency map $I_t$ after filtering, as shown in Fig. 4;
(2-2) Convert the color space of the original image to the uniform LAB color space, apply Gaussian low-pass filtering to the image, and finally compute the Euclidean distance between the original image and the filtered image as the color saliency map $I_c$, shown in Fig. 5:

$$I_c = \lVert I_u - I_w \rVert$$

where $I_u$ is the arithmetic mean of the pixel values and $I_w$ is the image obtained by Gaussian low-pass filtering the original image;
(2-3) Linearly superimpose the edge saliency map $I_t$ and the color saliency map $I_c$ to obtain the final saliency map $I_s$, as shown in Fig. 6.
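A minimal OpenCV sketch of steps (2-1) to (2-3); the Canny thresholds, the Gaussian kernel size, and the equal-weight linear superposition are assumptions, since the patent does not fix these parameters:

```python
import cv2
import numpy as np

def saliency_map(bgr, w_edge=0.5, w_color=0.5):
    # (2-1) edge saliency: Canny edges of the grayscale image (thresholds assumed)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    i_t = cv2.Canny(gray, 100, 200).astype(np.float32) / 255.0

    # (2-2) color saliency: Euclidean distance in LAB space between the
    # arithmetic-mean pixel value and the Gaussian low-pass filtered image
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    i_u = lab.mean(axis=(0, 1))                 # mean LAB vector I_u
    i_w = cv2.GaussianBlur(lab, (5, 5), 0)      # low-pass filtered image I_w
    i_c = np.linalg.norm(i_w - i_u, axis=2)
    i_c = (i_c - i_c.min()) / (i_c.max() - i_c.min() + 1e-8)  # normalize to [0, 1]

    # (2-3) linear superposition of the edge and color saliency maps
    return w_edge * i_t + w_color * i_c
```

Applying saliency_map to every image in the original data set yields the saliency map data set of step (2).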
(3) Fine-tune the convolutional neural network VGG-Net-16 pre-trained on the ImageNet data set using the original image training set to obtain the original image network model VGG-1; fine-tune VGG-Net-16 pre-trained on the ImageNet data set using the saliency map training set to obtain the saliency map network model VGG-2;
Preferably, VGG-Net-16 is fine-tuned with the original image training set and with the saliency map training set respectively, each for 500 iterations, yielding the original image network model VGG-1 and the saliency map network model VGG-2.
The specific steps of fine-tuning the convolutional neural network are as follows:
(3-1) Set the number of fine-tuning iterations to N;
(3-2) Forward-propagation training: compute the actual classification performance of the network under the current coefficients. The iterative process is:

$$x_{i+1} = f_i(u_i), \qquad u_i = W_i x_i + b_i$$

where $x_{i+1}$ is the input of layer $i+1$; $x_i$ is the input of layer $i$; $W_i$ is the weight vector of layer $i$, acting on its input data; $b_i$ is the additive bias vector of layer $i$; $f_i(\cdot)$ denotes the activation function of layer $i$; and $u_i$ is the result of the convolution operation on the input;
(3-3) Back-propagation training: compare the network output with the true labels and iteratively update the coefficients so that the output approaches the desired value. The iterative process is the gradient-descent update:

$$W_i \leftarrow W_i - \alpha \frac{\partial L(W, b)}{\partial W_i}, \qquad b_i \leftarrow b_i - \alpha \frac{\partial L(W, b)}{\partial b_i}$$

where the learning rate $\alpha$ is the control factor of the back-propagation strength, and $L(W, b)$ is the loss function;
(3-4) Repeat steps (3-2) and (3-3) N times, according to the number of iterations N set in step (3-1).
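A minimal PyTorch sketch of the fine-tuning loop of steps (3-1) to (3-4), using torchvision's ImageNet-pretrained VGG-16 with its classification layer replaced by a 21-class head; the SGD optimizer, learning rate, and data loader are assumptions not fixed by the patent:

```python
import torch
import torch.nn as nn
from torchvision import models

def finetune_vgg16(train_loader, num_classes=21, n_iters=500, lr=1e-3, device="cuda"):
    # start from VGG-16 pre-trained on ImageNet; replace the final classification layer
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    model.classifier[6] = nn.Linear(4096, num_classes)
    model.to(device).train()

    criterion = nn.CrossEntropyLoss()                       # loss L(W, b)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)  # learning rate alpha

    it = 0
    while it < n_iters:                         # (3-1)/(3-4): N = 500 iterations
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            outputs = model(images)             # (3-2) forward propagation
            loss = criterion(outputs, labels)
            optimizer.zero_grad()
            loss.backward()                     # (3-3) back-propagation of the loss
            optimizer.step()                    # W <- W - alpha * dL/dW
            it += 1
            if it >= n_iters:
                break
    return model
```

Running this once on the original image training set gives VGG-1; running it again on the saliency map training set gives VGG-2.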
(4) Process the original image data set with VGG-1 to obtain the deep features of the original images; process the saliency map data set with VGG-2 to obtain the saliency features of the original images;
The features extracted from the last fully connected layer of VGG-1 serve as the deep features $F_1$ of the original image, and the features extracted from the last fully connected layer of VGG-2 serve as the saliency features $F_2$ of the original image. The feature vector dimensions of $F_1$ and $F_2$ are identical, both 4096.
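One way to read out these 4096-dimensional activations is a forward hook on the last 4096-unit fully connected layer (classifier[3] in torchvision's VGG-16 layout); the hook mechanism is an implementation assumption, the patent only specifies which layer's output is used:

```python
import torch

@torch.no_grad()
def extract_fc_features(model, loader, device="cuda"):
    """Collect the 4096-d activations of the last 4096-unit fully connected layer."""
    feats, acts = [], {}
    handle = model.classifier[3].register_forward_hook(
        lambda m, inp, out: acts.__setitem__("fc", out))
    model.to(device).eval()
    for images, _ in loader:
        model(images.to(device))        # forward pass; the hook captures the activations
        feats.append(acts["fc"].cpu())
    handle.remove()
    return torch.cat(feats)             # shape: (num_images, 4096)
```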
(5) Merge the deep features and the saliency features through the parallel feature fusion strategy to obtain the final fused features, which serve as the final feature representation of the original images;
The final fused feature of the original image $I$ is expressed as:

$$F_f(I) = F_1(I) + i\,F_2(I)$$

where $i$ is the imaginary unit; the dimension of the fused feature vector remains unchanged, still 4096.
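Because the fusion is a single complex-valued combination, it reduces to one line of NumPy; a sketch assuming f1 and f2 are the 4096-dimensional feature matrices from VGG-1 and VGG-2:

```python
import numpy as np

def parallel_fuse(f1, f2):
    """F_f = F_1 + i * F_2: a complex feature vector whose dimension stays 4096."""
    fused = f1.astype(np.complex64) + 1j * f2.astype(np.complex64)
    assert fused.shape == f1.shape  # parallel fusion leaves the dimension unchanged
    return fused
```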
(6) Classify the original images according to the final fused features using a support vector machine classifier to obtain the final classification results.
The specific steps are as follows:
(6-1) Divide the fused features into a training feature set and a test feature set according to the division of the original image training set and test set;
(6-2) Train the support vector machine classifier with the training feature set;
(6-3) Classify the test feature set with the trained support vector machine classifier;
(6-4) Compute the final classification accuracy.
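A scikit-learn sketch of steps (6-1) to (6-4). Since the fused vectors are complex-valued, the SVM needs a real-valued kernel; taking the real part of the Hermitian inner product as a precomputed kernel is one plausible choice, an assumption on our part since the patent does not specify the kernel:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def hermitian_kernel(a, b):
    # Re(<z1, z2>) for complex features; equals F1.F1' + F2.F2', so it is positive semi-definite
    return np.real(a @ b.conj().T)

def classify(train_feats, train_labels, test_feats, test_labels):
    # (6-2) train the SVM on the training feature set
    svm = SVC(kernel="precomputed")
    svm.fit(hermitian_kernel(train_feats, train_feats), train_labels)
    # (6-3) classify the test feature set
    pred = svm.predict(hermitian_kernel(test_feats, train_feats))
    # (6-4) compute the final classification accuracy
    return accuracy_score(test_labels, pred)
```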
The confusion matrix obtained in the experiment is shown in Fig. 7. Because every class contains the same number of images in the original image test set, the overall accuracy equals the average per-class accuracy, 96.10%.
As can be seen from Fig. 7, the accuracy of this method reaches 100% in 6 categories and exceeds 95% in 13 categories, and the resulting overall accuracy of 96.10% is better than many existing state-of-the-art methods, which demonstrates the effectiveness of this method for remote sensing image scene classification. The main confusion occurs between Buildings, Dense Residential and Medium Density Residential, because they all contain the same kind of target objects, and the main difference between Dense Residential and Medium Density Residential is only the density of the buildings.
The features of the embodiments described above can be combined with each other provided that they do not conflict.
The above are merely preferred embodiments of the present invention and are not intended to limit the invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (6)

1. A remote sensing image scene classification method fusing deep features and saliency features, characterized by comprising the following steps:
(1) selecting an original image data set for scene classification and dividing it into an original image training set and an original image test set;
(2) processing the original image data set with a saliency detection method to obtain a saliency map data set, and dividing it into a saliency map training set and a saliency map test set;
(3) fine-tuning a convolutional neural network pre-trained on the ImageNet data set using the original image training set to obtain the original image network model VGG-1; fine-tuning a convolutional neural network pre-trained on the ImageNet data set using the saliency map training set to obtain the saliency map network model VGG-2;
(4) processing the original image data set with VGG-1 to obtain the deep features of the original images; processing the saliency map data set with VGG-2 to obtain the saliency features of the original images;
(5) merging the deep features and the saliency features through a parallel feature fusion strategy to obtain the final fused features, which serve as the final feature representation of the original images;
(6) classifying the original images according to the final fused features using a support vector machine classifier to obtain the final classification results.
2. The remote sensing image scene classification method fusing deep features and saliency features according to claim 1, characterized in that, in step (2), the steps of processing the original image data set by the saliency detection method are as follows:
(2-1) converting the original image to a grayscale image and applying a Canny edge detector to obtain the edge saliency map $I_t$ after filtering;
(2-2) converting the color space of the original image to the uniform LAB color space, applying Gaussian low-pass filtering to the image, and finally computing the Euclidean distance between the original image and the filtered image as the color saliency map $I_c$:

$$I_c = \lVert I_u - I_w \rVert$$

where $I_u$ is the arithmetic mean of the pixel values and $I_w$ is the image obtained by Gaussian low-pass filtering the original image;
(2-3) linearly superimposing the edge saliency map $I_t$ and the color saliency map $I_c$ to obtain the final saliency map $I_s$.
3. The remote sensing image scene classification method fusing deep features and saliency features according to claim 1, characterized in that, in step (3), the specific steps of fine-tuning the convolutional neural network with the original image training set and with the saliency map training set respectively are as follows:
(3-1) setting the number of fine-tuning iterations to N;
(3-2) forward-propagation training: computing the actual classification performance of the network under the current coefficients, the iterative process being:

$$x_{i+1} = f_i(u_i), \qquad u_i = W_i x_i + b_i$$

where $x_{i+1}$ is the input of layer $i+1$; $x_i$ is the input of layer $i$; $W_i$ is the weight vector of layer $i$, acting on its input data; $b_i$ is the additive bias vector of layer $i$; $f_i(\cdot)$ denotes the activation function of layer $i$; and $u_i$ is the result of the convolution operation on the input;
(3-3) back-propagation training: comparing the network output with the true labels and iteratively updating the coefficients so that the output approaches the desired value, the iterative process being the gradient-descent update:

$$W_i \leftarrow W_i - \alpha \frac{\partial L(W, b)}{\partial W_i}, \qquad b_i \leftarrow b_i - \alpha \frac{\partial L(W, b)}{\partial b_i}$$

where the learning rate $\alpha$ is the control factor of the back-propagation strength, and $L(W, b)$ is the loss function;
(3-4) repeating steps (3-2) and (3-3) N times according to the number of iterations N set in step (3-1).
4. The remote sensing image scene classification method fusing deep features and saliency features according to claim 1, characterized in that, in step (4), the features extracted from the last fully connected layer of VGG-1 serve as the deep features $F_1$ of the original image, the features extracted from the last fully connected layer of VGG-2 serve as the saliency features $F_2$ of the original image, and the feature vector dimensions of the deep features $F_1$ and the saliency features $F_2$ are identical.
5. The remote sensing image scene classification method fusing deep features and saliency features according to claim 4, characterized in that, in step (5), the deep features and the saliency features are merged by the parallel feature fusion strategy, and the final fused feature of the original image $I$ is expressed as:

$$F_f(I) = F_1(I) + i\,F_2(I)$$

where $i$ is the imaginary unit, and the dimension of the fused feature vector remains unchanged.
6. The remote sensing image scene classification method fusing deep features and saliency features according to claim 1, characterized in that the specific steps of step (6) are as follows:
(6-1) dividing the fused features into a training feature set and a test feature set according to the division of the original image training set and the original image test set;
(6-2) training the support vector machine classifier with the training feature set;
(6-3) classifying the test feature set with the trained support vector machine classifier;
(6-4) computing the final classification accuracy.
CN201810911856.2A 2018-08-10 2018-08-10 Remote sensing image scene classification method integrating depth features and saliency features Active CN109165682B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810911856.2A CN109165682B (en) 2018-08-10 2018-08-10 Remote sensing image scene classification method integrating depth features and saliency features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810911856.2A CN109165682B (en) 2018-08-10 2018-08-10 Remote sensing image scene classification method integrating depth features and saliency features

Publications (2)

Publication Number Publication Date
CN109165682A true CN109165682A (en) 2019-01-08
CN109165682B CN109165682B (en) 2020-06-16

Family

ID=64895565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810911856.2A Active CN109165682B (en) 2018-08-10 2018-08-10 Remote sensing image scene classification method integrating depth features and saliency features

Country Status (1)

Country Link
CN (1) CN109165682B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858565A (en) * 2019-02-28 2019-06-07 南京邮电大学 The home interior scene recognition method of amalgamation of global characteristics and local Item Information based on deep learning
CN110020617A (en) * 2019-03-27 2019-07-16 五邑大学 A kind of personal identification method based on biological characteristic, device and storage medium
CN110084741A (en) * 2019-04-26 2019-08-02 衡阳师范学院 Image wind network moving method based on conspicuousness detection and depth convolutional neural networks
CN110188725A (en) * 2019-06-05 2019-08-30 中国科学院长春光学精密机械与物理研究所 The scene Recognition system and model generating method of high-resolution remote sensing image
CN110287800A (en) * 2019-05-29 2019-09-27 河海大学 A kind of remote sensing images scene classification method based on SGSE-GAN
CN110348319A (en) * 2019-06-18 2019-10-18 武汉大学 A kind of face method for anti-counterfeit merged based on face depth information and edge image
CN110555461A (en) * 2019-07-31 2019-12-10 中国地质大学(武汉) scene classification method and system based on multi-structure convolutional neural network feature fusion
CN111126493A (en) * 2019-12-25 2020-05-08 东软睿驰汽车技术(沈阳)有限公司 Deep learning model training method and device, electronic equipment and storage medium
CN111222548A (en) * 2019-12-30 2020-06-02 Oppo广东移动通信有限公司 Similar image detection method, device, equipment and storage medium
CN111897985A (en) * 2020-06-23 2020-11-06 西安交通大学 Image multi-label classification method, system, equipment and readable storage medium
CN112287881A (en) * 2020-11-19 2021-01-29 国网湖南省电力有限公司 Satellite remote sensing image smoke scene detection method and system and computer storage medium
CN112507805A (en) * 2020-11-18 2021-03-16 深圳市银星智能科技股份有限公司 Scene recognition method and device
CN112989927A (en) * 2021-02-03 2021-06-18 杭州电子科技大学 Scene graph generation method based on self-supervision pre-training
WO2023088176A1 (en) * 2021-11-18 2023-05-25 International Business Machines Corporation Data augmentation for machine learning

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104700412A (en) * 2015-03-17 2015-06-10 苏州大学 Calculating method of visual salience drawing
CN105550699A (en) * 2015-12-08 2016-05-04 北京工业大学 CNN-based video identification and classification method through time-space significant information fusion
CN105631480A (en) * 2015-12-30 2016-06-01 哈尔滨工业大学 Hyperspectral data classification method based on multi-layer convolution network and data organization and folding
CN105678278A (en) * 2016-02-01 2016-06-15 国家电网公司 Scene recognition method based on single-hidden-layer neural network
CN106599907A (en) * 2016-11-29 2017-04-26 北京航空航天大学 Multi-feature fusion-based dynamic scene classification method and apparatus
CN106919920A (en) * 2017-03-06 2017-07-04 重庆邮电大学 Scene recognition method based on convolution feature and spatial vision bag of words
CN107622280A (en) * 2017-09-14 2018-01-23 河南科技大学 Modularization prescription formula image significance detection method based on scene classification
CN107808132A (en) * 2017-10-23 2018-03-16 重庆邮电大学 A kind of scene image classification method for merging topic model
CN107871124A (en) * 2017-11-15 2018-04-03 陕西师范大学 A kind of Remote Sensing Target detection method based on deep neural network
US20180130203A1 (en) * 2016-11-06 2018-05-10 International Business Machines Corporation Automated skin lesion segmentation using deep side layers
CN108334830A (en) * 2018-01-25 2018-07-27 南京邮电大学 A kind of scene recognition method based on target semanteme and appearance of depth Fusion Features

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104700412A (en) * 2015-03-17 2015-06-10 苏州大学 Calculating method of visual salience drawing
CN105550699A (en) * 2015-12-08 2016-05-04 北京工业大学 CNN-based video identification and classification method through time-space significant information fusion
CN105631480A (en) * 2015-12-30 2016-06-01 哈尔滨工业大学 Hyperspectral data classification method based on multi-layer convolution network and data organization and folding
CN105678278A (en) * 2016-02-01 2016-06-15 国家电网公司 Scene recognition method based on single-hidden-layer neural network
US20180130203A1 (en) * 2016-11-06 2018-05-10 International Business Machines Corporation Automated skin lesion segmentation using deep side layers
CN106599907A (en) * 2016-11-29 2017-04-26 北京航空航天大学 Multi-feature fusion-based dynamic scene classification method and apparatus
CN106919920A (en) * 2017-03-06 2017-07-04 重庆邮电大学 Scene recognition method based on convolution feature and spatial vision bag of words
CN107622280A (en) * 2017-09-14 2018-01-23 河南科技大学 Modularization prescription formula image significance detection method based on scene classification
CN107808132A (en) * 2017-10-23 2018-03-16 重庆邮电大学 A kind of scene image classification method for merging topic model
CN107871124A (en) * 2017-11-15 2018-04-03 陕西师范大学 A kind of Remote Sensing Target detection method based on deep neural network
CN108334830A (en) * 2018-01-25 2018-07-27 南京邮电大学 A kind of scene recognition method based on target semanteme and appearance of depth Fusion Features

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SOULEYMAN CHAIB et al.: "Deep feature extraction and combination for remote sensing image classification based on pre-trained CNN models", Ninth International Conference on Digital Image Processing *
曹强: "Research on image salient region extraction techniques", China Master's Theses Full-text Database, Information Science and Technology *
李志欣 et al.: "A scene classification method with multi-feature fusion", Journal of Chinese Computer Systems *
肖保良: "Multi-class scene classification based on fusion of Gist and PHOG features", Journal of North University of China (Natural Science Edition) *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858565B (en) * 2019-02-28 2022-08-12 南京邮电大学 Home indoor scene recognition method based on deep learning and integrating global features and local article information
CN109858565A (en) * 2019-02-28 2019-06-07 南京邮电大学 The home interior scene recognition method of amalgamation of global characteristics and local Item Information based on deep learning
CN110020617A (en) * 2019-03-27 2019-07-16 五邑大学 A kind of personal identification method based on biological characteristic, device and storage medium
CN110084741A (en) * 2019-04-26 2019-08-02 衡阳师范学院 Image wind network moving method based on conspicuousness detection and depth convolutional neural networks
CN110287800A (en) * 2019-05-29 2019-09-27 河海大学 A kind of remote sensing images scene classification method based on SGSE-GAN
CN110287800B (en) * 2019-05-29 2022-08-16 河海大学 Remote sensing image scene classification method based on SGSE-GAN
CN110188725A (en) * 2019-06-05 2019-08-30 中国科学院长春光学精密机械与物理研究所 The scene Recognition system and model generating method of high-resolution remote sensing image
CN110348319A (en) * 2019-06-18 2019-10-18 武汉大学 A kind of face method for anti-counterfeit merged based on face depth information and edge image
CN110555461A (en) * 2019-07-31 2019-12-10 中国地质大学(武汉) scene classification method and system based on multi-structure convolutional neural network feature fusion
CN111126493A (en) * 2019-12-25 2020-05-08 东软睿驰汽车技术(沈阳)有限公司 Deep learning model training method and device, electronic equipment and storage medium
CN111126493B (en) * 2019-12-25 2023-08-01 东软睿驰汽车技术(沈阳)有限公司 Training method and device for deep learning model, electronic equipment and storage medium
CN111222548A (en) * 2019-12-30 2020-06-02 Oppo广东移动通信有限公司 Similar image detection method, device, equipment and storage medium
CN111897985B (en) * 2020-06-23 2021-10-01 西安交通大学医学院第一附属医院 Image multi-label classification method, system, equipment and readable storage medium
CN111897985A (en) * 2020-06-23 2020-11-06 西安交通大学 Image multi-label classification method, system, equipment and readable storage medium
CN112507805A (en) * 2020-11-18 2021-03-16 深圳市银星智能科技股份有限公司 Scene recognition method and device
CN112287881A (en) * 2020-11-19 2021-01-29 国网湖南省电力有限公司 Satellite remote sensing image smoke scene detection method and system and computer storage medium
CN112989927A (en) * 2021-02-03 2021-06-18 杭州电子科技大学 Scene graph generation method based on self-supervision pre-training
CN112989927B (en) * 2021-02-03 2024-03-05 杭州电子科技大学 Scene graph generation method based on self-supervision pre-training
WO2023088176A1 (en) * 2021-11-18 2023-05-25 International Business Machines Corporation Data augmentation for machine learning

Also Published As

Publication number Publication date
CN109165682B (en) 2020-06-16

Similar Documents

Publication Publication Date Title
CN109165682A (en) A kind of remote sensing images scene classification method merging depth characteristic and significant characteristics
Zhang et al. Joint Deep Learning for land cover and land use classification
Tang et al. Improving image classification with location context
CN104700099B (en) The method and apparatus for recognizing traffic sign
EP3614308A1 (en) Joint deep learning for land cover and land use classification
CN108388927A (en) Small sample polarization SAR terrain classification method based on the twin network of depth convolution
CN108596108B (en) Aerial remote sensing image change detection method based on triple semantic relation learning
CN107506740A (en) A kind of Human bodys' response method based on Three dimensional convolution neutral net and transfer learning model
CN109657716A (en) A kind of vehicle appearance damnification recognition method based on deep learning
CN108960404B (en) Image-based crowd counting method and device
CN108491797A (en) A kind of vehicle image precise search method based on big data
CN103578093B (en) Method for registering images, device and augmented reality system
CN106548169A (en) Fuzzy literal Enhancement Method and device based on deep neural network
CN109948593A (en) Based on the MCNN people counting method for combining global density feature
CN112287983B (en) Remote sensing image target extraction system and method based on deep learning
CN110163213A (en) Remote sensing image segmentation method based on disparity map and multiple dimensioned depth network model
CN107767416A (en) The recognition methods of pedestrian's direction in a kind of low-resolution image
Liu et al. Coastline extraction method based on convolutional neural networks—A case study of Jiaozhou Bay in Qingdao, China
CN110555461A (en) scene classification method and system based on multi-structure convolutional neural network feature fusion
CN107944354A (en) A kind of vehicle checking method based on deep learning
CN111626357B (en) Image identification method based on neural network model
CN105654122A (en) Spatial pyramid object identification method based on kernel function matching
CN111797920A (en) Remote sensing extraction method and system for depth network impervious surface with gate control feature fusion
CN110334622A (en) Based on the pyramidal pedestrian retrieval method of self-adaptive features
Hua et al. LAHNet: A convolutional neural network fusing low-and high-level features for aerial scene classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant