CN107451616A - Multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning - Google Patents

Multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning Download PDF

Info

Publication number
CN107451616A
CN107451616A
Authority
CN
China
Prior art keywords
training
data
layer
sample
test
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710648900.0A
Other languages
Chinese (zh)
Inventor
焦李成
屈嵘
程林
唐旭
张丹
陈璞花
马文萍
侯彪
杨淑媛
尚荣华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201710648900.0A
Publication of CN107451616A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147: Distances to closest patterns, e.g. nearest neighbour classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Abstract

The invention discloses a multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning. A training data set and kNN data are extracted according to the ground truth; the training data set is divided into two parts that are trained separately; the multispectral image to be classified is input and two classification result maps are obtained from the two CNN models; two kNN nearest-neighbor classifiers are constructed from the training samples; test data are extracted according to the two classification result maps and classified with the kNN nearest-neighbor algorithm; the classification result maps are updated; the co-training samples and the kNN training samples are updated; the two co-trained CNN networks are retrained, the labeled points of the test data set are classified with the trained models, the classes of part of the pixels in the test data set are obtained, and the results are compared with the true labels. The invention introduces the k-nearest-neighbor algorithm and sample similarity to prevent co-training from drifting, improves classification accuracy when training samples are insufficient, and can be used for target recognition.

Description

Multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning
Technical field
The invention belongs to the technical field of remote sensing image processing, and in particular relates to a multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning, which performs pixel-level classification of multispectral images that have only a small number of labeled samples and can also be used for target recognition.
Background technology
A multispectral remote sensing image is obtained by observing the same object (region or target) on several narrow spectral bands at the same time; it reflects the reflection, transmission or radiation characteristics of the observed object on each narrow spectral band and therefore contains more information about the observed object. With the continuous improvement in the quality and quantity of satellite sensors, the theory of multispectral remote sensing image classification has also gradually matured; compared with traditional remote sensing images, multispectral remote sensing images have richer band information and richer spatial information. Pixel-based multispectral remote sensing image classification assigns each pixel to a class by some algorithm or decision rule according to its individual characteristics on the different multispectral bands. Multispectral remote sensing classification has long been a primary object of study for many researchers, and a large amount of research has been carried out on classification methods.
In general, remote sensing image classification methods can be divided into unsupervised classification and supervised classification. In supervised classification, the classifier is trained with a large number of samples carrying class labels; in many practical applications, labeled samples are scarce and the cost of obtaining a large number of labeled samples is very high. Unsupervised classification refers to automatic discrimination of classes computed from the similarity between pixels, without any prior class samples. Semi-supervised learning, unlike traditional supervised learning, can learn from a small amount of labeled data and a large amount of unlabeled data at the same time, thereby improving performance.
At present, many learning tasks cannot produce a learner with satisfactory accuracy because labeled data are too scarce. To solve this supervision problem, transfer learning is introduced. Its basic idea is to use knowledge from a related domain, whether or not combined with knowledge from the target domain, to build a learning algorithm for the target domain and complete the related learning task. Traditional machine learning algorithms assume that the source domain and the target domain have the same probability distribution. When the source domain and the target domain are distributed differently, the transfer effect is greatly weakened and negative transfer may even occur; multi-view transfer learning is therefore proposed, which describes an object more completely from different viewing angles. The description of a thing often has many attributes; for example, to analyze the meaning of a person's statement there are attributes such as mouth shape, voice and body movement, and analysis from each angle yields one facet of its real meaning. Combining the analysis results of all angles and coordinating multiple training data domains, a relatively reliable and effective learner can be trained in the target domain. In this work, the source domain is trained from two different views to obtain two different learners, and sample transfer is then carried out.
Co-training is a popular semi-supervised learning algorithm and has become a research hotspot in machine learning and pattern recognition. The original co-training algorithm (also called the standard co-training algorithm) was proposed by A. Blum and T. Mitchell [BlumM98] in 1998. To date, the two models of most co-training methods extract only shallow features, and during classifier training the introduction of unlabeled data easily produces noise, causing training to drift and reducing classification accuracy. Therefore, a pixel-level classification algorithm based on multi-view deep semi-supervised and adaptive cross-domain sample transfer learning is proposed. Deep features are extracted using the samples of a similar region and a small number of labeled samples of the target domain, and the class probabilities of unlabeled samples are computed in combination with the structural information of the sample data themselves, thereby improving the accuracy of selecting the classes of unlabeled data and reducing the introduction of noise. Comparative experiments carried out on the data set of the IGRSS2017 data fusion contest demonstrate the effectiveness of the algorithm.
Content of the invention
In view of the above deficiencies in the prior art, the technical problem to be solved by the present invention is to provide a multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning, so as to improve the classification accuracy in the target domain and to solve the problems of insufficient training samples and of inconsistent distributions of the training and test samples.
The present invention adopts the following technical scheme:
A multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning: when training samples are insufficient, a source domain similar to the target domain and unlabeled samples of the target domain are used to assist the classification of target-domain data; the classification result maps and the co-training and kNN training samples are updated; the two co-trained CNN networks are then retrained and the above steps are repeated; the labeled points of the test hyperspectral image are input into the co-training models to predict classification results, two classification results are obtained, the classification result with the higher confidence is selected as the final output result and drawn as an RGB color result map, which is compared with the RGB color map of the labeled ground truth, and the classification accuracy is calculated.
Preferably, the method comprises the following steps:
S1, input the multispectral remote sensing image of the training samples, and extract the training data set D and the kNN data K1 and K2 according to the ground truth;
S2, divide the training data set D into two parts and train two different CNN models CNN1 and CNN2 separately;
S3, input the multispectral image of the test city to be classified, slide over the test data of the test city pixel by pixel, take 28 × 28 × 9 data blocks and predict the class label values one by one; input all the test data into the two trained CNN models: the CNN1 model yields the label result map F1 and the CNN2 model yields the label result map F2;
S4, construct two kNN nearest-neighbor classifiers kNN1 and kNN2, each with 50 samples, from the training samples K1 and K2 of step S1;
S5, extract the test data A1 and A2 according to the two label result maps F1 and F2 of step S3, and classify A1 and A2 with the kNN nearest-neighbor classifiers constructed in step S4;
S6, update the label result maps of step S5;
S7, update the co-training samples and the kNN training samples according to step S6;
S8, retrain the two co-trained CNN networks and repeat steps S3 to S7;
S9, classify the labeled points of the test data set with the trained models, obtain the classes of part of the pixels in the test data set, and compare them with the true labels.
Preferably, step S1 specifically includes the following:
S11, according to the positions of the labeled points in the ground truth of the training city, take out the positions of 200 labeled points per class;
S12, according to the positions of the labeled points in the ground truth of the test city, take out the positions of 10 labeled points per class;
S13, stack the 9 bands of the test city of step S12 into a 666 × 643 × 9 data block; according to the positions of the 100 extracted data points, take the corresponding 3 × 3 × 9 data blocks and divide them, together with their labels, into two parts, obtaining the two kNN data sets K1 and K2;
S14, stack the 9 bands of the test city into a 666 × 643 × 9 data block; centered on the positions of the extracted labeled points, with the labels of each city corresponding to the data of that city, take 28 × 28 × 9 data blocks and assign the label of the center point of each block to the whole block, obtaining the overall training data set D with data size 2100 × 28 × 28 × 9 and label size 2100 × 1.
Preferably, step S2 is specifically:
S21, shuffle the obtained data set randomly; randomly divide the 200 samples per class of the training city and the 10 samples per class of the test city into two parts, and combine the 10 classes of data into the data sets D1 and D2, each containing 1050 samples;
S22, select an 8-layer convolutional neural network CNN1 composed of input layer → convolutional layer → pooling layer → convolutional layer → pooling layer → fully connected layer → fully connected layer → softmax classifier, and train the CNN1 network with data set D1;
S23, likewise select an 8-layer convolutional neural network CNN2 with a structure similar to CNN1 but with different convolution kernel sizes and numbers of feature maps, and train the CNN2 network with data set D2.
Preferably, in step S22, the 1st layer (input layer) has 9 feature maps; the 2nd layer (convolutional layer) has 32 feature maps with filter size 3; the 3rd layer (pooling layer) has down-sampling size 2; the 4th layer (convolutional layer) has 64 feature maps with filter size 3; the 5th layer (pooling layer) has down-sampling size 2; the 6th layer (fully connected layer) has 512 feature maps; the 7th layer (fully connected layer) has 256 feature maps; the 8th layer (softmax classifier) has 10 feature maps.
Preferably, in step S23, the 1st layer (input layer) has 9 feature maps; the 2nd layer (convolutional layer) has 24 feature maps with filter size 3; the 3rd layer (pooling layer) has down-sampling size 2; the 4th layer (convolutional layer) has 48 feature maps with filter size 3; the 5th layer (pooling layer) has down-sampling size 2; the 6th layer (fully connected layer) has 256 feature maps; the 7th layer (fully connected layer) has 128 feature maps; the 8th layer (softmax classifier) has 10 feature maps.
Preferably, in step S3, according to the confidence of the softmax classifier, the highest softmax confidence and the predicted label are saved; the sizes of F1 and F2 are 988 × 1160 × 2.
Preferably, step S5 is specifically:
S51, according to the positions of the non-zero coordinate values in label result map F1, select the positions of the 6000 samples with the highest confidence, take the corresponding 3 × 3 data blocks from the original test data to obtain A1, use K1 as the kNN original training data space, and test A1 point by point to obtain the test result L1';
S52, according to the positions of the non-zero coordinate values in label result map F2, select the positions of the 6000 samples with the highest confidence, take the corresponding 3 × 3 data blocks from the original test data to obtain A2, use K2 as the kNN original training data space, and test A2 point by point to obtain the test result L2'.
Preferably, step S6 is specifically:
S61, compare the test result L1' of the label result map F1 of step S5 with the softmax classification result F1; if the kNN classification result is identical to the softmax classification result, keep the predicted result; if they differ, the prediction is considered unreliable; updating yields the new classification result map F1;
S62, compare the test result L2' of the label result map F2 of step S5 with the softmax classification result F2, classifying the test samples with kNN; if the kNN classification result is identical to the softmax classification result, keep the predicted result; if they differ, the prediction is considered unreliable; updating yields the new classification result map F2.
Preferably, step S7 is specifically:
S71, select test samples according to the updated label result map F1 and sort them by confidence within each class; select the 1000 samples with the highest confidence per class, take the positions of the samples, take the corresponding 3 × 3 data blocks, and save them with the corresponding class label values in T1;
S72, for each class, compute the distances between the 1000 samples in T1 and the 5 samples of the corresponding class in K1, select the 20 nearest samples, keep their position information, take the corresponding 28 × 28 data blocks from the original data block, add them to the training data D2 of CNN2, and halve the original data D2;
S73, select test samples according to the updated label result map F2 and sort them by confidence within each class; select the 1000 samples with the highest confidence per class, take the positions of the samples, take the corresponding 3 × 3 data blocks, and save them with the corresponding class label values in T2;
S74, for each class, compute the distances between the 1000 samples in T2 and the 5 samples of the corresponding class in K2, select the 20 nearest samples, keep their position information, take the corresponding 28 × 28 data blocks from the original data block, add them to the training data D1 of CNN1, and halve the original data D1;
S75, take the 3 × 3 data blocks corresponding to the position information of the 20 samples obtained in step S72 as the new training samples of kNN1;
S76, take the 3 × 3 data blocks corresponding to the position information of the 20 samples obtained in step S74 as the new training samples of kNN2.
Compared with the prior art, the present invention has at least the following beneficial effects:
In the multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning of the present invention, when training samples are insufficient, a source domain similar to the target domain and unlabeled samples of the target domain are used to assist the classification of target-domain data; the classification result maps and the co-training and kNN training samples are updated; the two co-trained CNN networks are then retrained and the above steps are repeated; the labeled points of the test hyperspectral image are input into the co-training models to predict classification results, two classification results are obtained, and the classification result with the higher confidence is selected as the final output and drawn as an RGB color result map, which is compared with the RGB color map of the labeled ground truth to compute the classification accuracy. Two CNN models with different structures are trained cooperatively; compared with the shallow features extracted by other classifiers, the features obtained by a deep CNN give higher accuracy and better classification results. The present invention is based on deep semi-supervision and introduces the k-nearest-neighbor algorithm and sample similarity to prevent co-training from drifting, improves classification accuracy when training samples are insufficient, and can be used for target recognition.
Further, a convolutional neural network can learn the mapping relationship between a large number of inputs and outputs without requiring an exact mathematical expression between input and output; as long as the network is trained with known patterns, it acquires the mapping ability between input and output. An 8-layer convolutional neural network model is used so that the number of network parameters is not too large yet sufficient to learn suitable parameters; the features of the data can be extracted and the class of each pixel obtained by the classifier.
Further, combining the classification results of the high-level features extracted by the CNN with the kNN classification results prevents co-training from drifting to a certain extent and improves the correctness of the samples added to the training set.
Further, for the problem of insufficient training samples, the deep semi-supervised model is trained with data from a similar region that has enough labeled samples and from the target domain; samples of the target domain with high classification confidence are gradually migrated into the training set while samples from the source domain are gradually reduced, which increases the influence of the target domain and reduces the influence of samples from the similar region.
Further, after the deep semi-supervised model, the present invention filters out misclassified samples according to sample similarity, combining feature representation and sample similarity to realize adaptive learning of the domain.
The technical scheme of the present invention is described in further detail below with reference to the drawings and embodiments.
Brief description of the drawings
Fig. 1 is a flow chart of an implementation of the present invention;
Fig. 2 is the true label map of the image to be classified;
Fig. 3 is the result map of classification using target-domain samples;
Fig. 4 is the result map of classification using source-domain and target-domain samples;
Fig. 5 is the classification result map of the present invention for the image to be classified;
Fig. 6 is a block diagram of the method of the present invention.
Embodiments
The present invention provides a multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning. When training samples are insufficient, a source domain similar to the target domain and unlabeled samples of the target domain are used to assist the classification of target-domain data, specifically: input the multispectral remote sensing image of the training samples, and extract the training data set D and the kNN data K1 and K2 according to the ground truth; divide the training data set D into two parts and train two different CNN models CNN1 and CNN2 separately; input the multispectral image to be classified and obtain two classification result maps F1 and F2 from the two CNN models; construct two kNN nearest-neighbor classifiers, kNN1 and kNN2, from the training samples K1 and K2; extract the test data A1 and A2 according to F1 and F2 and classify the data with the kNN nearest-neighbor algorithm; update the classification result maps; update the co-training samples and the kNN training samples; retrain the two co-trained CNN networks and repeat the above steps; classify the labeled points of the test data set with the trained models, obtain the classes of part of the pixels in the test data set, and compare them with the true labels.
Referring to Fig. 1 and Fig. 6, the present invention proposes a new method of deep semi-supervised and multi-view sample transfer learning for pixel-level terrain classification of multispectral remote sensing images, comprising the following steps:
S1, input the multispectral remote sensing image of the training samples, and extract the training data set D and the kNN data K1 and K2 according to the ground truth;
The training samples are the multispectral remote sensing image to be classified; Berlin is chosen as the training city, and a city image of Berlin with 9 bands captured by the Landsat 8 satellite is selected, in which only some of the samples are labeled; the image size is 666 × 643;
The test samples are the multispectral remote sensing image of the source domain; Paris is chosen as the test city, and a city image of Paris with 9 bands captured by the Landsat 8 satellite is likewise selected; the image size is 1160 × 998; the two cities share the same 10 classes of labels;
S11, according to the positions of the labeled points in the ground truth of the training city Berlin, take out the positions of 200 labeled points per class;
S12, according to the positions of the labeled points in the ground truth of the test city Paris, take out the positions of 10 labeled points per class;
S13, stack the 9 bands of the test city Paris into a 666 × 643 × 9 data block; according to the positions of the 100 extracted data points, take the corresponding 3 × 3 × 9 data blocks and divide them, together with their labels, into two parts, obtaining the two kNN data sets K1 and K2;
S14, stack the 9 bands of the test city Paris into a 666 × 643 × 9 data block; centered on the positions of the extracted labeled points, with the labels of each city corresponding to the data of that city, take 28 × 28 × 9 data blocks and assign the label of the center point of each block to the whole block; the overall training data set D is obtained, with data size 2100 × 28 × 28 × 9 and label size 2100 × 1.
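The block extraction of steps S13 and S14 can be illustrated with a minimal NumPy sketch; the function name extract_patches, the reflect padding at the borders and the random cube standing in for the stacked Landsat 8 bands are illustrative assumptions, not part of the original description.

```python
import numpy as np

def extract_patches(cube, positions, labels, patch):
    """Cut patch x patch x bands blocks from a stacked band cube (H x W x bands),
    centered on the given (row, col) positions; the label of the center pixel is
    assigned to the whole block, as in step S14."""
    half = patch // 2
    padded = np.pad(cube, ((half, half), (half, half), (0, 0)), mode="reflect")
    blocks = [padded[r:r + patch, c:c + patch, :] for r, c in positions]
    return np.stack(blocks), np.asarray(labels)

# Tiny demo with random data in place of the stacked 666 x 643 x 9 band cube:
cube = np.random.rand(666, 643, 9).astype(np.float32)
positions = [(10, 20), (300, 400)]
labels = [3, 7]
cnn_blocks, cnn_labels = extract_patches(cube, positions, labels, patch=28)  # 28 x 28 x 9 CNN blocks
knn_blocks, knn_labels = extract_patches(cube, positions, labels, patch=3)   # 3 x 3 x 9 kNN blocks
print(cnn_blocks.shape, knn_blocks.shape)  # (2, 28, 28, 9) (2, 3, 3, 9)
```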
S2, divide the training data set D into two parts and train two different CNN models CNN1 and CNN2 separately;
S21, shuffle the obtained data set randomly; randomly divide the 200 samples per class of the training city Berlin and the 10 samples per class of the test city Paris into two parts, combining the 10 classes of data into the data sets D1 and D2, each containing 1050 samples;
S22, select an 8-layer convolutional neural network CNN1 composed of input layer → convolutional layer → pooling layer → convolutional layer → pooling layer → fully connected layer → fully connected layer → softmax classifier, and train the CNN1 network with data set D1;
For the 1st layer (input layer), the number of feature maps is set to 9;
For the 2nd layer (convolutional layer), the number of feature maps is set to 32 and the filter size to 3;
For the 3rd layer (pooling layer), the down-sampling size is set to 2;
For the 4th layer (convolutional layer), the number of feature maps is set to 64 and the filter size to 3;
For the 5th layer (pooling layer), the down-sampling size is set to 2;
For the 6th layer (fully connected layer), the number of feature maps is set to 512;
For the 7th layer (fully connected layer), the number of feature maps is set to 256;
For the 8th layer (softmax classifier), the number of feature maps is set to 10 (10 classes).
S23, likewise select an 8-layer convolutional neural network CNN2 with a structure similar to CNN1 but with different convolution kernel sizes and numbers of feature maps, and train the CNN2 network with data set D2.
For the 1st layer (input layer), the number of feature maps is set to 9;
For the 2nd layer (convolutional layer), the number of feature maps is set to 24 and the filter size to 3;
For the 3rd layer (pooling layer), the down-sampling size is set to 2;
For the 4th layer (convolutional layer), the number of feature maps is set to 48 and the filter size to 3;
For the 5th layer (pooling layer), the down-sampling size is set to 2;
For the 6th layer (fully connected layer), the number of feature maps is set to 256;
For the 7th layer (fully connected layer), the number of feature maps is set to 128;
For the 8th layer (softmax classifier), the number of feature maps is set to 10 (10 classes).
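The two networks described above can be sketched in Keras as follows; this is a minimal sketch in which the activation functions, padding and optimizer are assumptions, since the description only fixes the layer types, feature-map counts, filter sizes and pooling sizes.

```python
import tensorflow as tf

def build_cnn(conv1, conv2, fc1, fc2, n_classes=10, bands=9):
    """8-layer network of the form described above: input -> conv -> pool -> conv
    -> pool -> fully connected -> fully connected -> softmax. The feature-map
    counts are parameters, so one builder covers CNN1 (32, 64, 512, 256) and
    CNN2 (24, 48, 256, 128)."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, bands)),                                # layer 1: input, 9 bands
        tf.keras.layers.Conv2D(conv1, 3, padding="same", activation="relu"),  # layer 2: convolution
        tf.keras.layers.MaxPooling2D(2),                                      # layer 3: pooling
        tf.keras.layers.Conv2D(conv2, 3, padding="same", activation="relu"),  # layer 4: convolution
        tf.keras.layers.MaxPooling2D(2),                                      # layer 5: pooling
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(fc1, activation="relu"),                        # layer 6: fully connected
        tf.keras.layers.Dense(fc2, activation="relu"),                        # layer 7: fully connected
        tf.keras.layers.Dense(n_classes, activation="softmax"),               # layer 8: softmax classifier
    ])

cnn1 = build_cnn(32, 64, 512, 256)
cnn2 = build_cnn(24, 48, 256, 128)
for m in (cnn1, cnn2):
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# cnn1 would then be trained on D1 and cnn2 on D2, e.g. cnn1.fit(D1, labels_D1, ...).
```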
S3, input the multispectral image to be classified and obtain the two classification result maps F1 and F2 from the two CNN models;
The test data are the city of Paris; slide over the test data pixel by pixel, take 28 × 28 × 9 data blocks, predict the class label values one by one, and input all the test data into the two trained CNN models to obtain two label result maps: the CNN1 model yields label result map F1 and the CNN2 model yields label result map F2.
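A minimal sketch of this pixel-by-pixel sliding prediction, assuming a trained Keras model and a stacked H × W × 9 test cube; the function and variable names are illustrative, and the reflect padding at the image border is an assumption.

```python
import numpy as np

def predict_label_map(model, cube, patch=28, batch=2048):
    """Slide over the test cube pixel by pixel, cut patch x patch x bands blocks,
    and keep for every pixel the predicted class and its highest softmax
    confidence, giving an H x W x 2 label/confidence result map (F1 or F2)."""
    h, w, _ = cube.shape
    half = patch // 2
    padded = np.pad(cube, ((half, half), (half, half), (0, 0)), mode="reflect")
    result = np.zeros((h, w, 2), dtype=np.float32)
    coords = [(r, c) for r in range(h) for c in range(w)]
    for start in range(0, len(coords), batch):
        chunk = coords[start:start + batch]
        blocks = np.stack([padded[r:r + patch, c:c + patch, :] for r, c in chunk])
        probs = model.predict(blocks, verbose=0)
        for (r, c), p in zip(chunk, probs):
            result[r, c, 0] = p.argmax()   # predicted class label
            result[r, c, 1] = p.max()      # softmax confidence
    return result

# F1 = predict_label_map(cnn1, test_cube); F2 = predict_label_map(cnn2, test_cube)
```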
S4, construct two kNN nearest-neighbor classifiers, kNN1 and kNN2, from the training samples K1 and K2, each with 50 samples;
The core idea of kNN is that if most of the k nearest samples of a sample in feature space belong to a certain class, then the sample also belongs to that class. The distance between samples is the Euclidean distance; to take neighborhood information into account, a 3 × 3 block centered on the sample point is taken when kNN classifies.
The Euclidean distance is calculated as
d(x, y) = \sqrt{\sum_{j=1}^{i} (x_j - y_j)^2}
where i is the length of the data vector; if a 3 × 3 data block over the 9 bands is taken, i = 81 (81 = 3 × 3 × 9).
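A minimal sketch of this nearest-neighbor classification on flattened 3 × 3 × 9 blocks (i = 81), assuming integer class labels; the majority-vote tie handling is an assumption.

```python
import numpy as np

def knn_classify(train_blocks, train_labels, test_blocks, k=5):
    """Classify each flattened 3 x 3 x 9 test block (i = 81) by the majority label
    of its k nearest training blocks under the Euclidean distance defined above."""
    x = train_blocks.reshape(len(train_blocks), -1).astype(np.float64)
    y = np.asarray(train_labels, dtype=int)
    q = test_blocks.reshape(len(test_blocks), -1).astype(np.float64)
    out = np.empty(len(q), dtype=int)
    for n, sample in enumerate(q):
        d = np.sqrt(((x - sample) ** 2).sum(axis=1))          # Euclidean distance to every training block
        out[n] = np.bincount(y[np.argsort(d)[:k]]).argmax()   # majority vote among the k neighbors
    return out
```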
S5, extract the test data A1 and A2 according to F1 and F2, and classify the data with the kNN nearest-neighbor algorithm;
S51, according to the positions of the non-zero coordinate values in label result map F1, select the positions of the 6000 samples with the highest confidence per class, take the corresponding 3 × 3 data blocks from the original test data to obtain A1; the original training data space of kNN (kNN1) is K1, and testing A1 point by point yields the test result L1';
S52, according to the positions of the non-zero coordinate values in label result map F2, select the positions of the 6000 samples with the highest confidence per class, take the corresponding 3 × 3 data blocks from the original test data to obtain A2; the original training data space of kNN (kNN2) is K2, and testing A2 point by point yields the test result L2'.
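Steps S51 and S52 can be sketched as follows, assuming F1 and F2 store the predicted label and the softmax confidence per pixel as described in step S3; the per-class selection is simplified here to a single global top-n, and names such as test_cube and K1_blocks are placeholders.

```python
import numpy as np

def select_confident_blocks(label_map, cube, n_select=6000, size=3):
    """From an H x W x 2 label/confidence map, keep the positions of the n_select
    predictions with the highest softmax confidence and cut the corresponding
    size x size blocks from the test cube for kNN re-testing."""
    h, w, _ = label_map.shape
    order = np.argsort(label_map[:, :, 1].ravel())[::-1][:n_select]   # most confident first
    rows, cols = np.unravel_index(order, (h, w))
    half = size // 2
    padded = np.pad(cube, ((half, half), (half, half), (0, 0)), mode="reflect")
    blocks = np.stack([padded[r:r + size, c:c + size, :] for r, c in zip(rows, cols)])
    softmax_labels = label_map[rows, cols, 0].astype(int)
    return blocks, softmax_labels, np.stack([rows, cols], axis=1)

# A1, softmax1, pos1 = select_confident_blocks(F1, test_cube)
# L1_prime = knn_classify(K1_blocks, K1_labels, A1)   # kNN re-test of step S51
```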
S6, update the label result maps;
S61, compare the test result L1' with the softmax classification result F1; if the kNN (kNN1) classification result is identical to the softmax classification result, keep the predicted result; if they differ, the prediction is considered unreliable; updating yields the new label result map F1;
S62, compare the test result L2' with the softmax classification result F2; if the kNN (kNN2) classification result is identical to the softmax classification result, keep the predicted result; if they differ, the prediction is considered unreliable; updating yields the new label result map F2;
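One possible sketch of this agreement check; how a rejected prediction is recorded is not specified in the text, so clearing its confidence is an illustrative assumption made here.

```python
import numpy as np

def agree_update(label_map, positions, knn_labels):
    """Keep a prediction only where the kNN result agrees with the softmax result;
    where they disagree the prediction is treated as unreliable (its confidence
    is cleared here as an illustrative choice)."""
    updated = label_map.copy()
    for (r, c), knn_y in zip(positions, knn_labels):
        if int(updated[r, c, 0]) != int(knn_y):
            updated[r, c, 1] = 0.0    # mark the disagreeing prediction as unreliable
    return updated

# F1 = agree_update(F1, pos1, L1_prime)
# F2 = agree_update(F2, pos2, L2_prime)
```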
S7, update the co-training samples and the kNN training samples;
S71, select test samples according to the updated label result map F1 and sort them by confidence within each class; select the 1000 samples with the highest confidence per class, take the positions of the samples, take the corresponding 3 × 3 data blocks, and save them with the corresponding class label values in T1;
S72, for each class, compute the distances between the 1000 samples in T1 and the 5 samples of the corresponding class in K1, select the 20 nearest samples, keep their position information, take the corresponding 28 × 28 data blocks from the original data block, add them to the training data D2 of CNN2, and halve the original data D2.
S73, select test samples according to the updated label result map F2 and sort them by confidence within each class; select the 1000 samples with the highest confidence per class, take the positions of the samples, take the corresponding 3 × 3 data blocks, and save them with the corresponding class label values in T2;
S74, for each class, compute the distances between the 1000 samples in T2 and the 5 samples of the corresponding class in K2, select the 20 nearest samples, keep their position information, take the corresponding 28 × 28 data blocks from the original data block, add them to the training data D1 of CNN1, and halve the original data D1.
S75, take the 3 × 3 data blocks corresponding to the position information of the 20 samples obtained from T1 as the new training samples of kNN1; because the new samples are taken from the test set and the data per class change from the original 5 samples to 20 samples, the confidence of the next classification is higher;
S76, likewise take the 3 × 3 data blocks corresponding to the position information of the 20 samples obtained from T2 as the new training samples of kNN2.
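One possible reading of steps S71 to S76, sketched with placeholder names (K1_blocks, test_cube); the handling of distance ties and the halving of D1 and D2 are not shown here, and the distance to the "nearest reference sample of the same class" is an interpretation of the text.

```python
import numpy as np

def migrate_samples(label_map, cube, knn_ref_blocks, knn_ref_labels,
                    per_class=1000, keep=20, n_classes=10, big=28, small=3):
    """Per class: take the per_class most confident test pixels from the updated
    label map, measure their Euclidean distance (on flattened small x small blocks)
    to the reference samples of the same class in K1/K2, keep the `keep` nearest,
    and return their 28 x 28 CNN blocks and 3 x 3 kNN blocks with pseudo-labels."""
    hb, hs = big // 2, small // 2
    pad_b = np.pad(cube, ((hb, hb), (hb, hb), (0, 0)), mode="reflect")
    pad_s = np.pad(cube, ((hs, hs), (hs, hs), (0, 0)), mode="reflect")
    ref = knn_ref_blocks.reshape(len(knn_ref_blocks), -1)
    cnn_blocks, knn_blocks, labels = [], [], []
    for cls in range(n_classes):
        class_ref = ref[np.asarray(knn_ref_labels) == cls]
        rows, cols = np.nonzero(label_map[:, :, 0] == cls)
        if len(rows) == 0 or len(class_ref) == 0:
            continue
        order = np.argsort(label_map[rows, cols, 1])[::-1][:per_class]
        rows, cols = rows[order], cols[order]
        cand = np.stack([pad_s[r:r + small, c:c + small, :] for r, c in zip(rows, cols)])
        cand = cand.reshape(len(cand), -1)
        # distance of every candidate to its nearest reference sample of the same class
        dist = np.linalg.norm(cand[:, None, :] - class_ref[None, :, :], axis=2).min(axis=1)
        for idx in np.argsort(dist)[:keep]:
            r, c = rows[idx], cols[idx]
            cnn_blocks.append(pad_b[r:r + big, c:c + big, :])
            knn_blocks.append(pad_s[r:r + small, c:c + small, :])
            labels.append(cls)
    return np.stack(cnn_blocks), np.stack(knn_blocks), np.array(labels)

# new_cnn_blocks, new_knn_blocks, pseudo_labels = migrate_samples(F1, test_cube, K1_blocks, K1_labels)
# The 28 x 28 blocks would be added to the training data of the other CNN (here CNN2),
# and the 3 x 3 blocks become the new kNN training samples, as described in S72 and S75.
```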
S8, retrain the two co-trained CNN networks and repeat steps S3 to S7;
After the training data of the co-trained CNN and kNN models have been updated, the models are retrained and tested for classification, and the samples are then updated again; this cycle repeats until the models are trained.
S9, classify the labeled points of the test data set with the trained models, obtain the classes of part of the pixels in the test data set, compare them with the true labels, and output the classification result map.
The labeled points of the test hyperspectral image are input into the co-training models to predict classification results; two classification results are obtained, the classification result with the higher confidence is selected as the final output result and drawn as an RGB color result map, which is compared with the RGB color map of the labeled ground truth, and the classification accuracy is calculated.
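A minimal sketch of the final accuracy computation over the labeled ground-truth pixels; marking unlabeled pixels as 0 and the names final_map and ground_truth are assumptions for illustration.

```python
import numpy as np

def classification_accuracy(pred_map, gt_map):
    """Compare the predicted class of every labeled ground-truth pixel with the
    true label and return the overall classification accuracy; pixels whose
    ground-truth value is 0 are treated as unlabeled and skipped."""
    labeled = gt_map > 0
    return float((pred_map[labeled] == gt_map[labeled]).mean())

# final_map would hold, for every pixel, the prediction of whichever co-trained
# model reported the higher softmax confidence (built from F1 and F2);
# accuracy = classification_accuracy(final_map, ground_truth)
```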
Simulation conditions:
The hardware platform is: Intel(R) Xeon(R) CPU E5-2630, 2.40 GHz × 16, with 64 GB of memory.
The software platform is: TensorFlow.
Simulation content and results:
The method of the present invention was tested under the above simulation conditions: from the 10 classes of the training city Berlin, 200 samples per class were chosen, and from the 10 classes of the city of Paris, 10 samples per class were chosen, giving 2100 samples in total as the training set; all the labeled samples of the 10 classes in the test city Paris were used as the test set. Training and testing directly with a convolutional neural network yields the classification result map of Fig. 3; training and testing with the method of the present invention yields the classification result map of Fig. 4.
Fig. 3 is the result map of classification using target-domain samples only, with a classification accuracy of 74.3%; Fig. 4 is the classification result map using source-domain and target-domain samples, with an accuracy of 76.8%. It can be seen that the classification result of Fig. 4 is better than that of Fig. 3, because adding source-domain samples to assist the target-domain classification makes the classification results more accurate.
In the first step, the classification accuracy of the present invention on the test data set is compared with that of a convolutional neural network; the parameters and results are shown in the following table:
Network parameter | model_1 | model_2
Learning rate | 0.007/0.95 | 0.007/0.95
Iterations | 100 | 100
Network structure | 32-64-512-10 | 24-48-256-10
kNN radius | 1 | 1
batch_size | 64 | 64
The accuracy results obtained are shown in the following table:
Test set accuracy | model_1 | model_2
1st run | 71.5% | 76.4%
2nd run | 71.8% | 78.0%
3rd run | 74.3% | 77.8%
4th run | 73.8% | 78.0%
Combining the prediction results of model_1 and model_2, the final classification accuracy is 78.24%.
In the second step, the overall data set D is used to train a single convolutional neural network with the parameters: learning rate 0.01, decay rate 0.95, batch_size 64, 280 iterations, network structure: 32 64 512.
The final classification accuracy is 76.8%.
In the third step, a convolutional neural network is trained with the 10 samples per class of the city of Paris; since the amount of training data is relatively small, 100 training samples in total, the parameters are set to: learning rate 0.01, decay rate 0.99, batch_size 64, 2000 iterations, network structure: 32 64 512.
The final classification accuracy is 74.3%.
Referring to Fig. 2 and Fig. 5: Fig. 2 is the true label result map and Fig. 5 is the classification result map of the present invention; the classification accuracy of the present invention is 78.24%. It can be seen that the classification result of Fig. 5 is better than those of Fig. 4 and Fig. 3: it is closer to the true result map, and the overall classification result is smoother, without much noise, which shows that under the same sample conditions the deep semi-supervised adaptive transfer learning method of the present invention helps to improve classification accuracy.
Through deep semi-supervised sample transfer learning, the present invention trains in the target domain on a similar region with enough labeled samples and on a small number of labeled samples of the target domain, effectively improving the expressive power of image features and enhancing the generalization ability of the model; domain adaptation is realized according to sample similarity and feature representation, and useful feature information of the similar region is automatically selected to assist classification, so that a higher classification accuracy is obtained when training samples are insufficient.

Claims (10)

1. A multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning, characterized in that, when training samples are insufficient, a source domain similar to the target domain and unlabeled samples of the target domain are used to assist the classification of target-domain data; the classification result maps and the co-training and kNN training samples are updated; the two co-trained CNN networks are then retrained and the above steps are repeated; the labeled points of the test hyperspectral image are input into the co-training models to predict classification results, two classification results are obtained, the classification result with the higher confidence is selected as the final output result and drawn as an RGB color result map, which is compared with the RGB color map of the labeled ground truth, and the classification accuracy is calculated.
2. The multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning according to claim 1, characterized by comprising the following steps:
S1, input the multispectral remote sensing image of the training samples, and extract the training data set D and the kNN data K1 and K2 according to the ground truth;
S2, divide the training data set D into two parts and train two different CNN models CNN1 and CNN2 separately;
S3, input the multispectral image of the test city to be classified, slide over the test data of the test city pixel by pixel, take 28 × 28 × 9 data blocks and predict the class label values one by one; input all the test data into the two trained CNN models: the CNN1 model yields the label result map F1 and the CNN2 model yields the label result map F2;
S4, construct two kNN nearest-neighbor classifiers kNN1 and kNN2, each with 50 samples, from the training samples K1 and K2 of step S1;
S5, extract the test data A1 and A2 according to the two label result maps F1 and F2 of step S3, and classify the data A1 and A2 with the kNN nearest-neighbor classifiers constructed in step S4;
S6, update the label result maps of step S5;
S7, update the co-training samples and the kNN training samples according to step S6;
S8, retrain the two co-trained CNN networks and repeat steps S3 to S7;
S9, classify the labeled points of the test data set with the trained models, obtain the classes of part of the pixels in the test data set, and compare them with the true labels.
3. The multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning according to claim 2, characterized in that step S1 specifically includes the following:
S11, according to the positions of the labeled points in the ground truth of the training city, take out the positions of 200 labeled points per class;
S12, according to the positions of the labeled points in the ground truth of the test city, take out the positions of 10 labeled points per class;
S13, stack the 9 bands of the test city of step S12 into a 666 × 643 × 9 data block; according to the positions of the 100 extracted data points, take the corresponding 3 × 3 × 9 data blocks and divide them, together with their labels, into two parts, obtaining the two kNN data sets K1 and K2;
S14, stack the 9 bands of the test city into a 666 × 643 × 9 data block; centered on the positions of the extracted labeled points, with the labels of each city corresponding to the data of that city, take 28 × 28 × 9 data blocks and assign the label of the center point of each block to the whole block, obtaining the overall training data set D with data size 2100 × 28 × 28 × 9 and label size 2100 × 1.
4. The multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning according to claim 2, characterized in that step S2 is specifically:
S21, shuffle the obtained data set randomly; randomly divide the 200 samples per class of the training city and the 10 samples per class of the test city into two parts, and combine the 10 classes of data into the data sets D1 and D2, each containing 1050 samples;
S22, select an 8-layer convolutional neural network CNN1 composed of input layer → convolutional layer → pooling layer → convolutional layer → pooling layer → fully connected layer → fully connected layer → softmax classifier, and train the CNN1 network with data set D1;
S23, likewise select an 8-layer convolutional neural network CNN2 with a structure similar to CNN1 but with different convolution kernel sizes and numbers of feature maps, and train the CNN2 network with data set D2.
5. The multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning according to claim 4, characterized in that, in step S22, the 1st layer (input layer) has 9 feature maps; the 2nd layer (convolutional layer) has 32 feature maps with filter size 3; the 3rd layer (pooling layer) has down-sampling size 2; the 4th layer (convolutional layer) has 64 feature maps with filter size 3; the 5th layer (pooling layer) has down-sampling size 2; the 6th layer (fully connected layer) has 512 feature maps; the 7th layer (fully connected layer) has 256 feature maps; the 8th layer (softmax classifier) has 10 feature maps.
6. The multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning according to claim 4, characterized in that, in step S23, the 1st layer (input layer) has 9 feature maps; the 2nd layer (convolutional layer) has 24 feature maps with filter size 3; the 3rd layer (pooling layer) has down-sampling size 2; the 4th layer (convolutional layer) has 48 feature maps with filter size 3; the 5th layer (pooling layer) has down-sampling size 2; the 6th layer (fully connected layer) has 256 feature maps; the 7th layer (fully connected layer) has 128 feature maps; the 8th layer (softmax classifier) has 10 feature maps.
7. The multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning according to claim 2, characterized in that, in step S3, according to the confidence of the softmax classifier, the highest softmax confidence and the predicted label are saved; the sizes of F1 and F2 are 988 × 1160 × 2.
8. The multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning according to claim 2, characterized in that step S5 is specifically:
S51, according to the positions of the non-zero coordinate values in label result map F1, select the positions of the 6000 samples with the highest confidence, take the corresponding 3 × 3 data blocks from the original test data to obtain A1, use K1 as the kNN original training data space, and test A1 point by point to obtain the test result L1';
S52, according to the positions of the non-zero coordinate values in label result map F2, select the positions of the 6000 samples with the highest confidence, take the corresponding 3 × 3 data blocks from the original test data to obtain A2, use K2 as the kNN original training data space, and test A2 point by point to obtain the test result L2'.
9. The multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning according to claim 2, characterized in that step S6 is specifically:
S61, compare the test result L1' of the label result map F1 of step S5 with the softmax classification result F1; if the kNN classification result is identical to the softmax classification result, keep the predicted result; if they differ, the prediction is considered unreliable; updating yields the new classification result map F1;
S62, compare the test result L2' of the label result map F2 of step S5 with the softmax classification result F2, classifying the test samples with kNN; if the kNN classification result is identical to the softmax classification result, keep the predicted result; if they differ, the prediction is considered unreliable; updating yields the new classification result map F2.
10. The multispectral remote sensing image terrain classification method based on deep semi-supervised transfer learning according to claim 2, characterized in that step S7 is specifically:
S71, select test samples according to the updated label result map F1 and sort them by confidence within each class; select the 1000 samples with the highest confidence per class, take the positions of the samples, take the corresponding 3 × 3 data blocks, and save them with the corresponding class label values in T1;
S72, for each class, compute the distances between the 1000 samples in T1 and the 5 samples of the corresponding class in K1, select the 20 nearest samples, keep their position information, take the corresponding 28 × 28 data blocks from the original data block, add them to the training data D2 of CNN2, and halve the original data D2;
S73, select test samples according to the updated label result map F2 and sort them by confidence within each class; select the 1000 samples with the highest confidence per class, take the positions of the samples, take the corresponding 3 × 3 data blocks, and save them with the corresponding class label values in T2;
S74, for each class, compute the distances between the 1000 samples in T2 and the 5 samples of the corresponding class in K2, select the 20 nearest samples, keep their position information, take the corresponding 28 × 28 data blocks from the original data block, add them to the training data D1 of CNN1, and halve the original data D1;
S75, take the 3 × 3 data blocks corresponding to the position information of the 20 samples obtained in step S72 as the new training samples of kNN1;
S76, take the 3 × 3 data blocks corresponding to the position information of the 20 samples obtained in step S74 as the new training samples of kNN2.
CN201710648900.0A 2017-08-01 2017-08-01 Multi-spectral remote sensing image terrain classification method based on the semi-supervised transfer learning of depth Pending CN107451616A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710648900.0A CN107451616A (en) 2017-08-01 2017-08-01 Multi-spectral remote sensing image terrain classification method based on the semi-supervised transfer learning of depth

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710648900.0A CN107451616A (en) 2017-08-01 2017-08-01 Multi-spectral remote sensing image terrain classification method based on the semi-supervised transfer learning of depth

Publications (1)

Publication Number Publication Date
CN107451616A true CN107451616A (en) 2017-12-08

Family

ID=60489366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710648900.0A Pending CN107451616A (en) 2017-08-01 2017-08-01 Multi-spectral remote sensing image terrain classification method based on the semi-supervised transfer learning of depth

Country Status (1)

Country Link
CN (1) CN107451616A (en)

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108051233A (en) * 2017-12-16 2018-05-18 太原理工大学 A kind of soft sensing method for load parameter of ball mill
CN108229425A (en) * 2018-01-29 2018-06-29 浙江大学 A kind of identifying water boy method based on high-resolution remote sensing image
CN108345856A (en) * 2018-02-09 2018-07-31 电子科技大学 The SAR automatic target recognition methods integrated based on isomery convolutional neural networks
CN108399420A (en) * 2018-01-30 2018-08-14 北京理工雷科电子信息技术有限公司 A kind of visible light naval vessel false-alarm elimination method based on depth convolutional network
CN108805160A (en) * 2018-04-17 2018-11-13 平安科技(深圳)有限公司 Transfer learning method, apparatus, computer equipment and storage medium
CN109034224A (en) * 2018-07-16 2018-12-18 西安电子科技大学 Hyperspectral classification method based on double branching networks
CN109146847A (en) * 2018-07-18 2019-01-04 浙江大学 A kind of wafer figure batch quantity analysis method based on semi-supervised learning
CN109447149A (en) * 2018-10-25 2019-03-08 腾讯科技(深圳)有限公司 A kind of training method of detection model, device and terminal device
CN109583492A (en) * 2018-11-26 2019-04-05 平安科技(深圳)有限公司 A kind of method and terminal identifying antagonism image
CN109784392A (en) * 2019-01-07 2019-05-21 华南理工大学 A kind of high spectrum image semisupervised classification method based on comprehensive confidence
CN109919073A (en) * 2019-03-01 2019-06-21 中山大学 A kind of recognition methods again of the pedestrian with illumination robustness
CN110379464A (en) * 2019-07-29 2019-10-25 桂林电子科技大学 The prediction technique of DNA transcription terminator in a kind of bacterium
CN111144565A (en) * 2019-12-27 2020-05-12 中国人民解放军军事科学院国防科技创新研究院 Self-supervision field self-adaptive deep learning method based on consistency training
CN111222576A (en) * 2020-01-08 2020-06-02 西安理工大学 High-resolution remote sensing image classification method
CN111414936A (en) * 2020-02-24 2020-07-14 北京迈格威科技有限公司 Determination method of classification network, image detection method, device, equipment and medium
CN111461006A (en) * 2020-03-31 2020-07-28 哈尔滨航耀光韬科技有限公司 Optical remote sensing image tower position detection method based on deep migration learning
CN111523521A (en) * 2020-06-18 2020-08-11 西安电子科技大学 Remote sensing image classification method for double-branch fusion multi-scale attention neural network
CN112084843A (en) * 2020-07-28 2020-12-15 北京工业大学 Multispectral river channel remote sensing monitoring method based on semi-supervised learning
CN112560960A (en) * 2020-12-16 2021-03-26 北京影谱科技股份有限公司 Hyperspectral image classification method and device and computing equipment
CN112580670A (en) * 2020-12-31 2021-03-30 中国人民解放军国防科技大学 Hyperspectral-spatial-spectral combined feature extraction method based on transfer learning
CN112733859A (en) * 2021-01-25 2021-04-30 重庆大学 Depth migration semi-supervised domain self-adaptive classification method for histopathology image
CN112967296A (en) * 2021-03-10 2021-06-15 重庆理工大学 Point cloud dynamic region graph convolution method, classification method and segmentation method
CN113011513A (en) * 2021-03-29 2021-06-22 华南理工大学 Image big data classification method based on general domain self-adaption
US11047819B2 (en) * 2018-02-26 2021-06-29 Raytheon Technologies Corporation Nondestructive multispectral vibrothermography inspection system and method therefor
CN113298774A (en) * 2021-05-20 2021-08-24 复旦大学 Image segmentation method and device based on dual condition compatible neural network
CN113316790A (en) * 2019-01-30 2021-08-27 赫尔实验室有限公司 System and method for unsupervised domain adaptation via SLICED-WASSERSTEIN distance
CN113554627A (en) * 2021-07-27 2021-10-26 广西师范大学 Wheat head detection method based on computer vision semi-supervised pseudo label learning
CN113743474A (en) * 2021-08-10 2021-12-03 扬州大学 Digital picture classification method and system based on cooperative semi-supervised convolutional neural network
CN115588140A (en) * 2022-10-24 2023-01-10 北京市遥感信息研究所 Multi-spectral remote sensing image multi-directional target detection method
US20230080164A1 (en) * 2017-12-20 2023-03-16 Alpvision S.A. Authentication machine learning from multiple digital presentations

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279519A (en) * 2015-09-24 2016-01-27 四川航天系统工程研究所 Remote sensing image water body extraction method and system based on cooperative training semi-supervised learning

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279519A (en) * 2015-09-24 2016-01-27 四川航天系统工程研究所 Remote sensing image water body extraction method and system based on cooperative training semi-supervised learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KEILLER NOGUEIRA et al.: "Improving Spatial Feature Representation from Aerial Scenes by Using Convolutional Networks", 2015 28th SIBGRAPI Conference on Graphics, Patterns and Images *
杨伟 (Yang Wei): "Remote Sensing Image Classification Based on Semi-Supervised Learning", China Master's Theses Full-Text Database, Information Science and Technology Series *
滑文强等 (Hua Wenqiang et al.): "SVM-Wishart Polarimetric SAR Image Classification Method Based on Semi-Supervised Learning", 雷达学报 (Journal of Radars) *

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108051233A (en) * 2017-12-16 2018-05-18 太原理工大学 A kind of soft sensing method for load parameter of ball mill
US20230080164A1 (en) * 2017-12-20 2023-03-16 Alpvision S.A. Authentication machine learning from multiple digital presentations
CN108229425A (en) * 2018-01-29 2018-06-29 浙江大学 A kind of identifying water boy method based on high-resolution remote sensing image
CN108399420A (en) * 2018-01-30 2018-08-14 北京理工雷科电子信息技术有限公司 A kind of visible light naval vessel false-alarm elimination method based on depth convolutional network
CN108399420B (en) * 2018-01-30 2021-07-06 北京理工雷科电子信息技术有限公司 Visible light ship false alarm rejection method based on deep convolutional network
CN108345856A (en) * 2018-02-09 2018-07-31 电子科技大学 The SAR automatic target recognition methods integrated based on isomery convolutional neural networks
US11047819B2 (en) * 2018-02-26 2021-06-29 Raytheon Technologies Corporation Nondestructive multispectral vibrothermography inspection system and method therefor
CN108805160A (en) * 2018-04-17 2018-11-13 平安科技(深圳)有限公司 Transfer learning method, apparatus, computer equipment and storage medium
CN109034224B (en) * 2018-07-16 2022-03-11 西安电子科技大学 Hyperspectral classification method based on double branch network
CN109034224A (en) * 2018-07-16 2018-12-18 西安电子科技大学 Hyperspectral classification method based on double branching networks
CN109146847B (en) * 2018-07-18 2022-04-05 浙江大学 Wafer map batch analysis method based on semi-supervised learning
CN109146847A (en) * 2018-07-18 2019-01-04 浙江大学 A kind of wafer figure batch quantity analysis method based on semi-supervised learning
CN109447149A (en) * 2018-10-25 2019-03-08 腾讯科技(深圳)有限公司 A kind of training method of detection model, device and terminal device
CN109583492A (en) * 2018-11-26 2019-04-05 平安科技(深圳)有限公司 A kind of method and terminal identifying antagonism image
CN109784392A (en) * 2019-01-07 2019-05-21 华南理工大学 A kind of high spectrum image semisupervised classification method based on comprehensive confidence
CN109784392B (en) * 2019-01-07 2020-12-22 华南理工大学 Hyperspectral image semi-supervised classification method based on comprehensive confidence
CN113316790A (en) * 2019-01-30 2021-08-27 赫尔实验室有限公司 System and method for unsupervised domain adaptation via SLICED-WASSERSTEIN distance
CN109919073B (en) * 2019-03-01 2021-04-06 中山大学 Pedestrian re-identification method with illumination robustness
CN109919073A (en) * 2019-03-01 2019-06-21 中山大学 A kind of recognition methods again of the pedestrian with illumination robustness
CN110379464A (en) * 2019-07-29 2019-10-25 桂林电子科技大学 The prediction technique of DNA transcription terminator in a kind of bacterium
CN111144565A (en) * 2019-12-27 2020-05-12 中国人民解放军军事科学院国防科技创新研究院 Self-supervision field self-adaptive deep learning method based on consistency training
CN111144565B (en) * 2019-12-27 2020-10-27 中国人民解放军军事科学院国防科技创新研究院 Self-supervision field self-adaptive deep learning method based on consistency training
CN111222576B (en) * 2020-01-08 2023-03-24 西安理工大学 High-resolution remote sensing image classification method
CN111222576A (en) * 2020-01-08 2020-06-02 西安理工大学 High-resolution remote sensing image classification method
CN111414936A (en) * 2020-02-24 2020-07-14 北京迈格威科技有限公司 Determination method of classification network, image detection method, device, equipment and medium
CN111414936B (en) * 2020-02-24 2023-08-18 北京迈格威科技有限公司 Determination method, image detection method, device, equipment and medium of classification network
CN111461006A (en) * 2020-03-31 2020-07-28 哈尔滨航耀光韬科技有限公司 Optical remote sensing image tower position detection method based on deep migration learning
CN111523521B (en) * 2020-06-18 2023-04-07 西安电子科技大学 Remote sensing image classification method for double-branch fusion multi-scale attention neural network
CN111523521A (en) * 2020-06-18 2020-08-11 西安电子科技大学 Remote sensing image classification method for double-branch fusion multi-scale attention neural network
CN112084843B (en) * 2020-07-28 2024-03-12 北京工业大学 Multispectral river channel remote sensing monitoring method based on semi-supervised learning
CN112084843A (en) * 2020-07-28 2020-12-15 北京工业大学 Multispectral river channel remote sensing monitoring method based on semi-supervised learning
CN112560960A (en) * 2020-12-16 2021-03-26 北京影谱科技股份有限公司 Hyperspectral image classification method and device and computing equipment
CN112580670A (en) * 2020-12-31 2021-03-30 中国人民解放军国防科技大学 Hyperspectral-spatial-spectral combined feature extraction method based on transfer learning
CN112580670B (en) * 2020-12-31 2022-04-19 中国人民解放军国防科技大学 Hyperspectral-spatial-spectral combined feature extraction method based on transfer learning
CN112733859B (en) * 2021-01-25 2023-12-19 重庆大学 Depth migration semi-supervised domain self-adaptive classification method for histopathological image
CN112733859A (en) * 2021-01-25 2021-04-30 重庆大学 Depth migration semi-supervised domain self-adaptive classification method for histopathology image
CN112967296A (en) * 2021-03-10 2021-06-15 重庆理工大学 Point cloud dynamic region graph convolution method, classification method and segmentation method
CN113011513B (en) * 2021-03-29 2023-03-24 华南理工大学 Image big data classification method based on general domain self-adaption
CN113011513A (en) * 2021-03-29 2021-06-22 华南理工大学 Image big data classification method based on general domain self-adaption
CN113298774B (en) * 2021-05-20 2022-10-18 复旦大学 Image segmentation method and device based on dual condition compatible neural network
CN113298774A (en) * 2021-05-20 2021-08-24 复旦大学 Image segmentation method and device based on dual condition compatible neural network
CN113554627B (en) * 2021-07-27 2022-04-29 广西师范大学 Wheat head detection method based on computer vision semi-supervised pseudo label learning
CN113554627A (en) * 2021-07-27 2021-10-26 广西师范大学 Wheat head detection method based on computer vision semi-supervised pseudo label learning
CN113743474A (en) * 2021-08-10 2021-12-03 扬州大学 Digital picture classification method and system based on cooperative semi-supervised convolutional neural network
CN113743474B (en) * 2021-08-10 2023-09-26 扬州大学 Digital picture classification method and system based on collaborative semi-supervised convolutional neural network
CN115588140A (en) * 2022-10-24 2023-01-10 北京市遥感信息研究所 Multi-spectral remote sensing image multi-directional target detection method
CN115588140B (en) * 2022-10-24 2023-04-18 北京市遥感信息研究所 Multi-spectral remote sensing image multi-directional target detection method

Similar Documents

Publication Publication Date Title
CN107451616A (en) Multi-spectral remote sensing image terrain classification method based on the semi-supervised transfer learning of depth
CN109614985B (en) Target detection method based on densely connected feature pyramid network
CN111414942B (en) Remote sensing image classification method based on active learning and convolutional neural network
CN108830188A (en) Vehicle checking method based on deep learning
CN111914907A (en) Hyperspectral image classification method based on deep learning space-spectrum combined network
CN108171103A (en) Object detection method and device
CN108537742A (en) A kind of panchromatic sharpening method of remote sensing images based on generation confrontation network
CN112446388A (en) Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model
CN107229904A (en) A kind of object detection and recognition method based on deep learning
CN110084159A (en) Hyperspectral image classification method based on the multistage empty spectrum information CNN of joint
CN106845418A (en) A kind of hyperspectral image classification method based on deep learning
CN106991382A (en) A kind of remote sensing scene classification method
CN104484681B (en) Hyperspectral Remote Sensing Imagery Classification method based on spatial information and integrated study
CN107016405A (en) A kind of insect image classification method based on classification prediction convolutional neural networks
CN112070078B (en) Deep learning-based land utilization classification method and system
CN107832797B (en) Multispectral image classification method based on depth fusion residual error network
CN106611423B (en) SAR image segmentation method based on ridge ripple filter and deconvolution structural model
CN107368852A (en) A kind of Classification of Polarimetric SAR Image method based on non-down sampling contourlet DCGAN
CN107194937A (en) Tongue image partition method under a kind of open environment
CN107145830A (en) Hyperspectral image classification method with depth belief network is strengthened based on spatial information
CN110852369B (en) Hyperspectral image classification method combining 3D/2D convolutional network and adaptive spectrum unmixing
CN109190491A (en) Residual error convolutional neural networks SAR image sea ice classification method
CN106683102A (en) SAR image segmentation method based on ridgelet filters and convolution structure model
CN108280396A (en) Hyperspectral image classification method based on depth multiple features active migration network
CN111639587B (en) Hyperspectral image classification method based on multi-scale spectrum space convolution neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20171208)