CN111461067B - Zero sample remote sensing image scene identification method based on priori knowledge mapping and correction - Google Patents
- Publication number
- CN111461067B CN202010338879.6A CN202010338879A
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- sensing image
- mapping
- visible
- invisible
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a zero-sample remote sensing image scene recognition method based on prior knowledge mapping and correction. Based on visible-class remote sensing image scene samples with class labels and the set of visible-class prior knowledge representation vectors, a deep feature extractor and a mapping model from robust visual features to prior knowledge representation features are obtained through remote sensing scene class learning and cross-modal learning between the visual feature vectors and the prior knowledge representation vectors. Based on the class prior knowledge representation vectors of all classes and the invisible-class remote sensing image scene samples, the prior knowledge representation vectors of the invisible classes are progressively corrected through unsupervised collaborative representation learning and an unsupervised k-nearest-neighbor algorithm respectively, effectively improving the classification accuracy of zero-sample remote sensing image scenes.
Description
Technical Field
The invention belongs to the technical field of remote sensing and photogrammetry, relates to a zero-sample remote sensing image scene classification method, and particularly relates to a zero-sample remote sensing image scene identification method based on prior knowledge mapping and correction.
Background
Since the beginning of the 21st century, remote sensing technology has developed rapidly and plays an important role in land resource investigation, ecological environment monitoring, disaster analysis and prediction, and other applications. As the resolution of remote sensing images improves, pixel-based and object-based classification methods are strongly affected by the phenomena of "same object, different spectra" and "same spectrum, different objects" in high-resolution imagery, and cannot meet the demand for efficient and stable remote sensing image interpretation. For this reason, remote sensing image scene classification has attracted wide attention from researchers at home and abroad. Remote sensing image scene classification aims to predict the semantic category of an image block by mining the visual primitives in the remote sensing image scene (image block) and the spatial relationships among them. It can greatly reduce the confusion of pixel-level or object-level ground-object interpretation, thereby improving the stability and accuracy of high-resolution remote sensing image interpretation, and has important applications in content-based remote sensing image retrieval, remote sensing image target detection, and related tasks.
With the continued release of public remote sensing image scene data sets, researchers from many fields have proposed a large number of scene classification methods based on hand-crafted features or deep learning. However, most existing methods rely on remote sensing image samples of all classes to learn the classification model. With the arrival of the remote sensing big-data era, the number of ground-object categories is growing explosively, making it unrealistic to collect sufficient remote sensing image samples for every class. Introducing domain prior knowledge into the scene understanding process, so that remote sensing image scenes of classes never seen during training can be recognized after learning from only a subset of classes, is therefore of great practical significance in the remote sensing big-data era. The recent development of zero-sample learning (zero-shot learning) provides a new idea for remote sensing image scene classification. Zero-shot learning aims to imitate the human learning process: by learning from visible (seen) class samples with the aid of class prior knowledge (e.g., class attribute vectors or natural-language semantic vectors), samples of invisible (unseen) classes can be recognized by inference. At present, zero-shot learning research is concentrated in the computer vision field; studies on remote sensing image scene classification remain scarce, and much work is still needed to advance zero-sample remote sensing image scene classification technology.
Disclosure of Invention
The invention provides a zero-sample remote sensing image scene recognition method based on prior knowledge mapping and correction, addressing three problems: the large modal gap between low-level remote sensing image scene samples and high-level prior knowledge representations; the drift between the visible-class and invisible-class prior knowledge spaces; and the offset between the invisible-class prior knowledge representation space generated by mapping remote sensing image scenes and the invisible-class semantic space corrected from the visible-class prior knowledge space. Based on visible-class remote sensing image scene samples with class labels and the set of visible-class prior knowledge representation vectors, a deep feature extractor and a mapping model from robust visual features to prior knowledge representation features are obtained through remote sensing scene class learning and cross-modal learning between the visual feature vectors and the prior knowledge representation vectors. Based on the class prior knowledge representation vectors of all classes and the invisible-class remote sensing image scene samples, the prior knowledge representation vectors of the invisible classes are progressively corrected through unsupervised collaborative representation learning and an unsupervised k-nearest-neighbor algorithm respectively, effectively improving the classification accuracy of zero-sample remote sensing image scenes.
The technical scheme adopted by the invention is as follows: a zero sample remote sensing image scene recognition method based on priori knowledge mapping and correction comprises the following steps:
Training stage:
Step 1: based on an open natural-language corpus or domain expert knowledge, create a prior knowledge representation vector for each visible class and a prior knowledge representation vector for each invisible class, where p and q denote the numbers of visible and invisible classes respectively, and d_s denotes the dimensionality of the prior knowledge representation vectors;
Step 2: input the original remote sensing image scene data set D = {(x_i, y_i): i = 1,...,M}, where D is the visible-class data set, x_i denotes the i-th remote sensing image scene in the visible classes, y_i denotes the class label of the i-th image, and M is the total number of visible-class samples; D^U is the invisible-class data set, x_k^U denotes the k-th remote sensing image scene in the invisible classes, y_k^U denotes the class label of the k-th image, and N is the total number of invisible-class samples;
Extract the image features F of the visible-class data set and the image features F^U of the invisible-class data set with a deep convolutional network;
Step 3: solve the mapping matrix W from F to S using a robust cross-modal mapping objective function with a visual-feature self-encoding constraint, thereby completing the learning of the deep cross-modal mapping;
Step 6: using the k-nearest-neighbor algorithm, find for each invisible-class prior knowledge vector its nearest neighbors among the semantic vectors obtained by mapping, and average them to obtain the corrected invisible-class vectors;
Testing stage:
Step 7: given an invisible-class test remote sensing image scene, extract its visual features and map them to a semantic vector according to steps 2-5;
Step 8: compute the cosine similarity between the mapped semantic vector and the corrected invisible-class prior knowledge vectors to obtain the label of the test remote sensing image scene.
Furthermore, in step 2, T denotes the convolutional-layer hyper-parameters of the deep convolutional network, and V denotes the mapping parameters between the last fully-connected-layer features and the classification layer. T and V are learned by fine-tuning the deep convolutional network, and the image features of the visible-class data set are extracted using T; only visible-class data are used during fine-tuning. Here f_i = Q(x_i; T), where Q(·; T) denotes the nonlinear mapping of the deep convolutional network. The deep convolutional network is optimized on the remote sensing image scene data set with the loss function of equation (1), where c_i = σ(f_i V) and σ(·) denotes the softmax mapping,
where M is the total number of visible-class samples and p is the number of visible classes.
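The loss function of equation (1) is not reproduced in this record, but from the description — softmax outputs c_i = σ(f_i V), averaged over the M visible-class samples and p classes — it is presumably the standard softmax cross-entropy. A minimal numpy sketch under that assumption (the function name is hypothetical):

```python
import numpy as np

def scene_loss(F, V, labels):
    """Assumed form of equation (1): mean cross-entropy over the softmax
    outputs c_i = softmax(f_i V).
    F: (M, d_f) visual features; V: (d_f, p) classification-layer weights;
    labels: (M,) integer class labels in [0, p)."""
    logits = F @ V                                # (M, p) class scores
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)     # softmax mapping sigma(.)
    # negative log-likelihood of the correct class, averaged over M samples
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
```

With all-zero features the softmax is uniform, so the loss equals log p — a quick sanity check on the assumed form.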
Further, the mapping matrix W in step 3 is obtained through a self-encoder with the following objective function:
where α is the self-encoding regularization coefficient, ||·||_F denotes the Frobenius norm, and s_i denotes the prior knowledge semantic vector corresponding to f_i; the objective can be simplified into a Sylvester equation, and W is solved with the Bartels-Stewart algorithm.
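The objective function image is not reproduced in this record. Assuming it takes the standard semantic-autoencoder form min_W ||S − WF||_F² + α||F − WᵀS||_F², setting the gradient to zero gives the Sylvester equation α S Sᵀ W + W F Fᵀ = (1 + α) S Fᵀ, which `scipy.linalg.solve_sylvester` (a Bartels-Stewart implementation) solves directly. A sketch under that assumption, not the patent's exact formula:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def learn_mapping(F, S, alpha=0.001):
    """Solve min_W ||S - W F||_F^2 + alpha * ||F - W^T S||_F^2 (assumed form).
    F: (d_f, M) visual features; S: (d_s, M) prior knowledge semantic vectors.
    Zero gradient gives: alpha*S S^T W + W F F^T = (1+alpha)*S F^T,
    a Sylvester equation A W + W B = C."""
    A = alpha * (S @ S.T)          # (d_s, d_s)
    B = F @ F.T                    # (d_f, d_f)
    C = (1 + alpha) * (S @ F.T)    # (d_s, d_f)
    return solve_sylvester(A, B, C)
```

The default α = 0.001 follows the value reported later in the description.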
Further, the objective function of the collaborative representation coefficient ρ in unsupervised collaborative representation learning in step 4 is:
where β is a regularization constant; the closed-form solution of the above equation is:
where I is the identity matrix; the optimal collaborative representation coefficients obtained from the formula are multiplied with S to obtain the reconstructed invisible-class semantic vectors,
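As an illustration, the closed-form ridge solution ρ* = (SᵀS + βI)⁻¹Sᵀŝ and the reconstruction Sρ* can be sketched in a few lines of numpy. The function name and the column layout are assumptions:

```python
import numpy as np

def collaborative_correct(S, S_u_mapped, beta=1.0):
    """Reconstruct each mapped invisible-class vector as a collaborative
    representation over the visible-class prior knowledge vectors.
    S: (d_s, p) visible-class vectors as columns;
    S_u_mapped: (d_s, q) mapped invisible-class vectors as columns.
    Closed form: rho* = (S^T S + beta*I)^{-1} S^T s_u; reconstruction S @ rho*."""
    p = S.shape[1]
    # solve the regularized normal equations for all q vectors at once
    P = np.linalg.solve(S.T @ S + beta * np.eye(p), S.T @ S_u_mapped)  # (p, q)
    return S @ P                                                        # (d_s, q)
```

With a small β, a vector lying exactly in the span of the visible-class vectors is reconstructed almost perfectly, which is the intended correction behavior.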
where the vectors averaged are the m nearest-neighbor prior knowledge representation vectors found for the k-th invisible-class prior knowledge representation vector, k = 1,...,q, o = 1,...,m.
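A sketch of the step-6 correction, assuming each invisible-class prior knowledge vector is replaced by the mean of its m nearest neighbors among the mapped sample vectors (function and variable names are hypothetical):

```python
import numpy as np

def knn_correct(S_u_prior, mapped, m=5):
    """For each invisible-class prior knowledge vector (rows of S_u_prior,
    shape (q, d_s)), find its m nearest neighbors among the mapped semantic
    vectors (rows of `mapped`, shape (n, d_s)) and average them."""
    corrected = np.empty_like(S_u_prior)
    for k in range(S_u_prior.shape[0]):
        # Euclidean distance from the k-th prior vector to every mapped vector
        d = np.linalg.norm(mapped - S_u_prior[k], axis=1)
        idx = np.argsort(d)[:m]                 # m nearest neighbors
        corrected[k] = mapped[idx].mean(axis=0) # average to get corrected vector
    return corrected
```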
Further, the label of the invisible-class test remote sensing image scene in step 8 is calculated according to the following formula:
Specifically, given a set of test remote sensing image scenes, the visual features of each scene image are extracted and further mapped into a semantic vector with the matrix W; the cosine similarity between the mapped semantic vector and the corrected invisible-class prior knowledge vectors is then computed, where d(·,·) is the cosine distance function.
The invention has the following advantages. It addresses the problems of prior knowledge mapping learning and correction in the zero-sample remote sensing scene classification task. Based on the class prior knowledge representation vectors of the visible classes and the remote sensing image scene samples, a deep cross-modal mapping from the visual space of remote sensing image scenes to the class prior knowledge representation space is learned through multi-task learning that combines scene classification with self-encoding cross-modal mapping. To address the drift between the visible-class and invisible-class prior knowledge representation spaces, as well as the offset of the invisible-class representation space after mapping by the self-encoding cross-modal model and after collaborative representation, the invention corrects the invisible-class prior knowledge representation vectors through unsupervised collaborative representation learning and an unsupervised k-nearest-neighbor algorithm respectively, based on the class prior knowledge representation vectors of all classes and the invisible-class remote sensing image samples, achieving stable invisible-class remote sensing image scene recognition.
Drawings
FIG. 1: is a general flow diagram of an embodiment of the invention;
FIG. 2: is a sample diagram of a data set according to an embodiment of the present invention.
Detailed Description
In order to facilitate the understanding and implementation of the present invention for those of ordinary skill in the art, the present invention is further described in detail with reference to the accompanying drawings and examples, it is to be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive thereof.
Referring to fig. 1, the zero-sample remote sensing image scene identification method based on prior knowledge mapping and correction provided by the invention comprises the following steps:
Step 1: based on an open natural-language corpus or domain expert knowledge, create a prior knowledge representation vector for each visible class and a prior knowledge representation vector for each invisible class, where p and q denote the numbers of visible and invisible classes respectively, and d_s is the semantic vector dimensionality.
Step 2: input the original remote sensing image scene data set D = {(x_i, y_i): i = 1,...,M},
and extract the image features F of the visible-class data set and the image features F^U of the invisible-class data set with a deep convolutional network;
Let T denote the convolutional-layer hyper-parameters of the deep convolutional network ResNet-50, and V the mapping parameters between the last fully-connected-layer feature f and the classification layer y. T and V are learned by fine-tuning the deep convolutional network. The network optimization loss function based on the remote sensing image scene data set is given by equation (1), where c_i = σ(f_i V), σ(·) denotes the softmax mapping, f_i = Q(x_i; T), and Q(·; T) denotes the nonlinear mapping of the deep convolutional network.
The image features of the visible-class data set are extracted using the parameter T, where D is the visible-class data set, x_i denotes the i-th remote sensing image scene in the visible classes, y_i the class label of the i-th image, and M the total number of visible-class samples; D^U is the invisible-class data set, x_i^U denotes the i-th remote sensing image scene in the invisible classes, y_i^U its class label, and N the total number of invisible-class samples;
Step 3: solve the mapping matrix W from F to S. The mapping matrix W is obtained through a self-encoder with the following objective function:
where α is the self-encoding regularization coefficient; experimental analysis shows an optimal value of 0.001.
||·||_F denotes the Frobenius norm and s_i denotes the prior knowledge semantic vector corresponding to f_i. The objective can be simplified into a Sylvester equation, and W is solved using the Bartels-Stewart algorithm.
Step 4: correct S^U with collaborative representation to obtain the reconstructed vectors. The objective function of the collaborative representation coefficient ρ is:
where β is the regularization constant. The closed-form solution of the above formula is:
where I is the identity matrix. The optimal collaborative representation coefficients obtained from equation (3) are multiplied with S to obtain the reconstructed invisible-class semantic vectors.
Step 6: using the k-nearest-neighbor algorithm, find for each reconstructed invisible-class prior knowledge representation vector its nearest neighbors among the mapped vectors and average them; the corrected vector is calculated as follows:
where the vectors averaged are the m nearest-neighbor prior knowledge representation vectors found for the j-th invisible-class prior knowledge representation vector.
Step 7: given an invisible-class image, extract its visual features and map them to obtain its prior knowledge representation vector.
Step 8: compute the cosine similarity between the mapped vector and the corrected invisible-class prior knowledge vectors, and predict the label of the test image. The label of the invisible-class test image is calculated as follows:
Specifically, given a set of test remote sensing scene images, the visual features of each image are extracted and further mapped to a prior knowledge representation vector by the matrix W; the cosine similarity between the mapped vector and the corrected invisible-class prior knowledge vectors is then computed, where d(·,·) is the cosine distance function.
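Putting the test stage together, a hedged end-to-end sketch (feature extraction omitted; the learned matrix W and the corrected invisible-class vectors are assumed given, and all names are hypothetical):

```python
import numpy as np

def predict_unseen(F_test, W, S_u_corrected):
    """Test stage: map visual features into the semantic space with W, then
    assign each sample the invisible class with highest cosine similarity.
    F_test: (d_f, n) test visual features as columns;
    W: (d_s, d_f) mapping matrix;
    S_u_corrected: (q, d_s) corrected invisible-class vectors, one per row."""
    S_hat = W @ F_test  # (d_s, n) mapped semantic vectors
    # normalize so that dot products equal cosine similarities
    S_hat_n = S_hat / (np.linalg.norm(S_hat, axis=0, keepdims=True) + 1e-12)
    S_u_n = S_u_corrected / (
        np.linalg.norm(S_u_corrected, axis=1, keepdims=True) + 1e-12)
    return np.argmax(S_u_n @ S_hat_n, axis=0)  # (n,) predicted class indices
```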
To verify the effectiveness of the disclosed technology, several existing public remote sensing image scene data sets were integrated to establish a remote sensing image scene data set with more scene categories. Based on the natural-language models Word2vec and Bert, two types of class prior knowledge representation vectors were created for each class of the newly constructed data set. Experimental results under multiple different visible/invisible class splits show that, for both prior knowledge representation methods, the disclosed algorithm achieves satisfactory classification accuracy.
The described method was evaluated on a new data set obtained by integrating public data sets, which reflects its effectiveness. Specifically, the public evaluation data set, shown in fig. 2, comprises 70 scene categories, each containing 800 images. Table 1 shows the test results using the Word2vec and Bert prior knowledge vectors under different visible/invisible class splits.
Table 1. Overall accuracy of the method on the test data set under different visible/invisible class split ratios, using the Word2vec and Bert prior knowledge representation vectors
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (7)
1. A zero sample remote sensing image scene recognition method based on prior knowledge mapping and correction is characterized by comprising the following steps:
Training stage:
Step 1: based on an open natural-language corpus or domain expert knowledge, create a prior knowledge representation vector for each visible class and a prior knowledge representation vector for each invisible class, where p and q denote the numbers of visible and invisible classes respectively, and d_s denotes the dimensionality of the prior knowledge representation vectors;
Step 2: input the original remote sensing image scene data set D = {(x_i, y_i): i = 1,...,M}, where D is the visible-class data set, x_i denotes the i-th remote sensing image scene in the visible classes, y_i denotes the class label of the i-th image, and M is the total number of visible-class samples; D^U is the invisible-class data set, x_k^U denotes the k-th remote sensing image scene in the invisible classes, y_k^U denotes the class label of the k-th image, and N is the total number of invisible-class samples;
Extract the image features F of the visible-class data set and the image features F^U of the invisible-class data set with a deep convolutional network;
Step 3: solve the mapping matrix W from F to S using a robust cross-modal mapping objective function with a visual-feature self-encoding constraint, thereby completing the learning of the deep cross-modal mapping;
Step 6: using the k-nearest-neighbor algorithm, find for each invisible-class prior knowledge vector its nearest neighbors among the semantic vectors obtained by mapping, and average them to obtain the corrected invisible-class vectors;
Testing stage:
Step 7: given an invisible-class test remote sensing image scene, extract its visual features and map them to a semantic vector according to steps 2-5.
2. The zero-sample remote sensing image scene recognition method based on prior knowledge mapping and correction according to claim 1, wherein: in step 2, T denotes the convolutional-layer hyper-parameters of the deep convolutional network and V the mapping parameters between the last fully-connected-layer features and the classification layer; T and V are learned by fine-tuning the deep convolutional network, and the image features of the visible-class data set are extracted using T; only visible-class data are used during fine-tuning; here f_i = Q(x_i; T), Q(·; T) denotes the nonlinear mapping of the deep convolutional network, and the network is optimized on the remote sensing image scene data set with the loss function of equation (1), where c_i = σ(f_i V) and σ(·) denotes the softmax mapping,
where x_i denotes the i-th remote sensing image scene in the visible classes, y_i the class label of the i-th image, d_f the feature dimensionality, M the total number of visible-class samples, and p the number of visible classes.
3. The zero-sample remote sensing image scene recognition method based on priori knowledge mapping and correction according to claim 2, wherein: the mapping matrix W in the step 3 is obtained through a self-encoder, and the objective function is as follows:
4. The zero-sample remote sensing image scene recognition method based on prior knowledge mapping and correction according to claim 1, wherein: the objective function of the collaborative representation coefficient ρ in the unsupervised collaborative representation learning of step 4 is:
where β is a regularization constant; the closed-form solution of the above equation is:
where I is the identity matrix; the optimal collaborative representation coefficients obtained from the formula are multiplied with S to obtain the reconstructed invisible-class semantic vectors
6. The zero-sample remote sensing image scene recognition method based on prior knowledge mapping and correction according to claim 3, wherein the corrected vector in step 6 is calculated as follows:
7. The zero-sample remote sensing image scene recognition method based on prior knowledge mapping and correction according to claim 6, wherein: the label of the invisible-class test remote sensing image scene in step 8 is calculated according to the following formula:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010338879.6A CN111461067B (en) | 2020-04-26 | 2020-04-26 | Zero sample remote sensing image scene identification method based on priori knowledge mapping and correction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010338879.6A CN111461067B (en) | 2020-04-26 | 2020-04-26 | Zero sample remote sensing image scene identification method based on priori knowledge mapping and correction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111461067A CN111461067A (en) | 2020-07-28 |
CN111461067B true CN111461067B (en) | 2022-06-14 |
Family
ID=71686040
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010338879.6A Active CN111461067B (en) | 2020-04-26 | 2020-04-26 | Zero sample remote sensing image scene identification method based on priori knowledge mapping and correction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111461067B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112070023B (en) * | 2020-09-09 | 2022-08-16 | 郑州轻工业大学 | Neighborhood prior embedded type collaborative representation mode identification method |
CN115100532B (en) * | 2022-08-02 | 2023-04-07 | 北京卫星信息工程研究所 | Small sample remote sensing image target detection method and system |
CN115018472B (en) * | 2022-08-03 | 2022-11-11 | 中国电子科技集团公司第五十四研究所 | Interactive incremental information analysis system based on interpretable mechanism |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102054274A (en) * | 2010-12-01 | 2011-05-11 | 南京大学 | Method for full automatic extraction of water remote sensing information in coastal zone |
CN109558890A (en) * | 2018-09-30 | 2019-04-02 | 天津大学 | Zero sample image classification method of confrontation network is recycled based on adaptive weighting Hash |
CN110334781A (en) * | 2019-06-10 | 2019-10-15 | 大连理工大学 | A kind of zero sample learning algorithm based on Res-Gan |
CN110728187A (en) * | 2019-09-09 | 2020-01-24 | 武汉大学 | Remote sensing image scene classification method based on fault tolerance deep learning |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150310862A1 (en) * | 2014-04-24 | 2015-10-29 | Microsoft Corporation | Deep learning for semantic parsing including semantic utterance classification |
- 2020-04-26: CN application CN202010338879.6A filed; granted as patent CN111461067B (status: active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102054274A (en) * | 2010-12-01 | 2011-05-11 | 南京大学 | Method for full automatic extraction of water remote sensing information in coastal zone |
CN109558890A (en) * | 2018-09-30 | 2019-04-02 | 天津大学 | Zero sample image classification method of confrontation network is recycled based on adaptive weighting Hash |
CN110334781A (en) * | 2019-06-10 | 2019-10-15 | 大连理工大学 | A kind of zero sample learning algorithm based on Res-Gan |
CN110728187A (en) * | 2019-09-09 | 2020-01-24 | 武汉大学 | Remote sensing image scene classification method based on fault tolerance deep learning |
Non-Patent Citations (2)
Title |
---|
Zero-Shot Scene Classification for High Spatial Resolution Remote Sensing Images; Aoxue Li et al.; IEEE Transactions on Geoscience and Remote Sensing; July 2017; full text *
Advances in remote sensing image classification combining deep learning and semi-supervised learning (结合深度学习和半监督学习的遥感影像分类进展); Tan Kun et al.; Journal of Image and Graphics (中国图象图形学报); Nov. 2019; Vol. 24, No. 11; full text *
Also Published As
Publication number | Publication date |
---|---|
CN111461067A (en) | 2020-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111461067B (en) | Zero sample remote sensing image scene identification method based on priori knowledge mapping and correction | |
Zhu et al. | Intelligent logging lithological interpretation with convolution neural networks | |
CN110516095B (en) | Semantic migration-based weak supervision deep hash social image retrieval method and system | |
CN109871875B (en) | Building change detection method based on deep learning | |
CN112232371B (en) | American license plate recognition method based on YOLOv3 and text recognition | |
CN113223042B (en) | Intelligent acquisition method and equipment for remote sensing image deep learning sample | |
CN115934990B (en) | Remote sensing image recommendation method based on content understanding | |
CN110929746A (en) | Electronic file title positioning, extracting and classifying method based on deep neural network | |
CN108805102A (en) | A kind of video caption detection and recognition methods and system based on deep learning | |
CN112149758A (en) | Hyperspectral open set classification method based on Euclidean distance and deep learning | |
CN113988147A (en) | Multi-label classification method and device for remote sensing image scene based on graph network, and multi-label retrieval method and device | |
CN115049841A (en) | Depth unsupervised multistep anti-domain self-adaptive high-resolution SAR image surface feature extraction method | |
CN115393666A (en) | Small sample expansion method and system based on prototype completion in image classification | |
CN115546553A (en) | Zero sample classification method based on dynamic feature extraction and attribute correction | |
CN117572457B (en) | Cross-scene multispectral point cloud classification method based on pseudo tag learning | |
CN109002771A (en) | A kind of Classifying Method in Remote Sensing Image based on recurrent neural network | |
CN114579794A (en) | Multi-scale fusion landmark image retrieval method and system based on feature consistency suggestion | |
CN113269274A (en) | Zero sample identification method and system based on cycle consistency | |
CN114511787A (en) | Neural network-based remote sensing image ground feature information generation method and system | |
CN117315556A (en) | Improved Vision Transformer insect fine grain identification method | |
CN115497006B (en) | Urban remote sensing image change depth monitoring method and system based on dynamic mixing strategy | |
CN111652265A (en) | Robust semi-supervised sparse feature selection method based on self-adjusting graph | |
CN115496950A (en) | Neighborhood information embedded semi-supervised discrimination dictionary pair learning image classification method | |
CN115511214A (en) | Multi-scale sample unevenness-based mineral product prediction method and system | |
CN114708501A (en) | Remote sensing image building change detection method based on condition countermeasure network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||