CN112347970A - Remote sensing image ground object identification method based on graph convolution neural network - Google Patents
Remote sensing image ground object identification method based on graph convolution neural network
Info
- Publication number
- CN112347970A CN202011294356.2A
- Authority
- CN
- China
- Prior art keywords
- neural network
- remote sensing
- sensing image
- feature
- graph convolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 38
- 238000013528 artificial neural network Methods 0.000 title claims abstract description 36
- 238000013527 convolutional neural network Methods 0.000 claims abstract description 27
- 230000007246 mechanism Effects 0.000 claims abstract description 15
- 238000012549 training Methods 0.000 claims abstract description 13
- 238000012545 processing Methods 0.000 claims abstract description 9
- 230000003044 adaptive effect Effects 0.000 claims abstract description 7
- 238000000605 extraction Methods 0.000 claims abstract description 5
- 238000012360 testing method Methods 0.000 claims abstract description 4
- 238000013145 classification model Methods 0.000 claims description 13
- 230000008569 process Effects 0.000 claims description 9
- 238000012937 correction Methods 0.000 claims description 8
- 230000006870 function Effects 0.000 claims description 8
- 238000005457 optimization Methods 0.000 claims description 7
- 239000011159 matrix material Substances 0.000 claims description 5
- 238000007781 pre-processing Methods 0.000 claims description 4
- 238000005259 measurement Methods 0.000 claims description 3
- 238000010606 normalization Methods 0.000 claims description 3
- 230000004913 activation Effects 0.000 claims description 2
- 230000015572 biosynthetic process Effects 0.000 claims description 2
- 238000002790 cross-validation Methods 0.000 claims description 2
- 230000000694 effects Effects 0.000 claims description 2
- 230000009286 beneficial effect Effects 0.000 abstract description 6
- 238000010586 diagram Methods 0.000 abstract description 4
- 230000011218 segmentation Effects 0.000 abstract description 4
- 230000000903 blocking effect Effects 0.000 abstract 1
- 238000009472 formulation Methods 0.000 abstract 1
- 238000004519 manufacturing process Methods 0.000 abstract 1
- 239000000203 mixture Substances 0.000 abstract 1
- 238000011160 research Methods 0.000 description 5
- 238000012544 monitoring process Methods 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 3
- 230000008859 change Effects 0.000 description 2
- 238000013135 deep learning Methods 0.000 description 2
- 238000003384 imaging method Methods 0.000 description 2
- 238000013473 artificial intelligence Methods 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000011176 pooling Methods 0.000 description 1
- 238000012827 research and development Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 238000013526 transfer learning Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/194—Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Biophysics (AREA)
- Software Systems (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Astronomy & Astrophysics (AREA)
- Mathematical Physics (AREA)
- Remote Sensing (AREA)
- Computational Linguistics (AREA)
- Multimedia (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a remote sensing image ground object recognition method based on a graph convolutional neural network, which comprises: partitioning a scene remote sensing image into blocks and constructing training and test sets; performing superpixel segmentation based on the simple linear iterative clustering algorithm; assigning superpixel labels with a combination strategy based on a majority voting mechanism; performing automatic feature extraction with a deep convolutional neural network and constructing the input parameter set of the graph convolutional neural network; and training a graph convolutional neural network model with an adaptive attention mechanism. The method constructs a feature description set of remote sensing image ground object classes using superpixel segmentation and majority voting, obtaining an approximate profile of each ground object class while greatly reducing the amount of data needed to construct the image graph, which is beneficial for handling remote sensing ground object classification with large data samples; finally, a graph convolutional neural network is trained on the fused feature map set so as to accurately distinguish ground object types.
Description
Technical Field
The invention relates to the field of ground object type identification based on a remote sensing satellite imaging technology, in particular to a remote sensing image ground object identification method based on a graph convolution neural network.
Background
With the rapid development of artificial intelligence, deep learning has achieved breakthroughs in many fields. Convolutional neural networks perform outstandingly in image recognition thanks to local feature extraction, parameter sharing, and global pooling. In remote sensing image processing, classification based on convolutional neural networks is one of the current research hotspots: classification accuracy is high and generalization ability is strong. At the same time, some problems remain. In terms of model construction, the network hierarchy of the convolutional neural network used for classification is rarely adjusted directly; a classical convolutional neural network is usually adopted to extract features from the classification samples, and classification is then performed with a traditional remote sensing image classification model, so research on and application of network structure adjustment and transfer learning in remote sensing image classification remain scarce. Because remote sensing images contain a large amount of information and data exploitation lags far behind data acquisition capability, rapidly classifying massive images is key to improving the utilization of remote sensing imagery. Recently, more and more research has begun to apply deep learning methods to graph data. Inspired by the great success of convolutional neural networks in computer vision, many methods that redefine the convolution concept for spectral graph data have emerged. A graph convolutional neural network aggregates information from neighboring nodes and performs convolution directly on the graph structure. With a sampling strategy, computation can be carried out on a batch of nodes rather than the whole graph, which effectively improves efficiency. However, how to apply graph-convolution-based neural networks to remote sensing images and build a high-precision classifier still requires further research and development. Therefore, the invention provides remote sensing image ground object classification and identification based on a graph convolutional neural network, which can accurately identify ground object types from partial ground-truth data.
Disclosure of Invention
The invention aims to provide a remote sensing image ground object recognition method based on a graph convolutional neural network, built on remote sensing satellite imaging technology. Starting from different remote sensing image training data sets, the ground object type feature maps in the training data are first extracted to form feature description sets for all ground object types; a deep convolutional neural network is then adopted to learn and fuse the feature maps of every type in the remote sensing image feature description set, obtaining feature descriptions of the nodes and edges of each ground object type; a graph convolutional neural network is then used to train a node classification model on the fused graph features and determine the parameters of the remote sensing image classification model; finally, accurate recognition of ground objects in unlabeled images is achieved.
In order to achieve the purpose, the invention provides the following technical scheme: a remote sensing image ground object identification method based on a graph convolution neural network comprises the following steps:
Step 1: take a scene remote sensing image and perform preprocessing operations including geometric correction, atmospheric correction and image enhancement; convert the color remote sensing image into a grayscale image or the CIELAB color space under XY coordinates; perform simple linear iterative clustering on it, initialize the seed points, and generate compact, approximately uniform superpixels;
Step 2: based on the generated superpixel remote sensing image graph, create the nodes and edges of the graph, and extract texture features and color features from the image as the graph input feature matrix;
Step 3: assign labels with the majority voting mechanism of ensemble learning: if a ground object class receives more than half of the votes within a superpixel, predict the superpixel as that class, otherwise reject the label, thus forming the label data set of the graph;
Step 4: automatically extract feature maps from the high-resolution remote sensing image with a deep convolutional neural network to obtain node and edge input data for the image graph and form a feature description set of the remote sensing image;
Step 5: train a graph convolutional neural network model on the remote sensing image feature description set to obtain an image ground object classification model and classify the test samples.
As a preferred technical solution of the present invention, in steps 1 to 4 a feature description set of the remote sensing image ground object types is extracted, and in step 5 a ground object classification model based on a graph convolutional neural network is proposed.
The invention has the following beneficial effects: the method constructs a feature description set of remote sensing image ground object classes using superpixel segmentation and a majority voting mechanism, obtaining an approximate profile of each ground object class while greatly reducing the amount of data needed to construct the image graph, which is beneficial for handling remote sensing ground object classification with large data samples; finally, a graph convolutional neural network is trained on the fused feature map set so as to accurately distinguish ground object types. The invention is highly beneficial to research on remote sensing natural disaster monitoring, land cover type discrimination, urban planning, and ecological environment change monitoring.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a detailed flow chart of the present invention.
Detailed Description
The following detailed description of the preferred embodiments of the present invention, taken in conjunction with the accompanying drawings, will make the advantages and features of the invention more readily understood by those skilled in the art, and thus will more clearly and distinctly define the scope of the invention.
Embodiment: referring to FIGS. 1-2, the present invention provides a technical solution: a remote sensing image ground object identification method based on a graph convolutional neural network, comprising the following steps:
Step 1: take a scene remote sensing image and perform preprocessing operations including geometric correction, atmospheric correction and image enhancement; convert the color remote sensing image into a grayscale image or the CIELAB color space under XY coordinates; perform simple linear iterative clustering on it, initialize the seed points, and generate compact, approximately uniform superpixels;
Step 2: based on the generated superpixel remote sensing image graph, create the nodes and edges of the graph, and extract texture features and color features from the image as the graph input feature matrix;
Step 3: assign labels with the majority voting mechanism of ensemble learning: if a ground object class receives more than half of the votes within a superpixel, predict the superpixel as that class, otherwise reject the label, thus forming the label data set of the graph;
Step 4: automatically extract feature maps from the high-resolution remote sensing image with a deep convolutional neural network to obtain node and edge input data for the image graph and form a feature description set of the remote sensing image;
Step 5: train a graph convolutional neural network model on the remote sensing image feature description set to obtain an image ground object classification model and classify the test samples.
In steps 1-4, a feature description set of the remote sensing image ground object types is extracted; in step 5, a ground object classification model based on a graph convolutional neural network is provided.
For convenience of description, terms specific to the present invention are first defined as follows:
Remote sensing image ground object category feature description set:
The feature description set of remote sensing image ground object categories refers to the graph feature maps of different ground object types, extracted by the deep convolutional network for the graph nodes and edges of each superpixel-partitioned image block; together these form a graph-based feature description set that approximately describes the category information of the image.
The specific method comprises the following steps:
Step 1: form the remote sensing image feature description set.
A scene remote sensing image is taken and preprocessed with operations including geometric correction, atmospheric correction and image enhancement. Suppose the remote sensing image is divided into M blocks; the image is processed block by block, and the corresponding ground-truth ground object labels are imported together with the image. The color image is converted into a grayscale image under XY coordinates, simple linear iterative clustering is performed on it, and the seed points (cluster centers) are initialized and distributed evenly over the image according to the set number of superpixels. A distance metric is then constructed for the feature vector formed from the grayscale image, and a local clustering process is carried out on the pixels of each block to generate N compact, approximately uniform superpixels. Nodes and edges of the graph are then created for the generated superpixels, and a local binary pattern is computed by comparing each superpixel with its adjacent superpixels and storing the result as a binary number, forming texture features that serve as the graph input features. Labels are then assigned with a majority voting mechanism: if a ground object class receives more than half of the votes, the superpixel is predicted as that class, otherwise the label is rejected. In this way, the remote sensing image ground object identification problem is modeled as a classification problem.
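As a minimal sketch of Step 1 (assuming scikit-image and NumPy; the function name, superpixel count, LBP parameters and the -1 marker for rejected labels are illustrative choices, not prescribed by the patent), the superpixel segmentation, LBP texture features and majority-vote labelling could look like this:

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.segmentation import slic
from skimage.feature import local_binary_pattern

def superpixels_and_labels(image_block, pixel_labels, n_segments=500):
    """Segment one preprocessed block into superpixels, build an LBP texture
    histogram per superpixel, and assign each superpixel a label by majority vote."""
    gray = rgb2gray(image_block)
    # Simple linear iterative clustering: seed points are spread evenly
    # according to the requested number of superpixels.
    segments = slic(image_block, n_segments=n_segments, compactness=10, start_label=0)
    lbp = local_binary_pattern(gray, P=8, R=1.0, method="uniform")

    features, labels = [], []
    for sp in np.unique(segments):
        mask = segments == sp
        # Texture feature: histogram of uniform LBP codes inside the superpixel.
        hist, _ = np.histogram(lbp[mask], bins=10, range=(0, 10), density=True)
        features.append(hist)
        # Majority voting over the pixel-level ground-truth labels: keep the
        # class only if it receives more than half of the votes, else reject (-1).
        votes = np.bincount(pixel_labels[mask].ravel())
        top = int(votes.argmax())
        labels.append(top if votes[top] > mask.sum() / 2 else -1)
    return segments, np.asarray(features), np.asarray(labels)
```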
Step 2: the deep convolutional neural network automatically extracts the remote sensing image feature map set. Assuming K ground object categories in total, the feature maps automatically extracted by the convolutional neural network can be expressed as X_k^n = CNN(·), where 1 ≤ k ≤ M indexes the k-th remote sensing image block, 1 ≤ n ≤ N indexes the n-th superpixel, and CNN(·) denotes executing the deep convolutional neural network.
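A non-authoritative sketch of Step 2: each superpixel's bounding-box patch is pushed through a pretrained CNN to obtain one deep feature vector per graph node, i.e. X_k^n = CNN(·). The ResNet-18 backbone, the 64×64 patch size and the three-band input are assumptions made here for illustration; the patent does not fix a particular network.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

# Pretrained backbone with the classification head removed (512-d features).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.ToTensor(),
    T.Resize((64, 64)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def cnn_node_features(image_block, segments):
    """One CNN feature vector per superpixel: X_k^n = CNN(patch of superpixel n)."""
    feats = []
    for sp in np.unique(segments):
        ys, xs = np.where(segments == sp)
        patch = image_block[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        feats.append(backbone(preprocess(patch).unsqueeze(0)).squeeze(0))
    return torch.stack(feats)        # shape: (N superpixels, 512)
```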
Step 3: the remote sensing image ground object identification and classification process based on the graph convolutional neural network. Given the feature maps automatically extracted by the convolutional neural network and assuming K ground object categories in total, the ground object identification process consists of the following steps:
Step 3.1: the remote sensing image ground object class feature atlas {X_k} (1 ≤ k ≤ M) generated in Step 2 is connected to generate a fused feature graph, and the graph convolutional neural network is trained. A leave-one-out cross-validation process is performed to obtain the training parameters θ of the graph convolutional neural network ground object classification model. The graph convolutional neural network has 2 layers, followed by an adaptive attention mechanism layer, so that each node in the graph can be assigned a different weight according to the features of its adjacent nodes. The loss function is NLL_LOSS, with log_softmax activation applied to the input parameters; the Adam optimizer with adaptive learning rate is selected for optimization, dynamically adjusting the learning rate using first-moment and second-moment estimates of the gradient;
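A minimal training sketch for Step 3.1, assuming PyTorch Geometric as the graph library. The two GCNConv layers followed by a GAT-style attention layer, the hidden width, learning rate, epoch count and the random toy graph below are illustrative stand-ins for the patent's two-layer graph convolutional network with an adaptive attention layer, NLL loss, log_softmax activation and Adam optimizer.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, GATConv

class GCNWithAttention(torch.nn.Module):
    """Two graph-convolution layers followed by an attention layer (GAT-style),
    so each node weights its neighbours according to their features."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.gc1 = GCNConv(in_dim, hidden_dim)
        self.gc2 = GCNConv(hidden_dim, hidden_dim)
        self.att = GATConv(hidden_dim, num_classes, heads=1)

    def forward(self, x, edge_index):
        x = F.relu(self.gc1(x, edge_index))
        x = F.relu(self.gc2(x, edge_index))
        x = self.att(x, edge_index)
        return F.log_softmax(x, dim=1)      # log_softmax output, paired with NLL loss

# Toy graph standing in for the fused feature graph: N superpixel nodes,
# 512-d CNN features, random edges and labels (illustrative data only).
N, K = 200, 6
x = torch.randn(N, 512)
edge_index = torch.randint(0, N, (2, 4 * N))
y = torch.randint(0, K, (N,))
train_mask = torch.rand(N) < 0.6

model = GCNWithAttention(in_dim=512, hidden_dim=64, num_classes=K)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)   # adaptive-learning-rate optimizer

model.train()
for epoch in range(200):
    optimizer.zero_grad()
    out = model(x, edge_index)
    loss = F.nll_loss(out[train_mask], y[train_mask])        # NLL loss on labelled nodes
    loss.backward()
    optimizer.step()
```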
Step 3.2: a graph convolution operation without the attention mechanism layer is performed on the adaptive feature description set extracted by the attention-augmented graph convolutional neural network together with its attention feature weights, followed by normalization; the category of each node is finally determined by the classification model, described by the formula O_pre = GCN(·), where O_pre denotes the output class and GCN(·) denotes executing the graph convolutional neural network;
Step 3.3: compute accuracy measures between the output results and the true ground object types, such as the Kappa coefficient, overall accuracy and confusion matrix, to evaluate the classification performance of the model.
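Continuing the illustrative sketch above, the accuracy measures of Step 3.3 (overall accuracy, Kappa coefficient and confusion matrix) could be computed with scikit-learn; the test split simply reuses the toy mask from the previous block.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

model.eval()
with torch.no_grad():
    o_pre = model(x, edge_index).argmax(dim=1)    # O_pre: predicted class per node

test_mask = ~train_mask
y_true = y[test_mask].numpy()
y_pred = o_pre[test_mask].numpy()

print("Overall accuracy :", accuracy_score(y_true, y_pred))
print("Kappa coefficient:", cohen_kappa_score(y_true, y_pred))
print("Confusion matrix :\n", confusion_matrix(y_true, y_pred))
```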
According to the method, a feature description set of remote sensing image ground object classes is constructed using superpixel segmentation and a majority voting mechanism, obtaining an approximate profile of each ground object class while greatly reducing the amount of data needed to construct the image graph, which is beneficial for handling remote sensing ground object classification with large data samples; finally, a graph convolutional neural network is trained on the fused feature map set so as to accurately distinguish ground object types. The invention is highly beneficial to research on remote sensing natural disaster monitoring, land cover type discrimination, urban planning, and ecological environment change monitoring.
The above example only shows one embodiment of the present invention, and although its description is specific and detailed, it is not to be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, all of which fall within the scope of the present invention.
Claims (5)
1. A remote sensing image ground object identification method based on a graph convolutional neural network, characterized by comprising the following steps:
Step 1: take a scene remote sensing image and perform preprocessing operations including geometric correction, atmospheric correction and image enhancement; convert the color remote sensing image into a grayscale image or the CIELAB color space under XY coordinates; perform simple linear iterative clustering on it, initialize the seed points, and generate compact, approximately uniform superpixels;
Step 2: based on the generated superpixel remote sensing image graph, create the nodes and edges of the graph, and extract texture features and color features from the image as the graph input feature matrix;
Step 3: assign labels with the majority voting mechanism of ensemble learning: if a ground object class receives more than half of the votes within a superpixel, predict the superpixel as that class, otherwise reject the label, thus forming the label data set of the graph;
Step 4: automatically extract feature maps from the high-resolution remote sensing image with a deep convolutional neural network to obtain node and edge input data for the image graph and form a feature description set of the remote sensing image;
Step 5: train a graph convolutional neural network model on the remote sensing image feature description set to obtain an image ground object classification model and classify the test samples.
2. The remote sensing image ground object identification method based on a graph convolutional neural network according to claim 1, characterized in that: in steps 1-4, a feature description set of the remote sensing image ground object classes is extracted.
3. The remote sensing image ground object identification method based on a graph convolutional neural network according to claim 1, characterized in that: step 5 provides a ground object classification model based on a graph convolutional neural network.
4. The remote sensing image ground object identification method based on a graph convolutional neural network according to claim 1, characterized in that extracting texture features and color features from the image in step 2 and automatically extracting the remote sensing image feature map set with the deep convolutional neural network comprise: assuming K ground object categories in total, the feature maps automatically extracted by the convolutional neural network can be expressed as X_k^n = CNN(·), where 1 ≤ k ≤ M indexes the k-th remote sensing image block, 1 ≤ n ≤ N indexes the n-th superpixel, and CNN(·) denotes executing the deep convolutional neural network.
5. The remote sensing image ground object identification method based on a graph convolutional neural network according to claim 1, characterized in that: in step 4, the high-resolution remote sensing image feature maps are automatically extracted with a deep convolutional neural network, and the remote sensing image ground object identification and classification process based on the graph convolutional neural network is carried out; given the feature maps automatically extracted by the convolutional neural network and assuming K ground object categories in total, the ground object identification process consists of the following steps:
Step 4.1: the remote sensing image ground object class feature atlas {X_k} (1 ≤ k ≤ M) generated in step 2 is connected to generate a fused feature graph, and the graph convolutional neural network is trained; a leave-one-out cross-validation process is performed to obtain the training parameters θ of the graph convolutional neural network ground object classification model; the graph convolutional neural network has 2 layers, followed by an adaptive attention mechanism layer so that each node in the graph can be assigned a different weight according to the features of its adjacent nodes; the loss function is NLL_LOSS, with log_softmax activation applied to the input parameters; the Adam optimizer with adaptive learning rate is selected for optimization, dynamically adjusting the learning rate using first-moment and second-moment estimates of the gradient;
Step 4.2: a graph convolution operation without the attention mechanism layer and normalization processing are performed on the adaptive feature description set extracted by the attention-augmented graph convolutional neural network together with its attention feature weights; the category of each node is finally determined by the classification model, expressed as O_pre = GCN(·), where O_pre denotes the output class and GCN(·) denotes executing the graph convolutional neural network;
Step 4.3: compute accuracy measures between the output results and the true ground object types, such as the Kappa coefficient, overall accuracy and confusion matrix, to evaluate the classification performance of the model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011294356.2A CN112347970B (en) | 2020-11-18 | 2020-11-18 | Remote sensing image ground object identification method based on graph convolution neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011294356.2A CN112347970B (en) | 2020-11-18 | 2020-11-18 | Remote sensing image ground object identification method based on graph convolution neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112347970A (en) | 2021-02-09 |
CN112347970B (en) | 2024-04-05 |
Family
ID=74362836
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011294356.2A Active CN112347970B (en) | 2020-11-18 | 2020-11-18 | Remote sensing image ground object identification method based on graph convolution neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112347970B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105550709A (en) * | 2015-12-14 | 2016-05-04 | 武汉大学 | Remote sensing image power transmission line corridor forest region extraction method |
CN110084294A (en) * | 2019-04-18 | 2019-08-02 | 北京师范大学 | A kind of Remote Image Classification based on multiple dimensioned depth characteristic |
WO2020215557A1 (en) * | 2019-04-24 | 2020-10-29 | 平安科技(深圳)有限公司 | Medical image interpretation method and apparatus, computer device and storage medium |
CN111461258A (en) * | 2020-04-26 | 2020-07-28 | 武汉大学 | Remote sensing image scene classification method of coupling convolution neural network and graph convolution network |
Non-Patent Citations (1)
Title |
---|
LIU Wanjun; LIANG Xuejian; QU Haicheng: "Adaptive enhancement convolutional neural network for image recognition", Journal of Image and Graphics, no. 12, 16 December 2017 (2017-12-16) *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113298129A (en) * | 2021-05-14 | 2021-08-24 | 西安理工大学 | Polarized SAR image classification method based on superpixel and graph convolution network |
CN113298129B (en) * | 2021-05-14 | 2024-02-02 | 西安理工大学 | Polarized SAR image classification method based on superpixel and graph convolution network |
CN113435268A (en) * | 2021-06-09 | 2021-09-24 | 武汉理工大学 | Earthquake disaster area remote sensing image interpretation method based on graph transformation knowledge embedding algorithm |
CN113469226A (en) * | 2021-06-16 | 2021-10-01 | 中国地质大学(武汉) | Street view image-based land utilization classification method and system |
CN113780188A (en) * | 2021-09-14 | 2021-12-10 | 福州大学 | Method for automatically identifying combustible substances of surface fire model based on field pictures |
CN113780188B (en) * | 2021-09-14 | 2023-08-08 | 福州大学 | Automatic combustible identification method of surface fire model based on field pictures |
CN116030355A (en) * | 2023-03-30 | 2023-04-28 | 武汉城市职业学院 | Ground object classification method and system |
CN116703744A (en) * | 2023-04-18 | 2023-09-05 | 二十一世纪空间技术应用股份有限公司 | Remote sensing image dodging and color homogenizing method and device based on convolutional neural network |
CN116703744B (en) * | 2023-04-18 | 2024-05-28 | 二十一世纪空间技术应用股份有限公司 | Remote sensing image dodging and color homogenizing method and device based on convolutional neural network |
CN116934754A (en) * | 2023-09-18 | 2023-10-24 | 四川大学华西第二医院 | Liver image identification method and device based on graph neural network |
CN116934754B (en) * | 2023-09-18 | 2023-12-01 | 四川大学华西第二医院 | Liver image identification method and device based on graph neural network |
Also Published As
Publication number | Publication date |
---|---|
CN112347970B (en) | 2024-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112347970A (en) | Remote sensing image ground object identification method based on graph convolution neural network | |
CN107092870B (en) | A kind of high resolution image Semantic features extraction method | |
CN110619282B (en) | Automatic extraction method for unmanned aerial vehicle orthoscopic image building | |
CN113449594B (en) | Multilayer network combined remote sensing image ground semantic segmentation and area calculation method | |
CN111368896A (en) | Hyperspectral remote sensing image classification method based on dense residual three-dimensional convolutional neural network | |
CN113313164B (en) | Digital pathological image classification method and system based on super-pixel segmentation and graph convolution | |
CN108052966A (en) | Remote sensing images scene based on convolutional neural networks automatically extracts and sorting technique | |
CN112052783A (en) | High-resolution image weak supervision building extraction method combining pixel semantic association and boundary attention | |
CN113223042B (en) | Intelligent acquisition method and equipment for remote sensing image deep learning sample | |
CN110598564B (en) | OpenStreetMap-based high-spatial-resolution remote sensing image transfer learning classification method | |
CN113888547A (en) | Non-supervision domain self-adaptive remote sensing road semantic segmentation method based on GAN network | |
CN114694038A (en) | High-resolution remote sensing image classification method and system based on deep learning | |
CN110807485B (en) | Method for fusing two-classification semantic segmentation maps into multi-classification semantic map based on high-resolution remote sensing image | |
CN113988147B (en) | Multi-label classification method and device for remote sensing image scene based on graph network, and multi-label retrieval method and device | |
CN113032613B (en) | Three-dimensional model retrieval method based on interactive attention convolution neural network | |
CN114842264A (en) | Hyperspectral image classification method based on multi-scale spatial spectral feature joint learning | |
CN112329559A (en) | Method for detecting homestead target based on deep convolutional neural network | |
CN112001293A (en) | Remote sensing image ground object classification method combining multi-scale information and coding and decoding network | |
CN113052121B (en) | Multi-level network map intelligent generation method based on remote sensing image | |
CN112381730B (en) | Remote sensing image data amplification method | |
CN118135209A (en) | Weak supervision semantic segmentation method based on shape block semantic association degree | |
CN115359304B (en) | Single image feature grouping-oriented causal invariance learning method and system | |
CN115082778B (en) | Multi-branch learning-based homestead identification method and system | |
CN116310621A (en) | Feature library construction-based few-sample image recognition method | |
CN109934292B (en) | Unbalanced polarization SAR terrain classification method based on cost sensitivity assisted learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |