CN112347970A - A method for remote sensing image recognition based on graph convolutional neural network - Google Patents

A method for remote sensing image recognition based on graph convolutional neural network

Info

Publication number
CN112347970A
CN112347970A (application CN202011294356.2A)
Authority
CN
China
Prior art keywords
neural network
remote sensing
sensing image
feature
graph convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011294356.2A
Other languages
Chinese (zh)
Other versions
CN112347970B (en)
Inventor
王倪传
何爽
卢霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Ocean University
Original Assignee
Jiangsu Ocean University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Ocean University filed Critical Jiangsu Ocean University
Priority to CN202011294356.2A priority Critical patent/CN112347970B/en
Publication of CN112347970A publication Critical patent/CN112347970A/en
Application granted granted Critical
Publication of CN112347970B publication Critical patent/CN112347970B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/194Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract


The invention discloses a remote sensing image ground object recognition method based on a graph convolutional neural network. A scene of remote sensing imagery is divided into blocks, and training and test sets are produced; superpixel segmentation is performed with the simple linear iterative clustering algorithm; superpixel labels are assigned by a voting-based combination strategy; features are extracted automatically with a deep convolutional neural network to build the input parameter set of the graph convolutional neural network; and the graph convolutional neural network model is trained with an adaptive attention mechanism. By first using superpixel segmentation and a majority voting mechanism to construct a feature description set of the ground object classes in the remote sensing image, the invention obtains an approximate characterization of those classes while greatly reducing the amount of data needed to build the graph of a single image, which helps handle ground object classification of remote sensing imagery under large data samples; finally, the graph convolutional neural network is trained on the fused feature atlas so that ground object types can be distinguished accurately.


Description

Remote sensing image ground object identification method based on graph convolution neural network
Technical Field
The invention relates to the field of ground object type identification based on a remote sensing satellite imaging technology, in particular to a remote sensing image ground object identification method based on a graph convolution neural network.
Background
With the rapid development of artificial intelligence, deep learning has achieved breakthroughs in many fields. Convolutional neural networks, relying on local feature extraction, parameter sharing, and global pooling, perform outstandingly in image recognition. In remote sensing image processing, classification based on convolutional neural networks is one of the current research hotspots: classification accuracy is high and generalization ability is strong. However, some problems remain. In model construction, the layer structure of the convolutional neural network used for classification is rarely adjusted directly; instead, a classical convolutional neural network is used to extract features from the classification samples and a traditional remote sensing image classifier is then applied, so there is little research on, or application of, network structure adjustment and transfer learning for convolutional neural networks in remote sensing image classification. Because remote sensing imagery carries a large amount of information and its utilization lags far behind data acquisition capacity, rapidly classifying massive images is the key to making better use of remote sensing data. Recently, more and more research has begun to apply deep learning to graph data. Inspired by the great success of convolutional neural networks in computer vision, many methods have emerged that redefine the convolution operation on graph (spectral) data. A graph convolutional neural network aggregates information from neighboring nodes and performs convolution directly on the graph structure; with a sampling strategy, computation can be carried out on batches of nodes instead of the whole graph, which effectively improves efficiency. However, how to apply graph convolutional neural networks to remote sensing imagery and build a high-accuracy classifier still requires further research and development. The invention therefore provides ground object classification and recognition of remote sensing images based on a graph convolutional neural network, so that ground object types can be identified accurately from a portion of ground-truth data.
Disclosure of Invention
The invention aims to provide a remote sensing image ground object recognition method based on a graph convolutional neural network, building on remote sensing satellite imaging technology. Given different remote sensing image training data sets, feature maps of the ground object types in the training data are first extracted to form a feature description set for every ground object type; a deep convolutional neural network is then used to learn and fuse the feature maps of every type in the description set, yielding feature descriptions of the nodes and edges of the ground object graph; next, a graph convolutional neural network is trained as a node classification model on the fused graph features and the parameters of the remote sensing image classification model are determined; finally, unlabeled image ground objects are recognized accurately.
In order to achieve this purpose, the invention provides the following technical scheme: a remote sensing image ground object identification method based on a graph convolutional neural network, comprising the following steps:
step 1: taking a scene of remote sensing imagery and carrying out the relevant preprocessing operations, including geometric correction, atmospheric correction and image enhancement; converting the color remote sensing image into a gray image, or into the CIELAB color space with XY coordinates; carrying out simple linear iterative clustering on it, initializing seed points, and generating compact, approximately uniform superpixels;
step 2: based on the generated superpixel remote sensing image graph, creating the nodes and edges of the graph, and extracting texture features and color features of the image as the graph input feature matrix;
step 3: making labels with the majority voting mechanism of ensemble learning: if a ground object class receives more than half of the votes for a superpixel, predicting that superpixel as that class, and otherwise refusing to label it, thereby forming the label data set of the graph;
step 4: automatically extracting the feature map of the high-resolution remote sensing image with a deep convolutional neural network to obtain the node and edge input data of the image graph and form a feature description set of the remote sensing image;
step 5: training a graph convolutional neural network model on the remote sensing image feature description set to obtain an image ground object classification model and classifying the test samples.
As a preferred technical solution of the present invention, in steps 1 to 4 a feature description set of the ground object types of the remote sensing image is extracted, and in step 5 a ground object classification model based on a graph convolutional neural network is proposed.
The invention has the beneficial effects that: the method constructs a feature description set of the remote sensing image ground object classes using superpixel segmentation and a majority voting mechanism, obtaining an approximate portrait of the ground object classes while greatly reducing the amount of data needed to build the graph of a single image, which helps handle the ground object classification of remote sensing imagery under large data samples; finally, a graph convolutional neural network is trained on the fused feature atlas so that ground object types can be distinguished accurately. The invention is highly beneficial to research on remote sensing natural disaster monitoring, land cover type discrimination, urban planning, ecological environment change monitoring, and related topics.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a detailed flow chart of the present invention.
Detailed Description
The following detailed description of the preferred embodiments of the present invention, taken in conjunction with the accompanying drawings, will make the advantages and features of the invention more readily understood by those skilled in the art, and thus will more clearly and distinctly define the scope of the invention.
Embodiment: referring to FIGS. 1-2, the present invention provides a technical solution: a remote sensing image ground object identification method based on a graph convolutional neural network, comprising the following steps:
step 1: taking a scene of remote sensing imagery and carrying out the relevant preprocessing operations, including geometric correction, atmospheric correction and image enhancement; converting the color remote sensing image into a gray image, or into the CIELAB color space with XY coordinates; carrying out simple linear iterative clustering on it, initializing seed points, and generating compact, approximately uniform superpixels;
step 2: based on the generated superpixel remote sensing image graph, creating the nodes and edges of the graph, and extracting texture features and color features of the image as the graph input feature matrix;
step 3: making labels with the majority voting mechanism of ensemble learning: if a ground object class receives more than half of the votes for a superpixel, predicting that superpixel as that class, and otherwise refusing to label it, thereby forming the label data set of the graph;
step 4: automatically extracting the feature map of the high-resolution remote sensing image with a deep convolutional neural network to obtain the node and edge input data of the image graph and form a feature description set of the remote sensing image;
step 5: training a graph convolutional neural network model on the remote sensing image feature description set to obtain an image ground object classification model and classifying the test samples.
In the steps 1-4, a feature description set of the ground feature types of the remote sensing images is extracted, and in the step 5, a ground feature classification model based on a graph convolution neural network is provided.
For convenience of description, terms specific to the present invention are first defined as follows:
Remote sensing image ground object category feature description set:
the feature description set of the remote sensing image ground object categories refers to the following: based on the feature maps of the graph nodes and edges extracted by the deep convolutional network, graph feature maps of the different ground object types are extracted from the block images partitioned into superpixels, forming a graph-based feature description set of the remote sensing image ground object types that approximately describes the category information of the image.
The specific method comprises the following steps:
step 1: forming the remote sensing image feature description set;
a scene of remote sensing imagery is taken and the relevant preprocessing operations, including geometric correction, atmospheric correction, image enhancement and the like, are carried out. Suppose the remote sensing image is divided into M blocks; the blocks are processed one by one, and the corresponding ground-truth labels of the image are imported together. The color image is converted into a gray image with XY coordinates, simple linear iterative clustering is carried out on the gray image, the seed points (cluster centers) are initialized, and the seed points are distributed evenly over the image according to the preset number of superpixels; a distance metric is then constructed for the feature vectors formed from the gray image, and a local clustering process is carried out on the pixels of each block image to generate N compact, approximately uniform superpixels. The nodes and edges of the graph are then created for the generated superpixels; a local binary pattern is computed for the image, each superpixel is compared with its neighboring superpixels, and the result is stored as a binary number to form texture features that serve as the graph input features. Labels are then made with the majority voting mechanism: if a ground object class receives more than half of the votes for a superpixel, that superpixel is predicted as that class, and otherwise labeling is refused; the remote sensing image ground object recognition problem is thereby modeled as a classification problem;
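A minimal sketch of this block-wise preprocessing, assuming scikit-image is used for SLIC segmentation and local binary patterns (the patent does not name a specific library); the function names, n_segments, compactness and the LBP parameters are illustrative choices rather than values from the patent. The second helper builds the graph edges from superpixel adjacency.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern
from skimage.segmentation import slic


def superpixel_features_and_labels(block_rgb, pixel_labels, n_segments=500, compactness=10.0):
    """Segment one image block into superpixels and build per-superpixel
    LBP texture + mean-colour features plus majority-vote labels.

    block_rgb    : (H, W, 3) float array, one preprocessed image block
    pixel_labels : (H, W) int array of ground-truth classes (0 = unlabeled)
    """
    # SLIC superpixel segmentation (local clustering in colour + XY space).
    segments = slic(block_rgb, n_segments=n_segments, compactness=compactness, start_label=0)

    gray = rgb2gray(block_rgb)
    # Uniform LBP code per pixel; a histogram per superpixel gives its texture feature.
    lbp = local_binary_pattern(gray, P=8, R=1.0, method="uniform")
    n_bins = 10  # 'uniform' LBP with P=8 yields values in [0, 9]

    features, labels = [], []
    for sp in np.unique(segments):
        mask = segments == sp
        hist, _ = np.histogram(lbp[mask], bins=n_bins, range=(0, n_bins), density=True)
        mean_color = block_rgb[mask].mean(axis=0)            # simple colour feature
        features.append(np.concatenate([hist, mean_color]))

        # Majority voting: keep a label only if it gets more than half of the pixel votes.
        votes = np.bincount(pixel_labels[mask])
        winner = int(votes.argmax())
        if winner != 0 and votes[winner] > 0.5 * mask.sum():
            labels.append(winner)
        else:
            labels.append(-1)                                 # rejected / unlabeled superpixel
    return segments, np.stack(features), np.array(labels)


def superpixel_adjacency(segments):
    """Dense adjacency matrix: superpixels that share a pixel boundary become graph edges."""
    n = int(segments.max()) + 1
    adj = np.zeros((n, n), dtype=np.float32)
    for a, b in [(segments[:, :-1], segments[:, 1:]), (segments[:-1, :], segments[1:, :])]:
        diff = a != b
        adj[a[diff], b[diff]] = 1.0
        adj[b[diff], a[diff]] = 1.0
    return adj
```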
step 2: the deep convolutional neural network automatically extracts the remote sensing image feature atlas: assuming K ground object categories in total, the feature atlas automatically extracted by the convolutional neural network can be expressed as X = { X_n^k | 1 ≤ k ≤ M, 1 ≤ n ≤ N }, with X_n^k = CNN(·), where 1 ≤ k ≤ M denotes the k-th remote sensing image block, 1 ≤ n ≤ N denotes the n-th superpixel, and CNN(·) denotes execution of the deep convolutional neural network;
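The patent does not fix a particular CNN architecture for this step, so the sketch below stands in with a small PyTorch convolutional encoder applied to a fixed-size patch cropped around each superpixel; the patch size, channel widths and the helper name extract_superpixel_features are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class SmallCNNEncoder(nn.Module):
    """Toy convolutional feature extractor standing in for X_n^k = CNN(.)."""

    def __init__(self, in_channels=3, feat_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global pooling -> one vector per patch
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, patches):                # patches: (B, C, H, W)
        x = self.features(patches).flatten(1)
        return self.fc(x)                       # (B, feat_dim) node feature vectors


@torch.no_grad()
def extract_superpixel_features(encoder, patches):
    """patches: one fixed-size image patch per superpixel, e.g. shape (N, 3, 32, 32)."""
    encoder.eval()
    return encoder(patches)                     # one feature vector per graph node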
step 3: the remote sensing image ground object recognition and classification process based on the graph convolutional neural network comprises the following steps: according to the feature maps automatically extracted by the fully connected convolutional neural network, and assuming K ground object categories in total, the ground object recognition process consists of the following steps;
step 3.1: the remote sensing image ground object category feature atlas generated in step 2, { X_n^k }, 1 ≤ k ≤ M, is concatenated to generate a fused feature map and a graph convolutional neural network is trained; a leave-one-out cross-validation process is carried out to obtain the training parameters θ of the graph convolutional neural network ground object classification model. The graph convolutional neural network has 2 layers, and an adaptive attention mechanism layer is added after it so that each node in the graph can be assigned a different weight according to the features of its neighboring nodes; the loss function is NLL_LOSS, the input parameters are activated with the log_softmax function, the Adam optimizer with an adaptive learning rate is selected as the optimization method, and the learning rate is adjusted dynamically using first-moment and second-moment estimates of the gradients;
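The following self-contained PyTorch sketch mirrors the training setup described above: two graph-convolution layers, a simple per-node attention gate computed from neighbouring features, log_softmax outputs trained with NLLLoss, and the Adam optimizer. It uses a dense normalized adjacency matrix rather than a graph library, and the layer sizes, the exact attention formulation and the helper names are assumptions, not the patent's specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adjacency(adj):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 of a dense adjacency matrix."""
    a_hat = adj + torch.eye(adj.size(0), device=adj.device)
    d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt


class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        return self.linear(a_hat @ x)             # aggregate neighbours, then project


class GCNWithNodeAttention(nn.Module):
    """Two graph-convolution layers followed by a simple per-node attention gate."""

    def __init__(self, in_dim, hidden_dim, n_classes):
        super().__init__()
        self.gc1 = GraphConv(in_dim, hidden_dim)
        self.gc2 = GraphConv(hidden_dim, n_classes)
        self.attn = nn.Linear(hidden_dim, 1)       # scalar weight per node

    def forward(self, x, a_hat):
        h = F.relu(self.gc1(x, a_hat))
        w = torch.sigmoid(self.attn(a_hat @ h))    # weight derived from neighbouring features
        logits = self.gc2(w * h, a_hat)
        return F.log_softmax(logits, dim=1)        # log_softmax output feeds NLLLoss


def train_gcn(model, x, a_hat, labels, train_mask, epochs=200, lr=0.01):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # adapts via gradient moment estimates
    loss_fn = nn.NLLLoss()
    for _ in range(epochs):
        model.train()
        optimizer.zero_grad()
        out = model(x, a_hat)
        loss = loss_fn(out[train_mask], labels[train_mask])
        loss.backward()
        optimizer.step()
    return model
```

In this sketch, x would hold the fused per-superpixel feature vectors from step 2, a_hat the normalized superpixel adjacency, and train_mask the superpixels that received a majority-vote label.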
step 3.2: a graph convolution operation without the attention mechanism layer is performed on the adaptive feature description set extracted by the attention-augmented graph convolutional neural network, together with its attention feature weights, followed by normalization; finally, the category of each node is determined by the classification model, described by the formula O_pre = GCN(·), where O_pre denotes the class of the output and GCN(·) denotes execution of the graph convolutional neural network;
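A short inference sketch corresponding to O_pre = GCN(·), reusing the hypothetical model from the previous block: the trained network is run on all nodes and each node's class is taken as the arg-max of its log-softmax scores.

```python
import torch


@torch.no_grad()
def predict(model, x, a_hat):
    """O_pre = GCN(.): predicted class index for every superpixel node."""
    model.eval()
    log_probs = model(x, a_hat)       # log-softmax scores, already normalized per node
    return log_probs.argmax(dim=1)    # class with the highest score for each node
```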
step 3.3: an accuracy assessment is carried out between the output results and the true ground object types, and the corresponding metrics such as the Kappa coefficient, the overall accuracy and the confusion matrix are computed, so as to judge the classification performance of the model.
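The accuracy assessment could be done with scikit-learn's standard metrics, as sketched below; the function name evaluate_predictions and the use of scikit-learn are illustrative assumptions rather than part of the patent.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix


def evaluate_predictions(y_true, y_pred):
    """Compare predicted superpixel classes with ground-truth labels."""
    return {
        "overall_accuracy": accuracy_score(y_true, y_pred),
        "kappa": cohen_kappa_score(y_true, y_pred),
        "confusion_matrix": confusion_matrix(y_true, y_pred),
    }


# Tiny example with dummy labels:
if __name__ == "__main__":
    y_true = np.array([0, 1, 2, 2, 1, 0])
    y_pred = np.array([0, 1, 2, 1, 1, 0])
    print(evaluate_predictions(y_true, y_pred))
```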
The method constructs a feature description set of the remote sensing image ground object classes using superpixel segmentation and a majority voting mechanism, obtaining an approximate portrait of the ground object classes while greatly reducing the amount of data needed to build the graph of a single image, which helps handle the ground object classification of remote sensing imagery under large data samples; finally, a graph convolutional neural network is trained on the fused feature atlas so that ground object types can be distinguished accurately. The invention is highly beneficial to research on remote sensing natural disaster monitoring, land cover type discrimination, urban planning, ecological environment change monitoring, and related topics.
The above embodiment shows only some implementations of the present invention, and although its description is relatively specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention.

Claims (5)

1. A remote sensing image ground object identification method based on a graph convolutional neural network, characterized by comprising the following steps:
step 1: taking a scene of remote sensing imagery and carrying out the relevant preprocessing operations, including geometric correction, atmospheric correction and image enhancement; converting the color remote sensing image into a gray image, or into the CIELAB color space with XY coordinates; carrying out simple linear iterative clustering on it, initializing seed points, and generating compact, approximately uniform superpixels;
step 2: based on the generated superpixel remote sensing image graph, creating the nodes and edges of the graph, and extracting texture features and color features of the image as the graph input feature matrix;
step 3: making labels with the majority voting mechanism of ensemble learning: if a ground object class receives more than half of the votes for a superpixel, predicting that superpixel as that class, and otherwise refusing to label it, thereby forming the label data set of the graph;
step 4: automatically extracting the feature map of the high-resolution remote sensing image with a deep convolutional neural network to obtain the node and edge input data of the image graph and form a feature description set of the remote sensing image;
step 5: training a graph convolutional neural network model on the remote sensing image feature description set to obtain an image ground object classification model and classifying the test samples.
2. The remote sensing image ground object identification method based on a graph convolutional neural network as claimed in claim 1, characterized in that in steps 1 to 4 a feature description set of the remote sensing image ground object classes is extracted.
3. The remote sensing image ground object identification method based on a graph convolutional neural network as claimed in claim 1, characterized in that step 5 provides a ground object classification model based on a graph convolutional neural network.
4. The remote sensing image ground object identification method based on a graph convolutional neural network as claimed in claim 1, characterized in that the extraction of texture features and color features from the image in step 2 and the automatic extraction of the remote sensing image feature atlas by the deep convolutional neural network comprise: assuming K ground object categories in total, the feature atlas automatically extracted by the convolutional neural network can be expressed as X = { X_n^k | 1 ≤ k ≤ M, 1 ≤ n ≤ N }, with X_n^k = CNN(·), where 1 ≤ k ≤ M denotes the k-th remote sensing image block, 1 ≤ n ≤ N denotes the n-th superpixel, and CNN(·) denotes execution of the deep convolutional neural network.
5. The remote sensing image ground object identification method based on a graph convolutional neural network as claimed in claim 1, characterized in that in step 4 the feature map of the high-resolution remote sensing image is automatically extracted with a deep convolutional neural network, and the ground object recognition and classification process based on the graph convolutional neural network, according to the feature maps automatically extracted by the fully connected convolutional neural network and assuming K ground object categories in total, consists of the following steps:
step 4.1: the remote sensing image ground object category feature atlas generated in step 2, { X_n^k }, 1 ≤ k ≤ M, is concatenated to generate a fused feature map and a graph convolutional neural network is trained; a leave-one-out cross-validation process is carried out to obtain the training parameters θ of the graph convolutional neural network ground object classification model, wherein the graph convolutional neural network has 2 layers and an adaptive attention mechanism layer is added after it so that each node in the graph can be assigned a different weight according to the features of its neighboring nodes; the loss function is NLL_LOSS, the input parameters are activated with the log_softmax function, the Adam optimizer with an adaptive learning rate is selected as the optimization method, and the learning rate is adjusted dynamically using first-moment and second-moment estimates of the gradients;
step 4.2: a graph convolution operation without the attention mechanism layer and normalization are performed on the adaptive feature description set extracted by the attention-augmented graph convolutional neural network, together with its attention feature weights; finally, the category of each node is determined by the classification model, expressed as O_pre = GCN(·), where O_pre denotes the class of the output and GCN(·) denotes execution of the graph convolutional neural network;
step 4.3: an accuracy assessment is carried out between the output results and the true ground object types, and the corresponding metrics such as the Kappa coefficient, the overall accuracy and the confusion matrix are computed, so as to judge the classification performance of the model.
CN202011294356.2A 2020-11-18 2020-11-18 Remote sensing image ground object identification method based on graph convolution neural network Active CN112347970B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011294356.2A CN112347970B (en) 2020-11-18 2020-11-18 Remote sensing image ground object identification method based on graph convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011294356.2A CN112347970B (en) 2020-11-18 2020-11-18 Remote sensing image ground object identification method based on graph convolution neural network

Publications (2)

Publication Number Publication Date
CN112347970A true CN112347970A (en) 2021-02-09
CN112347970B CN112347970B (en) 2024-04-05

Family

ID=74362836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011294356.2A Active CN112347970B (en) 2020-11-18 2020-11-18 Remote sensing image ground object identification method based on graph convolution neural network

Country Status (1)

Country Link
CN (1) CN112347970B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298129A (en) * 2021-05-14 2021-08-24 西安理工大学 Polarized SAR image classification method based on superpixel and graph convolution network
CN113435268A (en) * 2021-06-09 2021-09-24 武汉理工大学 Earthquake disaster area remote sensing image interpretation method based on graph transformation knowledge embedding algorithm
CN113469226A (en) * 2021-06-16 2021-10-01 中国地质大学(武汉) Street view image-based land utilization classification method and system
CN113780188A (en) * 2021-09-14 2021-12-10 福州大学 Method for automatically identifying combustible substances of surface fire model based on field pictures
CN114022786A (en) * 2021-12-10 2022-02-08 深圳大学 Hyperspectral image classification method based on graph convolutional network
CN114596499A (en) * 2021-12-03 2022-06-07 江苏海洋大学 A dual-stream encoding and decoding method for feature recognition in coastal wetlands high-definition remote sensing images
CN116030355A (en) * 2023-03-30 2023-04-28 武汉城市职业学院 Ground object classification method and system
CN116703744A (en) * 2023-04-18 2023-09-05 二十一世纪空间技术应用股份有限公司 Remote sensing image dodging and color homogenizing method and device based on convolutional neural network
CN116934754A (en) * 2023-09-18 2023-10-24 四川大学华西第二医院 Liver image recognition method and device based on graph neural network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550709A (en) * 2015-12-14 2016-05-04 武汉大学 Remote sensing image power transmission line corridor forest region extraction method
CN110084294A (en) * 2019-04-18 2019-08-02 北京师范大学 A kind of Remote Image Classification based on multiple dimensioned depth characteristic
CN111461258A (en) * 2020-04-26 2020-07-28 武汉大学 Remote sensing image scene classification method of coupling convolution neural network and graph convolution network
WO2020215557A1 (en) * 2019-04-24 2020-10-29 平安科技(深圳)有限公司 Medical image interpretation method and apparatus, computer device and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550709A (en) * 2015-12-14 2016-05-04 武汉大学 Remote sensing image power transmission line corridor forest region extraction method
CN110084294A (en) * 2019-04-18 2019-08-02 北京师范大学 A kind of Remote Image Classification based on multiple dimensioned depth characteristic
WO2020215557A1 (en) * 2019-04-24 2020-10-29 平安科技(深圳)有限公司 Medical image interpretation method and apparatus, computer device and storage medium
CN111461258A (en) * 2020-04-26 2020-07-28 武汉大学 Remote sensing image scene classification method of coupling convolution neural network and graph convolution network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘万军; 梁雪剑; 曲海成: "Adaptive enhancement convolutional neural network for image recognition" (自适应增强卷积神经网络图像识别), Journal of Image and Graphics (中国图象图形学报), no. 12, 16 December 2017 (2017-12-16) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298129B (en) * 2021-05-14 2024-02-02 西安理工大学 Polarized SAR image classification method based on superpixel and graph convolution network
CN113298129A (en) * 2021-05-14 2021-08-24 西安理工大学 Polarized SAR image classification method based on superpixel and graph convolution network
CN113435268A (en) * 2021-06-09 2021-09-24 武汉理工大学 Earthquake disaster area remote sensing image interpretation method based on graph transformation knowledge embedding algorithm
CN113469226A (en) * 2021-06-16 2021-10-01 中国地质大学(武汉) Street view image-based land utilization classification method and system
CN113780188A (en) * 2021-09-14 2021-12-10 福州大学 Method for automatically identifying combustible substances of surface fire model based on field pictures
CN113780188B (en) * 2021-09-14 2023-08-08 福州大学 Automatic combustible identification method of surface fire model based on field pictures
CN114596499A (en) * 2021-12-03 2022-06-07 江苏海洋大学 A dual-stream encoding and decoding method for feature recognition in coastal wetlands high-definition remote sensing images
CN114022786A (en) * 2021-12-10 2022-02-08 深圳大学 Hyperspectral image classification method based on graph convolutional network
CN116030355A (en) * 2023-03-30 2023-04-28 武汉城市职业学院 Ground object classification method and system
CN116703744A (en) * 2023-04-18 2023-09-05 二十一世纪空间技术应用股份有限公司 Remote sensing image dodging and color homogenizing method and device based on convolutional neural network
CN116703744B (en) * 2023-04-18 2024-05-28 二十一世纪空间技术应用股份有限公司 Remote sensing image dodging and color homogenizing method and device based on convolutional neural network
CN116934754A (en) * 2023-09-18 2023-10-24 四川大学华西第二医院 Liver image recognition method and device based on graph neural network
CN116934754B (en) * 2023-09-18 2023-12-01 四川大学华西第二医院 Liver image recognition method and device based on graph neural network

Also Published As

Publication number Publication date
CN112347970B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
CN112347970A (en) A method for remote sensing image recognition based on graph convolutional neural network
CN113807210B (en) Remote sensing image semantic segmentation method based on pyramid segmentation attention module
CN113449594B (en) Multilayer network combined remote sensing image ground semantic segmentation and area calculation method
CN107092870B (en) A kind of high resolution image Semantic features extraction method
Fan et al. Semi-MCNN: A semisupervised multi-CNN ensemble learning method for urban land cover classification using submeter HRRS images
CN113888547A (en) Unsupervised Domain Adaptive Remote Sensing Road Semantic Segmentation Method Based on GAN Network
CN108805070A (en) A kind of deep learning pedestrian detection method based on built-in terminal
CN106650690A (en) Night vision image scene identification method based on deep convolution-deconvolution neural network
CN111489370B (en) A segmentation method of remote sensing images based on deep learning
CN113223042B (en) Intelligent acquisition method and equipment for remote sensing image deep learning sample
CN110633708A (en) Deep network significance detection method based on global model and local optimization
CN110399819A (en) A kind of remote sensing image residential block extraction method based on deep learning
CN115471467A (en) A method for detecting building changes in high-resolution optical remote sensing images
CN113537173B (en) A Face Image Authenticity Recognition Method Based on Facial Patch Mapping
CN102982539A (en) Characteristic self-adaption image common segmentation method based on image complexity
CN112329559A (en) Method for detecting homestead target based on deep convolutional neural network
CN116682021A (en) A Method for Extracting Building Vector Outline Data from High Resolution Remote Sensing Image
CN118298182B (en) Method and system for remote sensing mapping of cultivated land based on cross-resolution semantic segmentation
CN112329771B (en) Deep learning-based building material sample identification method
CN115497006B (en) Urban remote sensing image change depth monitoring method and system based on dynamic mixing strategy
CN112465821A (en) Multi-scale pest image detection method based on boundary key point perception
CN115908924A (en) A method and system for semantic segmentation of small-sample hyperspectral images based on multiple classifiers
CN112381730B (en) Remote sensing image data amplification method
CN116052018B (en) Remote sensing image interpretation method based on life learning
CN115082778B (en) Multi-branch learning-based homestead identification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant