CN117437234A - Aerial photo ground object classification and change detection method based on graph neural network

Publication number: CN117437234A (application CN202311766572.6A)
Authority: CN (China)
Prior art keywords: node, scale, change, neural network, graph
Legal status: Granted
Application number: CN202311766572.6A
Original language: Chinese (zh)
Other versions: CN117437234B (granted publication)
Inventors: 袁俊江, 黄涛, 曾昱富, 廖园, 朱航
Current and original assignee: Sichuan Yunshi Information Technology Co., Ltd.
Application filed by Sichuan Yunshi Information Technology Co., Ltd.; priority to CN202311766572.6A
Publication of CN117437234A; application granted; publication of CN117437234B
Legal status: Active

Classifications

    • G06T 7/0002 - Image analysis; inspection of images, e.g. flaw detection
    • G06N 3/0464 - Neural network architectures; convolutional networks [CNN, ConvNet]
    • G06N 3/08 - Neural networks; learning methods
    • G06V 10/764 - Image or video recognition using machine-learning classification, e.g. of video objects
    • G06V 10/806 - Fusion of extracted features at the sensor, preprocessing, feature-extraction or classification level
    • G06V 10/82 - Image or video recognition using neural networks
    • G06V 20/17 - Terrestrial scenes taken from planes or by drones
    • G06T 2207/20081 - Indexing scheme for image analysis: training; learning
    • G06T 2207/20084 - Indexing scheme for image analysis: artificial neural networks [ANN]


Abstract

The invention discloses a graph neural network-based method for classifying ground features and detecting changes in aerial photographs, in the technical field of neural networks. The method comprises the following steps. Step 1: construct a graph, defining each pixel region or image block in the aerial image as a node. Step 2: extract spatial features. Step 3: compute the Euclidean distance between two node features at the same scale. Step 4: apply a higher-order graph convolution to each node to aggregate information from the node and its neighbors and update the node's features. Step 5: compute dynamic weights between each node and its neighbors through an attention mechanism. Step 6: process each node's new feature representation through a hierarchical graph neural network. Step 7: obtain a comprehensive change graph. Step 8: learn each node's change feature vector at every scale. Step 9: aggregate the change feature vectors across the scales. The method effectively improves the accuracy and robustness of ground feature classification.

Description

Aerial photo ground object classification and change detection method based on graph neural network
Technical Field
The invention relates to the technical field of neural networks, in particular to an aerial photo ground feature classification and change detection method based on a graph neural network.
Background
In the field of remote sensing, aerial image analysis is an important research direction; in particular, the classification and change detection of ground features matter for applications such as geographic information systems, environmental monitoring, and urban planning. Traditional ground-object classification methods rely on manual feature extraction and classical machine learning techniques, which often require extensive manual intervention and adapt poorly to complex surface coverage and dynamically changing environments. In recent years, deep learning techniques, particularly convolutional neural networks (CNNs), have made significant progress in static image classification thanks to their powerful automatic feature extraction. However, for time-series aerial images, these models do not make efficient use of spatio-temporal information and therefore perform poorly in change detection tasks.
With the development of graph neural networks (GNNs), more and more studies have attempted to apply them to image classification and change detection. GNNs can process non-Euclidean data and account for the spatial relationships between pixels, which in principle makes them well suited to analyzing aerial images. However, existing GNN methods face several challenges in classifying aerial ground features and detecting changes:
Insufficient handling of dynamics: aerial imagery typically involves time-series data, and the state of ground features can change across time points. Most existing GNN models are designed for static image processing and lack structures sensitive to temporal variation.
Integration of multi-scale spatial features: aerial images contain rich spatial information, and features at different scales contribute differently to ground-object classification. Existing techniques often fail to integrate these multi-scale features effectively, limiting classification accuracy.
Coarseness of change detection: in change detection it is not enough to identify where changes occur; the type and extent of the change and its environmental impact matter as well, and existing methods remain immature in this respect.
Disclosure of Invention
The invention aims to provide a graph neural network-based method for aerial photo feature classification and change detection that improves the detail recognition capability of feature classification by integrating graph neural network outputs from different levels. In addition, a fine-grained change detection algorithm identifies the type and degree of change, providing a foundation for subsequent environmental impact assessment.
To solve the technical problems above, the invention provides an aerial photo ground feature classification and change detection method based on a graph neural network, comprising the following steps:
step 1: constructing a graph, and defining a pixel area or an image block in the aerial image as a node in the graph; defining edges between nodes based on spatial proximity; each node is described by its multiscale features in the aerial image;
step 2: for each node, first processing its corresponding pixel or predefined region using a convolutional neural network for the current scale to extract spatial features; then, processing the feature vector under the previous scale by using the deep neural network to obtain abstract features; finally, splicing the space features and the abstract features to obtain node features under the current scale;
step 3: calculating Euclidean distance of two node features under the same scale, and converting the Euclidean distance by a Gaussian function to obtain a similarity weight based on feature similarity; the similarity weight is adjusted by the relation weight among the node types, and the relation weight is determined according to priori knowledge among different ground object types;
step 4: applying a higher-order graph convolution process to each node to aggregate information from each node and its neighbor nodes and update node characteristics of the node;
Step 5: calculating dynamic weights between each node and neighbor nodes thereof through an attention mechanism, and weighting node characteristics of the neighbor nodes by using the dynamic weights to obtain weighted characteristics; the weighted features are aggregated through a dynamic aggregation function to form a new node feature representation;
step 6: processing the new node characteristic representation of each node through the hierarchical graph neural network and transmitting the new node characteristic representation to the classification layer; the classification layer uses the weight and the bias parameter to generate a classification score of the node, and finally the classification score is converted into a classification probability distribution through a softmax function;
step 7: firstly, calculating the difference of the graphs between two time points so as to establish a change graph; the differences are defined by differences in attributes of nodes or edges of the graph; integrating the change maps by using a fusion function to obtain a comprehensive change map;
step 8: the convolutional graph neural network receives the comprehensive change graph as input, and learns the change feature vectors of the nodes under each scale on the basis of the input;
step 9: summarizing the change feature vectors under a plurality of scales, calculating the weighted sum of each scale, and finally obtaining the change significance score of each node through an activation function; weighting and nonlinear adjustment are carried out on the change significance scores under each scale, and a decision boundary is smoothed through a nonlinear function; this process will produce a score that will determine that a node has changed if it exceeds a set global threshold.
Further, the graph constructed in step 1 is $G = (V, E, X, T)$, where $V$ is the node set, $E$ is the set of edges, $X$ is the multidimensional tensor of node features, and $T$ is the set of node types. In step 2, the spatial and abstract features are spliced into the node feature at the current scale using:

$$h_i^{(s)} = \mathrm{CNN}^{(s)}(R_i) \,\Vert\, \mathrm{DNN}^{(s)}\big(h_i^{(s-1)}\big)$$

where $h_i^{(s)}$ is the node feature of node $i$ at scale $s$; $\mathrm{CNN}^{(s)}$ is the convolutional neural network at scale $s$; $\mathrm{DNN}^{(s)}$ is the deep neural network at scale $s$; $\Vert$ denotes the splicing (concatenation) operation; $h_i^{(s-1)}$ is the node feature of node $i$ at the previous scale; and $R_i$ is the pixel region or image block corresponding to node $i$.
Further, in step 3, the similarity weight based on feature similarity is calculated using:

$$w_{ij}^{(s)} = r_{t_i t_j} \cdot \exp\!\left(-\frac{\big\|h_i^{(s)} - h_j^{(s)}\big\|^2}{2\sigma^2}\right)$$

where $w_{ij}^{(s)}$ is the weight of the edge between nodes $i$ and $j$ at scale $s$; $r_{t_i t_j}$ is the relation weight between node types $t_i$ and $t_j$; $\sigma$ is the standard deviation of the Gaussian function, controlling how fast the weight changes with distance; $\|h_i^{(s)} - h_j^{(s)}\|^2$ is the squared Euclidean distance between the node features of nodes $i$ and $j$; and $\exp$ is the natural exponential function.
Further, step 4 specifically includes: for each node $i$, compute the weighted sum of the node features of all its $k$-order neighbor nodes, where the $k$-order neighbors are obtained by iterating the neighbor relation $k$ times. The information of these neighbors is weighted and aggregated, with the edge weight raised to the $k$-th power and normalized by the constant $C_{i,k}$; the weighted node features are transformed by an order-dependent weight matrix, and the next layer's node representation is finally obtained through a nonlinear activation function. The node features are updated using:

$$h_i^{(l+1)} = \sigma\!\left(\sum_{k=1}^{K} \sum_{j \in N_k(i)} \frac{(w_{ij})^{k}}{C_{i,k}} \, W_k^{(l)} h_j^{(l)}\right)$$

where $K$ is the highest order; $N_k(i)$ is the set of $k$-order neighbor nodes of node $i$; $W_k^{(l)}$ is the weight matrix of order $k$ at layer $l$; $h_i^{(l+1)}$ is the updated node feature of node $i$ at layer $l+1$; and $h_j^{(l)}$ is the node feature of neighbor node $j$ at layer $l$.
Further, in step 5, the new node feature representation is obtained using:

$$h_i^{(l+1)} = \mathrm{AGG}^{(l)}\!\left(\Big\{\alpha_{ij}^{(l)} \, h_j^{(l)} : j \in N(i)\Big\}\right)$$

where $h_i^{(l+1)}$ is the new feature representation of node $i$ at layer $l+1$; $\mathrm{AGG}^{(l)}$ is the dynamic aggregation function of layer $l$, responsible for aggregating the weighted input features; $\alpha_{ij}^{(l)}$ is the dynamic weight of node $i$ for its neighbor node $j$ at layer $l$, determined by an attention mechanism; and $h_j^{(l)}$ is the updated feature of neighbor node $j$ at layer $l$.
Further, in step 6, the classification probability distribution is obtained using:

$$p_i^{(s)} = \mathrm{softmax}\!\left(W_c^{(s)} \cdot \mathrm{HGNN}\big(h_i^{(L)}\big) + b_c^{(s)}\right)$$

where $p_i^{(s)}$ is the classification probability distribution of node $i$ at scale $s$; $W_c^{(s)}$ is the weight matrix of the classification layer at scale $s$; $h_i^{(L)}$ is the new feature representation of node $i$ at the last layer $L$; $\mathrm{HGNN}$ is the hierarchical graph neural network; and $b_c^{(s)}$ is the bias of the classification layer at scale $s$.
Further, the comprehensive change graph in step 7 is calculated using:

$$G_{\Delta} = \mathcal{F}\big(G_t,\, G_{t+\Delta t}\big)$$

where $G_{\Delta}$ is the comprehensive change graph; $\mathcal{F}$ is the fusion function; $G_t$ is the graph at time point $t$; $G_{t+\Delta t}$ is the graph at time point $t+\Delta t$; and $\Delta t$ is the time interval.
Further, in step 8, the change feature vector is calculated using:

$$c_i^{(s)} = \mathrm{ConvGNN}^{(s)}\big(x_i^{\Delta}\big)$$

where $c_i^{(s)}$ is the change feature vector of node $i$ at scale $s$; $\mathrm{ConvGNN}^{(s)}$ is the convolutional graph neural network at scale $s$, responsible for learning the change feature vector; and $x_i^{\Delta}$ is the feature representation of node $i$ in the comprehensive change graph $G_{\Delta}$.
Further, in step 9, the change significance score is calculated using:

$$S_i = \phi\!\left(\sum_{s=1}^{S} \beta^{(s)} \, c_i^{(s)}\right)$$

where $S_i$ is the change significance score of node $i$; $\phi$ is the activation function (a sigmoid), converting the weighted feature sum into a change probability score; $\beta^{(s)}$ is the weight at scale $s$, indicating how much the scale-$s$ features contribute to the change significance score; and $S$ is the total number of scales.
Further, in step 9, the decision boundary is smoothed using:

$$\delta_i = \begin{cases} 1, & g\!\left(\displaystyle\sum_{s=1}^{S} \gamma^{(s)} \, \eta\big(S_i^{(s)};\, \lambda^{(s)}\big)\right) > \theta \\ 0, & \text{otherwise} \end{cases}$$

where $\delta_i$ is the binary indicator of whether node $i$ has changed (1 for changed, 0 for unchanged); $g$ is a nonlinear function used to smooth the decision boundary and map the weighted sum to a decision variable; $\gamma^{(s)}$ is the weight factor at scale $s$, representing how much that scale's change significance score contributes to the final decision; $\eta(\cdot;\lambda^{(s)})$ is the nonlinear adjustment of each scale's score, whose intensity is set by the regularization term $\lambda^{(s)}$; and $\theta$ is the global threshold deciding whether the change significance suffices to be marked as a change.
The graph neural network-based aerial photo ground feature classification and change detection method has the following beneficial effects. By constructing a spatial graph model and applying a graph neural network (GNN), the invention fully considers the spatial relationships between ground features and achieves more accurate feature extraction. In conventional pixel-level classification, each pixel is usually considered independently and the relationships among pixels are ignored; GNNs capture these relationships effectively, provide more comprehensive information for classification decisions, and thus improve the accuracy and robustness of classification.

The prior art often fails to capture temporal change information when processing time-series aerial photographs. Through the temporal graph model and the graph convolutional network, the invention learns change characteristics in the time dimension effectively: the model captures and compares ground-feature states at different time points, making change detection more accurate and effectively supporting the monitoring of dynamically changing environments.

By introducing a multi-scale graph neural network structure, the invention simultaneously extracts and fuses ground features at different scales. This design lets the model attend to local detail changes while handling broad, uniform characteristics, achieving better results in both classification and change detection. Compared with traditional change detection, the method not only detects whether a change occurs but can further analyze its type and degree, which is important for understanding the environmental factors and human activities behind the changes and provides more detailed information for decision support.

The graph-based representation also makes the results of classification and change detection more intuitive, helping users understand the basis of the model's judgments. In addition, by adjusting the focus of each layer of the graph neural network, classification and detection strategies can be optimized for different application scenarios, so that the model output better matches actual requirements. Thanks to its strong feature learning capability, the GNN model of the invention also depends less on large amounts of annotated data than conventional methods; for large-scale aerial image data this reduced dependence on labels is particularly valuable, lowering both workload and cost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a method for classifying aerial photo ground features and detecting changes based on a graph neural network according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1: referring to fig. 1, an aerial photo feature classification and change detection method based on a graph neural network, the method comprising:
Step 1: construct the graph, defining each pixel region or image block in the aerial image as a node; define edges between nodes based on spatial proximity; describe each node by its multi-scale features in the aerial image.

In this step, the graph is constructed by defining each pixel region or image block in the aerial image as a node, thereby converting the image data into graph data. The key questions are how to define the nodes and the edges between them. Edges are based on spatial proximity: if two pixel regions are spatially adjacent, they are connected by an edge in the graph. A node is described by its multi-scale features in the aerial image, typically information such as color, texture, and shape extracted from the raw image data.

The core function of step 1 is to convert conventional pixel-based image data into graph data, which can then be processed with graph-theoretic algorithms and graph neural networks. In the constructed graph, the edges between nodes preserve the spatial connectivity between image regions, which is critical for understanding and analyzing image structure. The multi-scale feature representation of nodes lets the model capture image features from coarse to fine granularity, which is particularly important for distinguishing complex ground-object categories. By defining only the key regions as nodes, the amount of data to be processed can be reduced, lowering computational complexity, especially for high-resolution aerial images. Whereas conventional pixel-based methods carry a high computational cost on large-scale or high-resolution imagery, this graph construction both reduces the data volume and lets the model capture and exploit the structural information of the image data, which matters for the efficiency and accuracy of aerial image analysis.
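As an illustration of the graph construction in step 1, the following Python sketch splits an image into square blocks and connects spatially adjacent blocks; the 4-neighbourhood adjacency, the fixed block size, and all function names are illustrative assumptions, since the patent does not prescribe a concrete scheme:

```python
import numpy as np

def build_patch_graph(image: np.ndarray, patch: int = 32):
    """Split an aerial image (H, W, C) into patch nodes and connect
    spatially adjacent patches with edges (4-neighbourhood)."""
    H, W, _ = image.shape
    rows, cols = H // patch, W // patch
    nodes = {}          # node id -> patch pixels
    edges = []          # (id_a, id_b) pairs based on spatial proximity
    nid = lambda r, c: r * cols + c
    for r in range(rows):
        for c in range(cols):
            nodes[nid(r, c)] = image[r*patch:(r+1)*patch, c*patch:(c+1)*patch]
            if r + 1 < rows:
                edges.append((nid(r, c), nid(r + 1, c)))
            if c + 1 < cols:
                edges.append((nid(r, c), nid(r, c + 1)))
    return nodes, edges

# toy usage: a random 128x128 RGB stand-in for an aerial image
nodes, edges = build_patch_graph(np.random.rand(128, 128, 3))
print(len(nodes), "nodes,", len(edges), "edges")   # 16 nodes, 24 edges
```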
Step 2: for each node, first process its corresponding pixels or predefined region with the convolutional neural network for the current scale to extract spatial features; then process the feature vector from the previous scale with a deep neural network to obtain abstract features; finally, splice the spatial and abstract features to obtain the node feature at the current scale.

In this step, a convolutional neural network (CNN) first extracts spatial features for each defined node (representing a pixel region or image block). CNNs can identify spatial patterns such as edges, corners, and textures, which are critical to image understanding. A deep neural network (another CNN or a different deep learning model) then further processes these features, extracting higher-level abstractions. Finally, the features are spliced together to form a composite feature vector for each node at the current scale. The CNN captures the important visual patterns and structures of the node's region, while the deep network learns more abstract representations that help the model capture complex image attributes such as the semantic categories of objects. Concatenating spatial and abstract features combines local visual information with global, more abstract information, providing a more comprehensive node representation for the subsequent classification and change detection tasks.
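The splicing of spatial and abstract features can be sketched as follows; the single random linear layers stand in for the scale-specific CNN and DNN (whose real architectures the patent leaves open), so only the concatenation structure of step 2 is demonstrated:

```python
import numpy as np

rng = np.random.default_rng(0)
REGION_DIM, FEAT_DIM = 32 * 32 * 3, 8
W_cnn = rng.standard_normal((FEAT_DIM, REGION_DIM)) / np.sqrt(REGION_DIM)
W_dnn = rng.standard_normal((FEAT_DIM, 4)) / 2.0

def node_feature(region: np.ndarray, h_prev: np.ndarray) -> np.ndarray:
    """h_i^(s) = CNN^(s)(R_i) || DNN^(s)(h_i^(s-1)).
    The two 'networks' are single random linear+ReLU layers standing in
    for a real CNN and DNN; only the splicing structure is the point."""
    spatial = np.maximum(W_cnn @ region.reshape(-1), 0.0)   # CNN stand-in
    abstract = np.maximum(W_dnn @ h_prev, 0.0)              # DNN stand-in
    return np.concatenate([spatial, abstract])              # splice

h_prev = np.zeros(4)                      # previous-scale feature (coarsest: zeros)
h_cur = node_feature(rng.random((32, 32, 3)), h_prev)
print(h_cur.shape)                        # (16,): 8 spatial + 8 abstract dims
```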
Step 3: compute the Euclidean distance between two node features at the same scale, and convert it with a Gaussian function to obtain a similarity weight based on feature similarity; adjust the similarity weight by the relation weight between node types, determined from prior knowledge about the different ground object types.

In this step, the similarity between nodes at the same scale is computed from the feature vector of each node (image block or pixel region), typically as the Euclidean distance between feature vectors. A Gaussian function (also known as a kernel function) converts these distances into similarity weights: smaller distances become larger weights (high similarity) and larger distances become smaller weights (low similarity). These weights are further adjusted by relation weights between node types, determined from prior knowledge of the different ground object types and reflecting their typical real-world associations. Converting Euclidean distances through the Gaussian function measures inter-node similarity accurately and assigns weights to the edges of the graph structure, which matters for information flow and aggregation in the subsequent graph convolutions. Incorporating prior relational knowledge into the similarity weights directs the model to attend to ground-object types that are typically associated in the real world, improving classification accuracy and the sensitivity of change detection.
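A minimal sketch of the edge-weight computation, assuming a simple dictionary of prior relation weights between ground-object types (the example values are invented for illustration):

```python
import numpy as np

def edge_weight(h_i, h_j, type_i, type_j, relation, sigma=1.0):
    """w_ij = r(t_i, t_j) * exp(-||h_i - h_j||^2 / (2 sigma^2)).
    relation: dict mapping (type_i, type_j) -> prior relation weight."""
    d2 = float(np.sum((h_i - h_j) ** 2))          # squared Euclidean distance
    gauss = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian similarity
    return relation[(type_i, type_j)] * gauss

# toy prior: water and vegetation are often adjacent -> higher weight
relation = {("water", "vegetation"): 1.5, ("vegetation", "water"): 1.5,
            ("water", "building"): 0.5, ("building", "water"): 0.5}
w = edge_weight(np.array([0.1, 0.9]), np.array([0.2, 0.7]),
                "water", "vegetation", relation, sigma=0.5)
print(round(w, 4))
```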
Step 4: apply a higher-order graph convolution to each node to aggregate information from the node and its neighbor nodes and update the node's features.

Step 4 uses a higher-order graph convolution operation whose purpose is to propagate and update node information across the graph. "Higher-order" means that the graph convolution considers not only a node's immediate neighbors but also neighbors of neighbors, and so on, capturing broader context information. Each node updates its features by aggregating information from the other nodes in its neighborhood (a neighborhood aggregation mechanism). Mathematically, this is done by defining a convolution kernel that specifies how neighbor information is integrated into the current node. The higher-order graph convolution lets each node effectively aggregate information from its wide neighborhood, enriching each node's feature representation with context. Conventional image processing is limited to feature extraction at the local or pixel level; by adopting higher-order graph convolution, this step extracts and synthesizes information across the whole graph, increasing the expressive power of individual node features and strengthening the model's ability to capture and understand complex spatial relationships. This is a significant advantage for understanding complex structures and patterns in images, especially those whose identification requires integrating information over large areas.
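One possible realization of the higher-order aggregation uses powers of the weighted adjacency matrix to reach k-order neighbours; this is a hedged sketch, since the patent does not fix how the k-th-power weights and the normalization constants are computed in practice:

```python
import numpy as np

def high_order_gconv(H, A, W_list, K=2):
    """One layer of a higher-order graph convolution in the spirit of step 4:
    h_i' = ReLU( sum_k sum_{j in N_k(i)} (w^k / C_{i,k}) * W_k h_j ).
    H: (n, d) node features; A: (n, n) weighted adjacency;
    W_list: K weight matrices, one per order k."""
    n = A.shape[0]
    out = np.zeros((n, W_list[0].shape[1]))
    Ak = np.eye(n)
    for k in range(1, K + 1):
        Ak = Ak @ A                                  # k-hop weighted paths (w^k terms)
        C = Ak.sum(axis=1, keepdims=True) + 1e-9     # normalization constant C_{i,k}
        out += (Ak / C) @ H @ W_list[k - 1]          # aggregate k-order neighbours
    return np.maximum(out, 0.0)                      # ReLU nonlinearity

n, d = 5, 4
A = np.random.rand(n, n); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
H = np.random.rand(n, d)
H1 = high_order_gconv(H, A, [np.random.rand(d, d) for _ in range(2)], K=2)
print(H1.shape)   # (5, 4)
```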
Step 5: compute dynamic weights between each node and its neighbor nodes through an attention mechanism, and weight the neighbors' node features with these dynamic weights to obtain weighted features; aggregate the weighted features through a dynamic aggregation function to form a new node feature representation. Step 5 adopts a dynamic attention mechanism, an important innovation in graph neural networks. This mechanism lets the model aggregate features not with fixed weights from all neighbors, but by dynamically assigning different weights according to each neighbor's importance. "Dynamic" here means the weights are not predefined; they are learned by the model from the relationships and features between nodes. In this way, each node attends to its surrounding nodes' information to different degrees, depending on their relative importance.
In particular, the attention mechanism dynamically assigns weights by calculating a relationship score between nodes. The features of neighboring nodes are weighted with these weights to facilitate aggregation. These weighted features are then integrated by an aggregation function (e.g., summation, averaging, etc.) to form a new feature representation of the node. The attention mechanism allows each node to individually aggregate information according to the importance of its neighbors, which is more efficient than simple averaging or summing. Due to the introduction of the attention mechanism, the model can better distinguish the contribution of different nodes in the neighborhood to the current node, so that the distinguishing capability of the model is improved, and particularly under the condition of complex internal structure of the neighborhood.
Compared with conventional neighborhood aggregation, the dynamic attention mechanism gives the graph neural network greater flexibility and adaptability. In aerial photo feature classification and change detection, certain neighboring nodes provide more valuable information than others: for example, a building node in an aerial photograph should attend more to adjacent road nodes than to distant water nodes. The dynamic attention mechanism enables the network to learn such relationships automatically and optimize feature aggregation accordingly, so that the influence of the surrounding environment is considered more accurately during classification and change detection. This mechanism improves the network's understanding of complex ground-object relationships in aerial photographs and is an innovative way of handling spatial data in graph neural networks.
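The dynamic weighting described here matches the attention formulation given in example 5 below; a single-head sketch, with all parameter shapes chosen purely for illustration, might look like this:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_aggregate(H, neighbors, W, a):
    """GAT-style dynamic aggregation (step 5): score each neighbour j of node i
    with LeakyReLU(a^T [W h_i || W h_j]), softmax the scores into dynamic
    weights alpha_ij, then sum the weighted neighbour features."""
    Z = H @ W.T                                             # transformed features W h
    out = np.zeros_like(Z)
    for i, nbrs in neighbors.items():
        scores = np.array([np.concatenate([Z[i], Z[j]]) @ a for j in nbrs])
        scores = np.where(scores > 0, scores, 0.2 * scores)  # LeakyReLU
        alpha = softmax(scores)                             # dynamic attention weights
        out[i] = sum(al * Z[j] for al, j in zip(alpha, nbrs))
    return out

H = np.random.rand(4, 3)          # 4 nodes, 3-dim features
W = np.random.rand(3, 3)          # transformation matrix
a = np.random.rand(6)             # attention vector over concatenated pairs
neighbors = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(attention_aggregate(H, neighbors, W, a).shape)   # (4, 3)
```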
Step 6: processing the new node characteristic representation of each node through the hierarchical graph neural network and transmitting the new node characteristic representation to the classification layer; the classification layer uses the weight and the bias parameter to generate a classification score of the node, and finally the classification score is converted into a classification probability distribution through a softmax function;
in step 6, a hierarchical graph neural network is employed to further process each node's new feature representation. A hierarchical graph neural network is a graph network structure capable of capturing features at different hierarchy levels. The feature representations are then fed into a classification layer, typically one or more fully connected layers, which uses weights and bias parameters to generate a classification score for each node. These scores are finally converted into a classification probability distribution by a softmax function, a standard logistic-style normalization that turns any real-valued vector into a probability distribution.
Through its multi-layer structure, the hierarchical network learns abstract features from low to high levels. The classification layer converts the extracted features into classification decisions, assigning each node to a category, and the softmax output can be interpreted as the confidence that the node belongs to each category. In the ground-object classification of aerial images, spatially adjacent pixels are often correlated, a spatial dependence that traditional pixel-level methods tend to ignore; the hierarchical graph neural network captures such spatial information effectively and extracts more complex spatial features through its hierarchy.
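A sketch of the classification layer of step 6; the hierarchical GNN that produces the input features is abstracted away as a random feature matrix, and the three-class label set is an invented example:

```python
import numpy as np

def classify_nodes(H_final, W_c, b_c):
    """Step 6 classification layer: scores = W_c h + b_c, then a softmax
    turns each node's scores into a class probability distribution."""
    scores = H_final @ W_c.T + b_c                            # (n, num_classes)
    e = np.exp(scores - scores.max(axis=1, keepdims=True))    # stable softmax
    return e / e.sum(axis=1, keepdims=True)

H_final = np.random.rand(5, 8)    # stand-in for hierarchical-GNN node features
num_classes = 3                   # e.g. building / vegetation / water (illustrative)
probs = classify_nodes(H_final, np.random.rand(num_classes, 8), np.zeros(num_classes))
print(probs.sum(axis=1))          # each row sums to 1
print(probs.argmax(axis=1))       # predicted class per node
```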
Step 7: first compute the difference between the graphs at two time points to build a change graph, where differences are defined over the attributes of the graphs' nodes or edges; then integrate the change graphs with a fusion function to obtain a comprehensive change graph. By constructing the change graph, the method fuses information from different time points and describes ground-object changes through the graph structure. This captures not only significant changes but also finer changes that conventional pixel-level comparisons ignore.
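A sketch of the change-graph construction, with element-wise differences over node attributes and adjacency and a simple average as the fusion function; the patent leaves the concrete form of the fusion function open, so these choices are assumptions:

```python
import numpy as np

def change_graph(feats_t1, feats_t2, adj_t1, adj_t2):
    """Step 7 sketch: node-wise and edge-wise differences between the graphs
    at two time points define the change graph."""
    node_delta = feats_t2 - feats_t1      # attribute change per node
    edge_delta = adj_t2 - adj_t1          # edge created (>0) / removed (<0)
    return node_delta, edge_delta

def fuse(change_maps):
    """Fusion function F: here a simple average over per-interval change maps."""
    return sum(change_maps) / len(change_maps)

f1, f2, f3 = (np.random.rand(4, 2) for _ in range(3))   # node features at t1, t2, t3
a = np.zeros((4, 4))                                     # toy static adjacency
d12, _ = change_graph(f1, f2, a, a)
d23, _ = change_graph(f2, f3, a, a)
print(fuse([d12, d23]).shape)    # (4, 2): comprehensive node-change map
```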
Step 8: the convolutional graph neural network receives the comprehensive change graph as input and, on that basis, learns each node's change feature vector at every scale. Step 8 uses a convolutional graph neural network (ConvGNN), a deep learning model designed specifically for graph data. Like a conventional convolutional neural network (CNN), a ConvGNN captures local structural information, but it is adapted to the structure of a graph, where data points (the nodes) can have any number of neighbors, unlike a regular grid of pixels.
Here, convGNN receives the built multi-time-sequence change detection graphAs input and for each node on the graph +.>Learning a vector representing the change characteristics +.>。/>Representing that the network operates on a particular scale or hierarchy, because in a graph network, each convolution operation involves neighbors of neighbors, and so on, the range of information captured increases exponentially with the number of hierarchies.
Step 9: summarize the change feature vectors across multiple scales, compute the weighted sum over the scales, and obtain each node's change significance score through an activation function; apply weighting and nonlinear adjustment to the change significance scores at each scale, and smooth the decision boundary through a nonlinear function; this process produces a score, and a node is determined to have changed if the score exceeds a set global threshold. Step 9 synthesizes the multi-scale change feature vectors based on the assumption that observing a target at multiple scales improves the accuracy of change detection, because different scales reveal different change characteristics. In this step, the change feature vectors $c_i^{(s)}$ of each node are summarized, with the weights $\beta^{(s)}$ accounting for the importance of each scale. This increases the flexibility of the model, because it can distinguish changes that are significant at multiple scales from those significant at only a single scale.
An activation function $\phi$ (usually a sigmoid) converts the weighted sum into a change significance score $S_i$ between 0 and 1.
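Putting step 9 together, a sketch of the scoring and thresholding; tanh is assumed for both the per-scale nonlinear adjustment and the smoothing function g, neither of which the patent specifies:

```python
import numpy as np

def change_decision(c_scales, beta, gamma, lam, theta):
    """Step 9 sketch: a weighted multi-scale sum passed through a sigmoid gives
    the change significance score; per-scale scores are then nonlinearly
    adjusted (tanh assumed), weighted, smoothed, and compared to theta."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    # overall significance score: S_i = sigmoid(sum_s beta^(s) * c_i^(s))
    S = sigmoid(sum(b * c for b, c in zip(beta, c_scales)))
    # per-scale scores, adjusted by lambda^(s) and weighted by gamma^(s)
    S_scale = [sigmoid(b * c) for b, c in zip(beta, c_scales)]
    smoothed = np.tanh(sum(g * np.tanh(l * s)
                           for g, l, s in zip(gamma, lam, S_scale)))
    return S, (smoothed > theta).astype(int)     # binary change decision delta_i

c_scales = [np.random.randn(5) for _ in range(3)]   # 3 scales, 5 nodes
S, delta = change_decision(c_scales, beta=[0.5, 0.3, 0.2],
                           gamma=[0.4, 0.4, 0.2], lam=[1.0, 1.0, 1.0], theta=0.4)
print(S.round(2), delta)    # per-node score in (0, 1) and binary change flag
```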
Example 2: the graph constructed in step 1 is $G = (V, E, X, T)$, where $V$ is the node set, $E$ is the set of edges, $X$ is the multidimensional tensor of node features, and $T$ is the set of node types. In step 2, the spatial and abstract features are spliced into the node feature at the current scale using:

$$h_i^{(s)} = \mathrm{CNN}^{(s)}(R_i) \,\Vert\, \mathrm{DNN}^{(s)}\big(h_i^{(s-1)}\big)$$

where $h_i^{(s)}$ is the node feature of node $i$ at scale $s$; $\mathrm{CNN}^{(s)}$ is the convolutional neural network at scale $s$; $\mathrm{DNN}^{(s)}$ is the deep neural network at scale $s$; $\Vert$ denotes the splicing (concatenation) operation; and $h_i^{(s-1)}$ is the node feature of node $i$ at the previous scale.
Specifically, convolutional neural network (CNN): CNNs are specialized for image data. They capture the spatial hierarchy of an image, extracting local features through convolutional layers and reducing the spatial size of the features through sub-sampling layers (such as pooling), retaining the important information while lowering computational complexity. Deep neural network (DNN): DNNs handle more abstract data, learning high-level features through deep network structures. In the formula, $\mathrm{CNN}^{(s)}$ denotes the convolutional network operating at scale $s$, which processes node $i$'s region $R_i$ and outputs the spatial features at that scale; $\mathrm{DNN}^{(s)}$ is the deep neural network used at the same scale, which processes the node feature $h_i^{(s-1)}$ from the previous scale $s-1$. The two kinds of features are combined by the concatenation operation $\Vert$, forming the richer representation $h_i^{(s)}$.
The combination of spatial features and abstract features allows the model to capture higher level data abstractions while preserving the original spatial information. The features generated by the different networks are combined by a stitching operation to take advantage of the advantages of CNN in processing image space features and the advantages of DNN in processing non-space abstract features.
Conventional image processing methods often rely on feature extraction on a single type of feature or on a single scale. In this embodiment, the features of CNN and DNN are combined, so that not only the sensitivity of CNN to spatial features is utilized, but also abstract understanding of DNN to non-spatial structures in data is included. The combination mode provides richer and comprehensive representation for the multi-scale features of the image, and improves the accuracy and the robustness of the classification and the change detection of the ground features.
Example 3: in step 3, the similarity weight based on feature similarity is calculated using:

$$w_{ij}^{(s)} = r_{t_i t_j} \cdot \exp\!\left(-\frac{\big\|h_i^{(s)} - h_j^{(s)}\big\|^2}{2\sigma^2}\right)$$

where $w_{ij}^{(s)}$ is the weight of the edge between nodes $i$ and $j$ at scale $s$; $r_{t_i t_j}$ is the relation weight between node types $t_i$ and $t_j$; $\sigma$ is the standard deviation of the Gaussian function, controlling how fast the weight changes with distance; $\|h_i^{(s)} - h_j^{(s)}\|^2$ is the squared Euclidean distance between the node features of nodes $i$ and $j$; and $\exp$ is the natural exponential function.
Specifically, the relation weights between node types can be defined from domain expert knowledge. For example, in a land-cover scenario, the adjacency of water bodies and vegetation areas is given a high weight, since the two types are often adjacent in natural environments. The feature similarity between nodes is measured with the Euclidean distance, the straight-line distance between two points, used here to compute the difference between feature vectors. A Gaussian kernel converts the Euclidean distance into a similarity weight reflecting how close two nodes are in feature space; the shape of the Gaussian is controlled by the standard deviation $\sigma$, which sets how fast the weight decays as distance grows. $r_{t_i t_j}$ is a weight adjustment factor based on the node types, accounting for the inherent associations between different types, since some ground-object types are typically adjacent or related.
The connection strength between nodes in the graph is allowed to reflect the similarity of the actual features. And a priori knowledge is introduced through the node type relation weight, so that the connection structure of the graph is improved, and the actual relation among the ground object types is better reflected. Conventional image processing and graph analysis often rely solely on feature similarity to construct graphs. Here, not only feature similarity is used, but also relationships between node types are combined, which provides a more detailed and accurate basis for establishing a connection of the graph. Such a method may significantly improve the performance of subsequent classification and change detection tasks, especially in scenarios with complex terrain-type interactions. According to the calculation method, by combining the feature similarity and the node type relation weight, not only is the difference of the features among the nodes considered, but also the model is allowed to adjust the weight according to the prior relation among different ground object types. This may improve the quality of the representation of the graph, thereby helping to improve the accuracy of classification and change detection of subsequent steps.
Example 4: step 4 specifically includes: for each node $i$, compute the weighted sum of the node features of all its $k$-order neighbor nodes, where the $k$-order neighbors are obtained by iterating the neighbor relation $k$ times. The information of these neighbors is weighted and aggregated, with the edge weight raised to the $k$-th power and normalized by the constant $C_{i,k}$; the weighted node features are transformed by an order-dependent weight matrix, and the next layer's node representation is finally obtained through a nonlinear activation function. The node features are updated using:

$$h_i^{(l+1)} = \sigma\!\left(\sum_{k=1}^{K} \sum_{j \in N_k(i)} \frac{(w_{ij})^{k}}{C_{i,k}} \, W_k^{(l)} h_j^{(l)}\right)$$

where $K$ is the highest order; $N_k(i)$ is the set of $k$-order neighbor nodes of node $i$; $W_k^{(l)}$ is the weight matrix of order $k$ at layer $l$; $h_i^{(l+1)}$ is the updated node feature of node $i$ at layer $l+1$; and $h_j^{(l)}$ is the node feature of neighbor node $j$ at layer $l$.
Specifically, the node feature update $h_i^{(l+1)}$ indicates that after the layer-$l$ graph convolution operation, the feature of node $i$ has been updated. This update process combines the node's neighbor information with the node's own original features, to better represent the node's position and context in the graph.
ReLU activation function: a nonlinear function defined as $\mathrm{ReLU}(x) = \max(0, x)$. It helps alleviate the vanishing-gradient problem and adds nonlinearity, allowing the model to learn more complex feature representations. Neighbor node set $N_k(i)$: contains all $k$-order neighbor nodes connected to node $i$; in graph-structured data, a $k$-order neighbor is a node reachable in $k$ hops. This set lets the model take distant neighbor information into account when updating node features, which helps capture a wider context.
Weighted neighbor features: the feature $h_j^{(l)}$ of node $j$ is multiplied by the $k$-th power of the edge weight $w_{ij}$ and by the weight matrix $W_k^{(l)}$. Raising $w_{ij}$ to the $k$-th power adjusts the contribution of neighbor features: as $k$ increases, the contribution of far neighbors gradually decreases, which helps the model concentrate on the more relevant neighbors.
Normalization constant $C_{i,k}$: a normalization factor ensuring that node $i$'s update is not distorted by an uneven distribution of neighbor counts. Normalization is a common technique in graph convolution to avoid numerical problems such as overly large values or vanishing gradients. Weight matrix $W_k^{(l)}$: a trainable weight matrix for transforming the features of $k$-order neighbors at layer $l$; it enables the model to learn how to adjust node $i$'s features according to $k$-order neighbor information.
The process of step 4 updates the feature representation of the node by weighting the aggregation of neighbor features based on the principle of graph convolution. This approach enables capturing local and high-level structural information of the node. The new features of each node not only reflect its own attributes, but also integrate the information of neighboring nodes, including direct neighbors and more distant neighbors. Such new features incorporating neighbor information help to enhance learning tasks of the graph structure data, such as classification, prediction, clustering, and the like. By the method, deep learning of the image data can be realized, so that strong performance can be provided when complex data in the fields of aerial images, social network analysis, bioinformatics and the like are processed.
Example 5: in step 5, the new node feature representation is obtained using:

$$h_i^{(l+1)} = \mathrm{AGG}^{(l)}\!\left(\Big\{\alpha_{ij}^{(l)} \, h_j^{(l)} : j \in N(i)\Big\}\right)$$

where $h_i^{(l+1)}$ is the new feature representation of node $i$ at layer $l+1$; $\mathrm{AGG}^{(l)}$ is the dynamic aggregation function of layer $l$, responsible for aggregating the weighted input features; $\alpha_{ij}^{(l)}$ is the dynamic weight of node $i$ for its neighbor node $j$ at layer $l$, determined by an attention mechanism; and $h_j^{(l)}$ is the updated feature of neighbor node $j$ at layer $l$.
In particular, the process of step 5 is based on a key machine learning concept—an attention mechanism that allows the model to focus on more important information. In this step, the new feature representation of each node is obtained by aggregating the features of its neighbors, where the feature contribution of each neighbor is different, depending on the dynamic weights calculated by the attention mechanism.
This attention-based feature aggregation approach makes the model more flexible and efficient in processing graph data, as it can capture complex dependencies between nodes. For example, in social network analysis, the model may be more concerned about close friends of the user than distant connections; in protein-protein interaction networks, the model will be more concerned about interactions with key biological functions. Through such dynamic feature aggregation, the model can learn more fine-grained and differentiated node representations, and provides stronger capability for analysis and prediction of graph data.
The new node feature representation $h_i^{(l+1)}$ indicates that after layer $l$, the feature of node $i$ is updated again. Unlike the previous update, this one focuses specifically on aggregating neighbor information through dynamic weights, rather than relying solely on a fixed weight or a fixed neighbor structure.
Dynamic aggregation function $\mathrm{AGG}^{(l)}$: a function designed specifically to aggregate neighbor features, taking into account that different neighbors contribute differently. This dynamism means the model can learn to weight the features of different neighbors differently in different contexts.
Attention weights $\alpha_{ij}^{(l)}$: weights computed by the attention mechanism that determine the importance of each neighbor node during aggregation. In this way, the model does not treat all neighbors' information as equally important, but dynamically adjusts their contributions according to each neighbor node's actual situation.
Updated neighbor features $h_j^{(l)}$: the layer-$l$ feature representations of node $i$'s neighbor nodes; weighted by the attention weights, they are used to update node $i$'s feature representation.
Let $M$ be the number of attention heads, $a_m$ the attention vector parameter, $W^m$ the transformation weight matrix, and $b$ a bias term. The dynamic aggregation function $\mathrm{AGG}^{(l)}$ can be expressed as:

$$\mathrm{AGG}^{(l)} = \Big\Vert_{m=1}^{M} \mathrm{LeakyReLU}\!\left(\sum_{j \in N(i)} \alpha_{ij}^{m} \, W^{m} h_j^{(l)} + b\right)$$

where the attention weight $\alpha_{ij}^{m}$, which determines node $i$'s attention to its neighbor node $j$, is calculated as:

$$\alpha_{ij}^{m} = \frac{\exp\!\Big(\mathrm{LeakyReLU}\big(a_m^{\top}\,[\,W^m h_i \,\Vert\, W^m h_j\,]\big)\Big)}{\sum_{k \in N(i)} \exp\!\Big(\mathrm{LeakyReLU}\big(a_m^{\top}\,[\,W^m h_i \,\Vert\, W^m h_k\,]\big)\Big)}$$

Here $\Vert$ denotes the join (concatenation) operation on vectors, and LeakyReLU is a nonlinear activation function that allows small negative gradients to flow through, improving the model's nonlinear characteristics and avoiding the vanishing-gradient problem. The weight matrices $W^m$ transform the features so that each head learns a different feature representation. The multi-head attention mechanism lets the model capture information in different representation subspaces; the nonlinear activation increases model complexity and expressive power; and the feature representations output by the heads are joined by the concatenation operation, fusing information from the different attention heads.
Example 6: in step 6, the classification probability distribution is obtained using:

$$p_i^{(s)} = \mathrm{softmax}\!\left(W_c^{(s)} \cdot \mathrm{HGNN}\big(h_i^{(L)}\big) + b_c^{(s)}\right)$$

where $p_i^{(s)}$ is the classification probability distribution of node $i$ at scale $s$; $W_c^{(s)}$ is the weight matrix of the classification layer at scale $s$; $h_i^{(L)}$ is the new feature representation of node $i$ at the last layer $L$; $\mathrm{HGNN}$ is the hierarchical graph neural network; and $b_c^{(s)}$ is the bias of the classification layer at scale $s$.
Specifically, graph neural network (GNN) feature extraction: in this step, the graph neural network processes the graph data, and the aggregation and transformation operations of the previous layers (as described in the preceding steps) generate a complex feature representation for each node, denoted $h_i^{(L)}$.
Hierarchical feature extraction (H-GNN): this is a specially designed graph neural network structure that is capable of capturing graph structural features at different scales. For example, it may consider local neighbor features and more macroscopic community structure features.
Weight matrix and bias vector: the classification weight matrix $W_c^{(s)}$ and bias vector $b_c^{(s)}$ convert the features extracted by the graph neural network into a prediction vector, each element of which corresponds to a class score.
Softmax function: finally, these scores are converted into probability distributions by applying a softmax function. The Softmax function is able to convert any real vector into an effective probability distribution, where the probability of each class is non-negative and the sum of all probabilities is 1.
Feature conversion: the weight matrix $W_c^{(s)}$ transforms the extracted node features into a space suitable for classification.
Probability mapping: the softmax function maps each node's classification scores to a probability distribution, so that the category with the highest probability can be selected as the node's predicted label.
Scale-aware classification: the model is allowed to classify nodes at different scales (resolution or granularity) so that the model can be adapted to the multi-scale data analysis requirements.
First layer (local feature extraction):

$$h_i^{(1)} = \sigma\!\left(W^{(1)} \sum_{j \in N(i)} h_j^{(0)}\right)$$

Here $W^{(1)}$ is the weight matrix of the layer, $\sigma$ is a nonlinear activation function (e.g. ReLU), $N(i)$ is the neighbor node set of node $i$, and $h_j^{(0)}$ is the initial feature of node $j$.
Second layer (intermediate-level feature extraction):

$$h_i^{(2)} = \sigma\!\left(W^{(2)} \sum_{j \in N_2(i)} h_j^{(1)}\right)$$

At this layer the same graph neural network operation is applied again, but the neighborhood $N_2(i)$ may be enlarged to include more distant nodes such as second-order neighbors.
$L$-th layer (global feature extraction):

$$h_i^{(L)} = \sigma\!\left(W^{(L)} \sum_{j \in N_L(i)} h_j^{(L-1)}\right)$$

This layer can incorporate global information of the graph, obtained by further enlarging the neighborhood range or through a specific global pooling operation.
Example 7: the comprehensive change graph in step 7 is calculated using:

$$G_{\Delta} = \mathcal{F}\big(G_t,\, G_{t+\Delta t}\big)$$

where $G_{\Delta}$ is the comprehensive change graph; $\mathcal{F}$ is the fusion function; $G_t$ is the graph at time point $t$; $G_{t+\Delta t}$ is the graph at time point $t+\Delta t$; and $\Delta t$ is the time interval.
Specifically, example 7 describes how a comprehensive change graph is created by computing the amount of change between the graphs at different times, so as to capture the dynamic evolution of the graph structure over time. It provides a method to quantify the changes in the graph between time points and integrate them into a single representation, the comprehensive change graph, which can be used to further analyze the evolution of the graph over time or as input features for other machine-learning tasks.

$G_t$ denotes the graph at time point $t$, including its nodes, edges, and related features; $G_{t+\Delta t}$ denotes the graph at time point $t+\Delta t$, where $\Delta t$ is the time interval. Computing the difference between $G_{t+\Delta t}$ and $G_t$ yields the changes between the two time points: increases or decreases of nodes, creation or removal of edges, and changes in node or edge attributes. The fusion function $\mathcal{F}$ is the operation that synthesizes the change amounts at multiple time points into the comprehensive change graph $G_{\Delta}$; its concrete form depends on the type of change and the information required, and it may be a simple averaging operation or a more complex machine-learning model, as described above.
Example 8: in step 8, the change feature vector is calculated using:

$$c_i^{(s)} = \mathrm{ConvGNN}^{(s)}\big(x_i^{\Delta}\big)$$

where $c_i^{(s)}$ is the change feature vector of node $i$ at scale $s$; $\mathrm{ConvGNN}^{(s)}$ is the convolutional graph neural network at scale $s$, responsible for learning the change feature vector; and $x_i^{\Delta}$ is the feature representation of node $i$ in the comprehensive change graph $G_{\Delta}$.
Example 9: in step 9, the change significance score is calculated using:

$$S_i = \phi\!\left(\sum_{s=1}^{S} \beta^{(s)} \, c_i^{(s)}\right)$$

where $S_i$ is the change significance score of node $i$; $\phi$ is the activation function (a sigmoid), converting the weighted feature sum into a change probability score; $\beta^{(s)}$ is the weight at scale $s$, indicating how much the scale-$s$ features contribute to the change significance score; and $S$ is the total number of scales.
Specifically, to obtain the change significance score, the weighted sum of the change feature vectors $c_i^{(s)}$ over all scales is first calculated, with each scale $s$ weighted by $\beta^{(s)}$. The weighted sum is then processed by the sigmoid $\phi$, converting it into a score between 0 and 1, which can be interpreted as a change probability score.
By combining features of different scales, the variation of multiple layers can be comprehensively considered, and a more comprehensive variation significance score can be obtained. The Sigmoid function ensures that the score is a legal probability value, which is easy to understand and apply.
Example 10: in step 9, the decision boundary is smoothed using:

$$\delta_i = \begin{cases} 1, & g\!\left(\displaystyle\sum_{s=1}^{S} \gamma^{(s)} \, \eta\big(S_i^{(s)};\, \lambda^{(s)}\big)\right) > \theta \\ 0, & \text{otherwise} \end{cases}$$

where $\delta_i$ is the binary indicator of whether node $i$ has changed (1 for changed, 0 for unchanged); $g$ is a nonlinear function used to smooth the decision boundary and map the weighted sum to a decision variable; $\gamma^{(s)}$ is the weight factor at scale $s$, representing how much that scale's change significance score contributes to the final decision; $\eta(\cdot;\lambda^{(s)})$ is the nonlinear adjustment of each scale's score, whose intensity is set by the regularization term $\lambda^{(s)}$; and $\theta$ is the global threshold deciding whether the change significance suffices to be marked as a change.
Specifically, the per-scale change significance scores $S_i^{(s)}$ are further processed by a weighted nonlinear combination, with the regularization factor $\lambda^{(s)}$ adjusting the nonlinear effect of each scale's score. The function $g$ smooths the decision boundary by mapping the weighted sum onto a new value, which helps reduce noise and overfitting in the decision process. The global threshold $\theta$ finally decides whether the node has undergone a significant change: if the value processed by the nonlinear function exceeds $\theta$, a change is determined to have occurred. By introducing regularization and a nonlinear mapping, the method smooths the decision process and improves the accuracy of change identification, while the global threshold provides a simple and effective way to control decision sensitivity.
$\varphi$ is the nonlinear function that smooths the decision boundary and maps the weighted sum to the decision variable. One possible expression follows.
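Since only the qualitative role of $\varphi$ is specified, a scaled hyperbolic tangent is assumed here purely as an illustration:

$$\varphi(x) = \tanh(k\,x), \qquad k > 0$$

where $k$ is a steepness parameter: a larger $k$ sharpens the transition around the global threshold $\tau$, while a smaller $k$ yields a smoother, more noise-tolerant decision boundary. Under this assumed form, and reading the regularization term $\lambda_s$ as a per-scale exponent as in the formula above, the whole decision of step 9 can be sketched as follows; the parameter names and default values are illustrative only:

```python
# Sketch of the smoothed change decision of step 9, under the assumed forms
# phi(x) = tanh(k * x) and per-scale scores raised to the power lambda_s.
import numpy as np

def change_decision(scores, beta, lam, k=2.0, tau=0.5):
    """scores: (n_scales, n_nodes) array of per-scale change significance scores
    in (0, 1); beta/lam: per-scale weight factors and regularization exponents."""
    weighted = sum(b * s**l for b, s, l in zip(beta, scores, lam))
    smoothed = np.tanh(k * weighted)          # phi: smooths the decision boundary
    return (smoothed > tau).astype(int)       # 1 = changed, 0 = unchanged

# Usage (illustrative): three scales, five nodes.
# scores = np.random.default_rng(1).random((3, 5))
# decisions = change_decision(scores, beta=[0.5, 0.3, 0.2], lam=[1.0, 1.5, 2.0])
```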
the present invention has been described in detail above. The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present invention and its core ideas. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from the principles of the invention and these modifications and adaptations are intended to be within the scope of the invention as defined in the following claims.

Claims (10)

1. A method for aerial photo ground object classification and change detection based on a graph neural network, characterized by comprising the following steps:
Step 1: constructing a graph, defining pixel areas or image blocks in the aerial image as nodes in the graph; defining edges between nodes based on spatial proximity; each node is described by its multiscale features in the aerial image;
Step 2: for each node, first processing its corresponding pixel area or predefined region using the convolutional neural network for the current scale to extract spatial features; then processing the feature vector at the previous scale using the deep neural network to obtain abstract features; finally concatenating the spatial features and the abstract features to obtain the node features at the current scale;
Step 3: calculating the Euclidean distance between two node features at the same scale, and transforming it with a Gaussian function to obtain a similarity weight based on feature similarity; the similarity weight is adjusted by the relation weight between the node types, the relation weight being determined from prior knowledge about the different ground object types;
Step 4: applying a higher-order graph convolution to each node to aggregate information from the node and its neighbor nodes and to update the node's features;
Step 5: calculating dynamic weights between each node and its neighbor nodes through an attention mechanism, and weighting the node features of the neighbor nodes with the dynamic weights to obtain weighted features; the weighted features are aggregated through a dynamic aggregation function to form a new node feature representation;
Step 6: processing the new node feature representation of each node through the hierarchical graph neural network and passing it to the classification layer; the classification layer uses weight and bias parameters to generate a classification score for the node, which is finally converted into a classification probability distribution through a softmax function;
Step 7: first calculating the difference between the graphs at two time points to establish a change graph, the differences being defined by differences in the attributes of the nodes or edges of the graph; then integrating the change graphs with a fusion function to obtain a comprehensive change graph;
Step 8: the convolutional graph neural network receives the comprehensive change graph as input and, on this basis, learns the change feature vectors of the nodes at each scale;
Step 9: summarizing the change feature vectors at the several scales, computing the weighted sum over the scales, and obtaining the change significance score of each node through an activation function; the change significance scores at the individual scales are weighted and nonlinearly adjusted, and the decision boundary is smoothed by a nonlinear function; this process produces a score, and a node is determined to have changed if its score exceeds a set global threshold.
2. The method for aerial photo ground object classification and change detection based on a graph neural network according to claim 1, characterized in that the graph constructed in step 1 is $G = (V, E, X, T)$, wherein $V$ is the set of nodes, $E$ is the set of edges, $X$ is the multidimensional tensor of node features, and $T$ is the set of node types; in step 2, the spatial features and the abstract features are concatenated using the following formula to obtain the node features at the current scale:

$$h_i^{(s)} = \mathrm{Concat}\big(\mathrm{CNN}^{(s)}(R_i),\ \mathrm{DNN}^{(s)}(h_i^{(s-1)})\big)$$

wherein $h_i^{(s)}$ is the node feature of node $i$ at scale $s$; $\mathrm{CNN}^{(s)}$ is the convolutional neural network at scale $s$; $\mathrm{DNN}^{(s)}$ is the deep neural network at scale $s$; $\mathrm{Concat}$ denotes the concatenation operation; $h_i^{(s-1)}$ is the node feature of node $i$ at scale $s-1$; $R_i$ is the pixel area or image block corresponding to node $i$.
3. The method for aerial photo ground object classification and change detection based on a graph neural network according to claim 2, characterized in that in step 3 the similarity weight based on feature similarity is calculated using the following formula:

$$w_{ij}^{(s)} = r_{t_i t_j} \cdot \exp\!\left(-\frac{d^2\big(h_i^{(s)},\, h_j^{(s)}\big)}{2\sigma^2}\right)$$

wherein $w_{ij}^{(s)}$ is the weight of the edge between nodes $i$ and $j$ at scale $s$; $r_{t_i t_j}$ is the relation weight between node types $t_i$ and $t_j$; $\sigma$ is the standard deviation of the Gaussian function, controlling how quickly the weight decays with distance; $d^2\big(h_i^{(s)}, h_j^{(s)}\big)$ is the squared Euclidean distance between the node features of nodes $i$ and $j$; $\exp$ is the natural exponential function.
4. The method for aerial photo ground object classification and change detection based on a graph neural network according to claim 3, characterized in that step 4 specifically comprises: for each node $i$, calculating a weighted sum of the node features of all its $k$-order neighbor nodes, the $k$-order neighbors being obtained by iterating $k$ times over the neighbors of neighbors; the information of these neighbor nodes is weighted and aggregated, the weight being raised to the $k$-th power and normalized by a normalization constant $c_{i,k}$; the weighted node features are transformed by a weight matrix that depends on the order $k$, and the node representation of the next layer is finally obtained through a nonlinear activation function; the node features are updated using the following formula:

$$h_i^{(l+1)} = \sigma\!\left(\sum_{k=1}^{K} \frac{1}{c_{i,k}} \sum_{j \in \mathcal{N}_k(i)} \big(w_{ij}\big)^{k}\, W^{(l,k)}\, h_j^{(l)}\right)$$

wherein $K$ is the highest order; $\mathcal{N}_k(i)$ is the set of $k$-order neighbor nodes of node $i$; $W^{(l,k)}$ is the weight matrix of order $k$ at layer $l$; $h_i^{(l+1)}$ is the updated node feature of node $i$ at layer $l+1$; $h_j^{(l)}$ is the node feature of neighbor node $j$ of node $i$ at layer $l$; $\sigma(\cdot)$ is the nonlinear activation function.
5. The method for aerial photo ground object classification and change detection based on a graph neural network according to claim 4, characterized in that in step 5 the new node feature representation is obtained using the following formula:

$$\tilde{h}_i^{(l+1)} = \mathrm{AGG}^{(l)}\Big(\big\{\alpha_{ij}^{(l)} \cdot h_j^{(l)} : j \in \mathcal{N}(i)\big\}\Big)$$

wherein $\tilde{h}_i^{(l+1)}$ is the new node feature representation of node $i$ at layer $l+1$; $\mathrm{AGG}^{(l)}$ is the dynamic aggregation function of layer $l$, responsible for aggregating the weighted input features; $\alpha_{ij}^{(l)}$ is the dynamic weight of node $i$ for its neighbor node $j$ at layer $l$, determined by the attention mechanism; $h_j^{(l)}$ is the updated node feature of neighbor node $j$ of node $i$ at layer $l$; $\mathcal{N}(i)$ is the set of neighbor nodes of node $i$.
6. The method for aerial photo ground object classification and change detection based on a graph neural network according to claim 5, characterized in that in step 6 the classification probability distribution is obtained using the following formula:

$$p_i^{(s)} = \mathrm{softmax}\big(W_{\mathrm{cls}}^{(s)} \cdot \mathrm{HGNN}\big(\tilde{h}_i^{(L)}\big) + b_{\mathrm{cls}}^{(s)}\big)$$

wherein $p_i^{(s)}$ is the classification probability distribution of node $i$ at scale $s$; $W_{\mathrm{cls}}^{(s)}$ is the weight matrix of the classification layer at scale $s$; $\tilde{h}_i^{(L)}$ is the new node feature representation of node $i$ at the last layer $L$; $\mathrm{HGNN}$ is the hierarchical graph neural network; $b_{\mathrm{cls}}^{(s)}$ is the bias parameter of the classification layer at scale $s$.
7. The method for aerial photo ground object classification and change detection based on a graph neural network according to claim 6, characterized in that the comprehensive change graph of step 7 is calculated using the following formula:

$$\Delta G = F\big(\{\, G_t - G_{t-\Delta t} \,\}\big)$$

wherein $\Delta G$ is the comprehensive change graph; $F$ is the fusion function, applied over the graph differences at the considered time points; $G_t$ is the graph at time point $t$; $G_{t-\Delta t}$ is the graph at time point $t-\Delta t$; $\Delta t$ is the time interval.
8. The method for aerial photo ground object classification and change detection based on a graph neural network according to claim 7, characterized in that in step 8 the change feature vector is calculated using the following formula:

$$\Delta h_i^{(s)} = \mathrm{GCN}^{(s)}\big(x_i^{\Delta G}\big)$$

wherein $\Delta h_i^{(s)}$ is the change feature vector of node $i$ at scale $s$; $\mathrm{GCN}^{(s)}$ is the convolutional graph neural network at scale $s$, responsible for learning the change feature vector; $x_i^{\Delta G}$ is the feature representation of node $i$ in the comprehensive change graph $\Delta G$.
9. The method for aerial photo ground object classification and change detection based on a graph neural network according to claim 8, characterized in that in step 9 the change significance score is calculated using the following formula:

$$S_i = \sigma\!\left(\sum_{s=1}^{S} w_s \cdot \Delta h_i^{(s)}\right)$$

wherein $S_i$ is the change significance score of node $i$; $\sigma$ is the activation function, a sigmoid function used to convert the sum of weighted features into a change probability score; $w_s$ is the weight at scale $s$, indicating how much the features at scale $s$ contribute to the change significance score; $S$ is the total number of scales.
10. The method for aerial photo ground object classification and change detection based on a graph neural network according to claim 9, characterized in that in step 9 the decision boundary is smoothed using the following formula:

$$c_i = \begin{cases} 1, & \text{if } \varphi\!\left(\displaystyle\sum_{s=1}^{S} \beta_s \cdot \big(S_i^{(s)}\big)^{\lambda_s}\right) > \tau \\ 0, & \text{otherwise} \end{cases}$$

wherein $c_i$ is the binary indicator of whether node $i$ has undergone a change, with 1 representing change and 0 representing no change; $\varphi$ is a nonlinear function used to smooth the decision boundary and map the weighted sum to the decision variable; $\beta_s$ is the weight factor at scale $s$, representing how much that scale's change significance score contributes to the final decision; $\lambda_s$ is a regularization term used to adjust the nonlinear intensity of each scale's score; $\tau$ is the global threshold used to decide whether the change significance is sufficient for the node to be marked as changed.
CN202311766572.6A 2023-12-21 2023-12-21 Aerial photo ground object classification and change detection method based on graph neural network Active CN117437234B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311766572.6A CN117437234B (en) 2023-12-21 2023-12-21 Aerial photo ground object classification and change detection method based on graph neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311766572.6A CN117437234B (en) 2023-12-21 2023-12-21 Aerial photo ground object classification and change detection method based on graph neural network

Publications (2)

Publication Number Publication Date
CN117437234A true CN117437234A (en) 2024-01-23
CN117437234B CN117437234B (en) 2024-02-23

Family

ID=89548337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311766572.6A Active CN117437234B (en) 2023-12-21 2023-12-21 Aerial photo ground object classification and change detection method based on graph neural network

Country Status (1)

Country Link
CN (1) CN117437234B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919910A (en) * 2019-01-25 2019-06-21 合肥工业大学 The SAR image change detection of level set is merged and improved based on disparity map
CN113487088A (en) * 2021-07-06 2021-10-08 哈尔滨工业大学(深圳) Traffic prediction method and device based on dynamic space-time diagram convolution attention model
US20230025826A1 (en) * 2021-07-12 2023-01-26 Servicenow, Inc. Anomaly Detection Using Graph Neural Networks
WO2023087558A1 (en) * 2021-11-22 2023-05-25 重庆邮电大学 Small sample remote sensing image scene classification method based on embedding smoothing graph neural network
CN114723037A (en) * 2022-02-25 2022-07-08 上海理工大学 Heterogeneous graph neural network computing method for aggregating high-order neighbor nodes
CN115618296A (en) * 2022-10-26 2023-01-17 河海大学 Dam monitoring time sequence data anomaly detection method based on graph attention network
CN115760835A (en) * 2022-12-02 2023-03-07 天津工业大学 Medical image classification method of graph convolution network
CN116310826A (en) * 2023-03-20 2023-06-23 中国科学技术大学 High-resolution remote sensing image forest land secondary classification method based on graphic neural network
CN116402509A (en) * 2023-04-13 2023-07-07 东北大学 Ethernet fraud account detection device and method based on graphic neural network
CN117079815A (en) * 2023-08-21 2023-11-17 哈尔滨工业大学 Cardiovascular disease risk prediction model construction method based on graph neural network
CN117253093A (en) * 2023-10-16 2023-12-19 湖州师范学院 Hyperspectral image classification method based on depth features and graph annotation force mechanism

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
XU TANG等: "An Unsupervised Remote Sensing Change Detection Method Based on Multiscale Graph Convolutional Network and Metric Learning", 《IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING》, vol. 60, 1 September 2021 (2021-09-01), pages 1 - 15, XP011899061, DOI: 10.1109/TGRS.2021.3106381 *
LIU NA: "Research and Implementation of Traffic Flow Prediction Algorithms Based on Deep Learning", China Masters' Theses Full-text Database, Engineering Science and Technology II, no. 7, 15 July 2023 (2023-07-15), pages 034-529 *
LI JIAWEI: "Fault Location Method for Distribution Networks Based on Graph Neural Networks", China Masters' Theses Full-text Database, Engineering Science and Technology II, no. 7, 15 July 2023 (2023-07-15), pages 042-49 *
HU HUIFANG et al.: "Research on the Dynamic Response of Inter-well Injection and Production Based on Graph Neural Networks", Petroleum Geology and Recovery Efficiency, vol. 30, no. 4, 31 July 2023 (2023-07-31), pages 130-136 *

Also Published As

Publication number Publication date
CN117437234B (en) 2024-02-23

Similar Documents

Publication Publication Date Title
Zhang et al. VPRS-based regional decision fusion of CNN and MRF classifications for very fine resolution remotely sensed images
Zhao et al. Spectral–spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach
Lei et al. Multiscale superpixel segmentation with deep features for change detection
US7983486B2 (en) Method and apparatus for automatic image categorization using image texture
CN110135354B (en) Change detection method based on live-action three-dimensional model
CN110033007B (en) Pedestrian clothing attribute identification method based on depth attitude estimation and multi-feature fusion
CN110569901A (en) Channel selection-based countermeasure elimination weak supervision target detection method
Liu et al. Remote sensing image change detection based on information transmission and attention mechanism
CN108052966A (en) Remote sensing images scene based on convolutional neural networks automatically extracts and sorting technique
Jiang et al. Hyperspectral image classification with spatial consistence using fully convolutional spatial propagation network
Zhang et al. Road recognition from remote sensing imagery using incremental learning
Huang et al. Research on optimization methods of ELM classification algorithm for hyperspectral remote sensing images
CN113592894B (en) Image segmentation method based on boundary box and co-occurrence feature prediction
Chen et al. Hyperspectral remote sensing image classification based on dense residual three-dimensional convolutional neural network
Guo et al. Using multi-scale and hierarchical deep convolutional features for 3D semantic classification of TLS point clouds
Li et al. A review of deep learning methods for pixel-level crack detection
Hu et al. Scale-sets image classification with hierarchical sample enriching and automatic scale selection
Kim et al. A shape preserving approach for salient object detection using convolutional neural networks
CN115375951A (en) Small sample hyperspectral image classification method based on primitive migration network
CN110135435B (en) Saliency detection method and device based on breadth learning system
Luo et al. Infrared and visible image fusion based on Multi-State contextual hidden Markov Model
CN107610136A (en) Well-marked target detection method based on the sequence of convex closure structure center query point
Shi et al. Improved metric learning with the CNN for very-high-resolution remote sensing image classification
CN111126155A (en) Pedestrian re-identification method for generating confrontation network based on semantic constraint
CN115019163A (en) City factor identification method based on multi-source big data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant