CN118097293A - Small sample data classification method and system based on residual graph convolution network and self-attention

Small sample data classification method and system based on residual graph convolution network and self-attention

Info

Publication number
CN118097293A
Authority
CN
China
Prior art keywords
image
residual
self
network
attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410309314.3A
Other languages
Chinese (zh)
Inventor
李艳玲
司海平
朱正明
李飞涛
林燕
张博翔
赵雨洋
费尔南多.巴桑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Agricultural University
Original Assignee
Henan Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Agricultural University filed Critical Henan Agricultural University
Priority to CN202410309314.3A
Publication of CN118097293A
Legal status: Pending

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of small sample data detection, and provides a small sample data classification method and system based on a residual graph convolution network and self-attention. In the method, a graph convolution neural network is improved based on a residual network and an improved self-attention mechanism to obtain an improved residual graph convolution network model; the high-dimensional image features of the image to be classified are extracted as the input node features of the improved residual graph convolution network model, and the input node features are updated to obtain the output node features of the image to be classified; and the class probability of the image to be classified is determined from the output node features based on the activation function of the last fully-connected layer of the improved residual graph convolution network model. The role of the graph convolution network in small sample data learning is exploited effectively, and the stability of the classification model is improved; local and global features of the image are captured effectively, ensuring accurate recognition by the model against complex backgrounds.

Description

Small sample data classification method and system based on residual graph convolution network and self-attention
Technical Field
The application relates to the technical field of small sample data detection, in particular to a small sample data classification method, a small sample data classification system, a computer readable storage medium and electronic equipment based on a residual graph convolution network and self-attention.
Background
In recent years, with the wide application of graph neural networks in various fields, small sample data learning methods based on graph neural networks have gradually become an important research direction in the field of small sample data learning. For example, small sample data learning can be cast as a supervised message-passing task: a graph neural network is trained end-to-end, the information of labeled samples is transferred to unlabeled query samples, and the feature prediction of unlabeled nodes is completed using the node feature information. As another example, the edge-labeling graph neural network learns to predict edge labels rather than node labels on the graph, iteratively updating the edge labels by directly exploiting intra-class similarity and inter-class dissimilarity, thereby realizing an explicit clustering evolution. As yet another example, a transductive propagation network propagates labels from labeled instances to unlabeled test instances and performs transductive inference over the entire query set to alleviate the problem of insufficient data. In addition, a dual complete graph network has been created, in which the model comprises a point graph and a distribution graph as two complete graphs that model the instance-level representation and the distribution-level representation of each sample respectively, so that label information propagates better in the graph through the similarity distribution between unlabeled and labeled data.
Small sample data classification methods based on graph neural networks generally consist of a feature extraction module and a graph network propagation module. In the feature extraction module, convolutional neural networks are used to extract image features, while in the graph network module, node features and edge features are updated alternately to obtain image label information. However, the stability of the graph convolution network during node updating is relatively low: it attends only to local features within a small range while paying insufficient attention to global information. Especially when the number of samples is small, the computed similarities are prone to bias, which degrades model accuracy and generalization ability.
Thus, there is a need to provide a solution to the above-mentioned deficiencies of the prior art.
Disclosure of Invention
It is an object of the present application to provide a small sample data classification method, system, computer-readable storage medium, and electronic device based on a residual graph convolution network and self-attention, so as to solve or alleviate the above-mentioned problems in the prior art.
In order to achieve the above object, the present application provides the following technical solutions:
The application provides a small sample data classification method based on a residual graph convolution network and self-attention, which comprises the following steps: step S101, improving a graph convolution neural network based on a residual network and an improved self-attention mechanism to obtain an improved residual graph convolution network model; step S102, extracting the high-dimensional image features of an image to be classified as the input node features of the improved residual graph convolution network model, and updating the input node features based on the improved residual graph convolution network model to obtain the output node features of the image to be classified; and step S103, determining the class probability of the image to be classified from the output node features based on an activation function of the last fully-connected layer of the improved residual graph convolution network model.
Preferably, in step S101, based on the residual network, an adjacency matrix update is performed on the graph convolution network to obtain a residual graph convolution network; and the residual graph convolution network is improved based on a self-attention mechanism with an added sliding window, to obtain the improved residual graph convolution network model.
Preferably, the updating of the adjacency matrix of the graph convolution network is specifically: adding a shortcut connection between the input adjacency matrix and the output adjacency matrix of the graph convolution network.
Preferably, step S102 includes: performing a graph convolution operation on the input node features based on the improved residual graph convolution network model so as to update the input node features to obtain first image features; and outputting the first image features to the improved self-attention mechanism, which replaces a convolutional neural network, for an information transfer operation, obtaining the output node features of the image to be classified.
Preferably, outputting the first image features to the improved self-attention mechanism replacing the convolutional neural network for the information transfer operation is specifically: dividing the first image features into non-overlapping image blocks; linearly transforming each non-overlapping image block through a linear embedding layer and mapping it to a high-dimensional feature space to obtain second image features; capturing the relevance between the non-overlapping image blocks in the second image features mapped to the high-dimensional feature space based on a window self-attention calculation module to obtain third image features; and performing a downsampling operation on the third image features, reducing their width and height and increasing their number of channels, to obtain the output node features.
Preferably, step S102 further includes: performing feature extraction on the image to be classified based on a convolutional neural network to obtain the high-dimensional image features of the image to be classified.
Preferably, step S103 includes: converting the two-dimensional output node features into a one-dimensional feature vector based on the last fully-connected layer of the improved residual graph convolution network model; and, based on the activation function

$$S_j=\frac{e^{a_j}}{\sum_{k=1}^{N} e^{a_k}}$$

determining the class probability of the image to be classified. Wherein S_j represents the class probability that the image to be classified belongs to the j-th class; a_j and a_k are the j-th and k-th elements of the one-dimensional feature vector, respectively; j = 1, 2, …, N, where N is a positive integer representing the number of image categories.
The embodiment of the application also provides a small sample data classification system based on a residual graph convolution network and self-attention, which comprises: a model improvement unit configured to improve the graph convolution neural network based on the residual network and the improved self-attention mechanism to obtain an improved residual graph convolution network model; a node updating unit configured to extract the high-dimensional image features of the image to be classified as the input node features of the improved residual graph convolution network model, and to update the input node features based on the improved residual graph convolution network model to obtain the output node features of the image to be classified; and a class probability unit configured to determine the class probability of the image to be classified from the output node features based on the activation function of the last fully-connected layer of the improved residual graph convolution network model.
Embodiments of the present application also provide a computer readable storage medium having stored thereon a computer program which, when executed, implements a small sample data classification method based on a residual graph convolution network and self-attention as described in any of the above.
The embodiment of the application also provides electronic equipment, which comprises: a memory having stored thereon a computer program for a small sample data classification method based on a residual graph convolution network and self-attention as described in any of the above; and the processor is used for calling the computer program stored in the memory and executing the computer program.
The beneficial effects are that:
The small sample data classification method based on a residual graph convolution network and self-attention provided by the embodiment of the application first improves the graph convolution neural network based on the residual network and an improved self-attention mechanism to obtain an improved residual graph convolution network model; then the extracted high-dimensional image features of the image to be classified serve as the input node features of the improved residual graph convolution network model, and the input node features are updated based on the improved residual graph convolution network model to obtain the output node features of the image to be classified; finally, the class probability of the image to be classified is determined from the output node features based on the activation function of the last fully-connected layer of the improved residual graph convolution network model. In this way, the residual network is introduced to improve the graph convolution network, so that the role of the graph convolution network in small sample data learning is exploited more effectively and the stability of the classification model is improved; through the improvement of the self-attention mechanism, information at different scales is processed more flexibly, the local and global features of the image are captured effectively, and the model is ensured to recognize accurately against complex backgrounds.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application. Wherein:
FIG. 1 is a flow chart of a small sample data classification method based on a residual graph convolution network and self-attention according to some embodiments of the present application;
FIG. 2 is a network architecture of an improved residual map convolutional network model provided in accordance with some embodiments of the present application;
FIG. 3 is a schematic diagram of a node update provided in accordance with some embodiments of the application;
FIG. 4 is a schematic diagram of an adjacency matrix update provided in accordance with some embodiments of the present application;
FIG. 5 is a network block diagram of an improved self-attention mechanism provided in accordance with some embodiments of the present application;
FIG. 6 is a schematic diagram of a small sample data classification system based on a residual graph convolution network and self-attention, according to some embodiments of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to some embodiments of the present application;
Fig. 8 is a hardware configuration diagram of an electronic device according to some embodiments of the present application.
Detailed Description
The application will be described in detail below with reference to the drawings in connection with embodiments. The examples are provided by way of explanation of the application, not limitation of the application. Indeed, it will be apparent to those skilled in the art that modifications and variations can be made in the present application without departing from its scope or spirit. For example, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. All other embodiments derived by a person skilled in the art from the embodiments of the present application shall fall within the protection scope of the embodiments of the present application.
At present, small sample data classification methods based on graph neural networks are generally divided into a feature extraction module and a graph network propagation module. In the feature extraction module, convolutional neural networks are used to extract image features, while in the graph network module, node features and edge features are updated alternately to obtain image label information. However, the stability of the graph convolution network during node updating is relatively low: it attends only to local features within a small range while paying insufficient attention to global information. Especially when the number of samples is small, the computed similarities are prone to bias, which degrades model accuracy and generalization ability.
Based on this, the application provides a small sample data classification method based on a residual graph convolution network and self-attention. Combining the residual idea, a certain proportion of the upper-layer adjacency matrix is introduced into the adjacency-matrix updating process of the graph convolution network so as to effectively enhance the stability of the graph convolution network during node updating. After the graph convolution operation, a self-attention mechanism adopting a multi-scale hierarchical design and a windowing strategy is introduced, which increases the information interaction between pixel blocks and better extracts the local and global information of the image, so that the model can recognize and classify more accurately.
As shown in fig. 1 to 5, the small sample data classification method based on the residual graph convolution network and the self-attention comprises the following steps:
And step S101, improving the graph convolution neural network based on the residual network and the improved self-attention mechanism to obtain an improved residual graph convolution network model.
In a conventional graph convolution network, node updates are calculated using the adjacency matrix, and a new adjacency matrix is then calculated from the updated node information to complete the information transfer between nodes. However, the node updating process may be affected by accidental factors, so that the calculation of the adjacency matrix deviates and the information propagation effect suffers. To solve this problem, the application introduces the idea of residual connection into the residual graph convolution network: during the calculation and updating of the adjacency matrix, the residual graph convolution network adds the upper-layer adjacency matrix with a certain weight so as to stabilize the node information transfer through the graph convolution operation.
Specifically, first, the residual graph convolution network is obtained by updating the adjacency matrix of the graph convolution network based on the residual network. Then, the residual graph convolution network is improved based on a self-attention mechanism with an added sliding window to obtain the improved residual graph convolution network model.
Combining the residual idea, a certain proportion of the upper-layer adjacency matrix is introduced into the adjacency-matrix updating process of the graph convolution network. Specifically, a shortcut connection is added between the input adjacency matrix and the output adjacency matrix of the graph convolution network, which effectively enhances the stability of the graph convolution network during node updating and reduces problems such as overfitting when the number of iterations of classification models with different layer counts increases.
In the small sample data residual graph convolution classification network, the feature extraction network splices the feature vectors extracted from the labeled sample information to obtain the nodes of the graph convolution network. Then, a complete graph is constructed using the Euclidean distances between node features as the weights of the edges, where the Euclidean distance between two node features is obtained from the vector difference between them. Specifically, the Euclidean distance between two node features is measured by the formula

$$A_{ij}=\sqrt{\sum_{m}\left(X_i^{(m)}-X_j^{(m)}\right)^{2}},\qquad X_i=\phi\!\left(x_i,\,h(l_i)\right),\quad X_j=\phi\!\left(x_j,\,h(l_j)\right)$$

Wherein x_i is the node feature of node i in the input image; l_i is a neighbor node of node i; φ is an aggregation function, so that φ(x_i, h(l_i)) aggregates (e.g., by weighted averaging) the feature vector x_i with the feature vectors h(l_i) of its neighbor nodes to give the aggregated feature vector X_i of node i; x_j, l_j, h(l_j), and X_j are defined analogously for node j; A_ij is the degree of association between node i and node j; X_i^(m) and X_j^(m) are the m-th elements of the feature vectors X_i and X_j; i, j, and m are positive integers.
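A minimal sketch of this edge-construction step is given below; the use of PyTorch, the function name build_adjacency, and the Gaussian kernel that turns distances into edge weights are illustrative assumptions rather than specifics of the application.

```python
import torch

def build_adjacency(node_feats: torch.Tensor) -> torch.Tensor:
    """Build a complete-graph adjacency from pairwise Euclidean distances.

    node_feats: (N, d) tensor holding one (aggregated) feature vector per node.
    Returns an (N, N) matrix of association degrees A_ij.
    """
    # Pairwise Euclidean distances: dist[i, j] = ||X_i - X_j||_2.
    dist = torch.cdist(node_feats, node_feats, p=2)
    # Map distances to edge weights in (0, 1] so that closer nodes receive
    # larger weights; the Gaussian kernel and its width are free choices here.
    sigma = dist.mean().clamp(min=1e-8)
    return torch.exp(-dist ** 2 / (2.0 * sigma ** 2))
```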
In a specific application scenario, the adjacency matrix of the graph convolution network is updated by the formula

$$\tilde{A}^{(k)} = A^{(k)} + a\,\tilde{A}^{(k-1)}$$

Wherein Ã^(k) represents the adjacency matrix of the current layer (i.e., the output adjacency matrix) and A^(k) is the adjacency matrix computed from the current layer's node features; Ã^(k−1) represents the adjacency matrix of the upper layer (i.e., the input adjacency matrix); and a is a weight parameter controlling the degree of influence of the upper-layer adjacency matrix.
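In code, the residual update reduces to a single weighted addition; the sketch below assumes PyTorch tensors and an illustrative weight a = 0.5, since no particular value of a is fixed here.

```python
import torch

def residual_adjacency_update(adj_new: torch.Tensor,
                              adj_prev: torch.Tensor,
                              a: float = 0.5) -> torch.Tensor:
    """Shortcut connection between the input and output adjacency matrices.

    adj_new:  adjacency A^(k) recomputed from the current layer's node features.
    adj_prev: adjacency matrix of the upper (previous) layer.
    a:        weight controlling the influence of the upper-layer matrix.
    """
    return adj_new + a * adj_prev
```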
In the graph convolution operation, the adjacency matrix Ã controls the information transfer between nodes after normalization by an introduced degree matrix. The k-th layer node feature matrix H^k (each row representing one node feature vector) is multiplied and weighted by the adjacency matrix Ã, and the feature representation H^{k+1} of the next layer is obtained through a pointwise nonlinear transformation and the action of a weight matrix W^k from the k-th layer to the (k+1)-th layer, where k is a positive integer. Specifically, the feature representation H^{k+1} is determined by the formula

$$H^{k+1}=\sigma\!\left(D^{-\frac{1}{2}}\,\tilde{A}\,D^{-\frac{1}{2}}\,H^{k}\,W^{k}\right)$$

Wherein D is the degree matrix introduced to normalize the adjacency matrix Ã; W^k is the weight matrix from the k-th layer to the (k+1)-th layer in the graph convolution network, used to convert the node features of the k-th layer into the node features of the (k+1)-th layer; and σ is the pointwise nonlinear activation. In this way, node information is effectively propagated and integrated in the graph structure through the graph convolution operation, so that richer feature representations are extracted. Here, the weight matrix W^k starts from random initialization and is then adjusted by the backpropagation algorithm during training to minimize the loss function.
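A minimal PyTorch sketch of one such normalized graph-convolution step follows; the class name ResGCNLayer and the choice of ReLU as the pointwise nonlinearity are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResGCNLayer(nn.Module):
    """One step H^{k+1} = sigma(D^{-1/2} A D^{-1/2} H^k W^k)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        # W^k: randomly initialized, then adjusted by backpropagation in training.
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Degree matrix D of the adjacency, used for symmetric normalization.
        deg = adj.sum(dim=-1)
        d_inv_sqrt = deg.clamp(min=1e-8).pow(-0.5)
        adj_norm = adj * d_inv_sqrt.unsqueeze(-1) * d_inv_sqrt.unsqueeze(0)
        # Propagate node features over the graph, transform them, and apply
        # the pointwise nonlinearity.
        return torch.relu(adj_norm @ self.weight(h))
```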
The residual graph convolution network performs a weighted addition of the upper-layer adjacency matrix and the current-layer adjacency matrix, and the weighted sum is used to update the nodes and generate a new adjacency matrix. By introducing the weight of the upper-layer adjacency matrix, the residual graph convolution network maintains a certain stability during node updating and reduces the influence of accidental factors. This residual connection mechanism helps to improve the robustness and stability of the graph convolution network, making the information transfer between nodes more reliable and accurate.
The graph convolution network updates the node features according to the adjacency matrix and the convolution operation; the residual graph convolution network is then improved based on the self-attention mechanism with the added sliding window to obtain the improved residual graph convolution network model, and the updated nodes are output to the sliding-window self-attention mechanism to continue the information transfer operation.
Step S102, extracting the high-dimensional image features of the image to be classified as the input node features of the improved residual graph convolution network model, and updating the input node features based on the improved residual graph convolution network model to obtain the output node features of the image to be classified.
In the application, feature extraction is performed on the image to be classified based on a convolutional neural network, so that the high-dimensional image features of the image to be classified are obtained and used as the input node features of the residual graph convolution network model. Here, feature extraction is performed with a neural network model of a deep residual network architecture. In this architecture, the image to be classified first passes through a convolution layer with a 3×3 kernel, stride 1, padding 1, and no bias term; the input image has size 3×256×256 and the output has size 16×256×256. It then passes through a batch normalization layer, which does not change the output size, followed by a ReLU activation function. Next comes the first residual block group, comprising 3 residual blocks with stride 1; the input size is 16×256×256 and the output size is also 16×256×256. Then comes the second residual block group, comprising 3 residual blocks with stride 4, whose output size is 32×64×64. Finally, a third residual block group, comprising 3 residual blocks with stride 4, produces the high-dimensional image features of the image to be classified with output size 64×16×16.
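The PyTorch sketch below reproduces the stated layer sizes (3×256×256 → 16×256×256 → 32×64×64 → 64×16×16); the internal structure of each residual block, two 3×3 convolutions with a 1×1 strided shortcut, is an illustrative assumption, since only the group sizes and strides are specified above.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block; a 1x1 strided shortcut matches shapes when needed."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(out_ch))
        self.shortcut = (nn.Identity() if stride == 1 and in_ch == out_ch else
                         nn.Sequential(
                             nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                             nn.BatchNorm2d(out_ch)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.body(x) + self.shortcut(x))

def residual_group(in_ch: int, out_ch: int, stride: int, n: int = 3) -> nn.Sequential:
    # The first block changes the shape; the remaining blocks preserve it.
    blocks = [ResidualBlock(in_ch, out_ch, stride)]
    blocks += [ResidualBlock(out_ch, out_ch, 1) for _ in range(n - 1)]
    return nn.Sequential(*blocks)

backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=1, padding=1, bias=False),  # 3x256x256 -> 16x256x256
    nn.BatchNorm2d(16), nn.ReLU(inplace=True),
    residual_group(16, 16, stride=1),  # -> 16x256x256
    residual_group(16, 32, stride=4),  # -> 32x64x64
    residual_group(32, 64, stride=4),  # -> 64x16x16
)

print(backbone(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 64, 16, 16])
```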
Based on the improved residual graph convolution network model, a graph convolution operation is performed on the input node features to update them and obtain the first image features; the first image features are then output to the improved self-attention mechanism, which replaces the convolutional neural network, for the information transfer operation, obtaining the output node features of the image to be classified.
Specifically, the information transfer operation is performed in a Swin Transformer, to which the first image features are output as the improved self-attention mechanism replacing the convolutional neural network. First, the first image features are divided into non-overlapping image blocks. Then, each non-overlapping image block is linearly transformed through a linear embedding layer and mapped to a high-dimensional feature space to obtain the second image features. Next, based on the window self-attention calculation module, the relevance between the non-overlapping image blocks in the second image features mapped to the high-dimensional feature space is captured to obtain the third image features. That is, within each non-overlapping image block of the second image features, window-based self-attention calculation is introduced, and the relevance of different locations in the image is captured by the self-attention mechanism.
Then, a downsampling operation is performed on the third image features, reducing their width and height and increasing their number of channels, to obtain the output node features of the image to be classified. Through repeated downsampling operations, the deep features of the image to be classified are extracted step by step; global and local information of the image can be captured at different levels, realizing feature extraction of the image and progressive refinement of its deep features.
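A compact PyTorch sketch of the two operations just described, window-restricted self-attention followed by a downsampling (patch-merging) step, is given below; the window size, the number of heads, and the exact merging layout are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WindowSelfAttention(nn.Module):
    """Self-attention restricted to non-overlapping windows."""

    def __init__(self, dim: int, window: int = 8, heads: int = 4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C), with H and W divisible by the window size.
        B, H, W, C = x.shape
        w = self.window
        # Partition the feature map into (B * num_windows, w*w, C) sequences.
        x = x.view(B, H // w, w, W // w, w, C).permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(-1, w * w, C)
        x, _ = self.attn(x, x, x)  # attention is computed inside each window only
        # Reverse the window partition back to (B, H, W, C).
        x = x.view(B, H // w, W // w, w, w, C).permute(0, 1, 3, 2, 4, 5)
        return x.reshape(B, H, W, C)

class PatchMerging(nn.Module):
    """Downsampling: halve the width and height, increase the channel count."""

    def __init__(self, dim: int):
        super().__init__()
        self.reduce = nn.Linear(4 * dim, 2 * dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, H, W, C = x.shape
        # Fold each 2x2 spatial neighbourhood into the channel dimension,
        # then project 4C channels down to 2C.
        x = x.view(B, H // 2, 2, W // 2, 2, C).permute(0, 1, 3, 2, 4, 5)
        return self.reduce(x.reshape(B, H // 2, W // 2, 4 * C))
```

Applied repeatedly, each patch-merging step halves the spatial resolution and increases the channel count, which matches the multi-scale hierarchical design described above.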
In the present application, the first image features are partitioned into fixed-size tiles (the non-overlapping image blocks); each tile is treated as one element of a position-encoded input sequence, and the vector representation (embedding) of each tile can interact with the other tiles in the sequence through the self-attention mechanism. In the self-attention mechanism, the vector representation of each tile (the vector obtained by extracting and encoding the pixel information within the tile) is dynamically adjusted according to the similarity between the query information of that tile and the key information of the other tiles. Specifically, the input image is first divided into a plurality of tiles; then, the pixel values or feature representations of the image are mapped to a lower-dimensional feature space by linear embedding to reduce the computational complexity; position encoding is then added; finally, the self-attention between the vector representations is calculated.
Here, a query is a vector representation used to measure the relationship between each tile and the other tiles. In the self-attention mechanism, for each tile, its original vector representation (the input feature vector) is multiplied by a query matrix to obtain the query information of that tile, from which the correlations between that tile and the other tiles are calculated. A key provides additional information about each tile so that more context is introduced when computing the attention weights; for each tile, its original representation (the input feature representation) is multiplied by a key matrix to obtain the key vector of that tile.
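The query/key computation described here amounts to two learned projections followed by a scaled dot product; the sketch below (the function name and the 1/sqrt(d_k) scaling are illustrative assumptions) shows how the attention weights between tiles would be obtained.

```python
import torch

def attention_weights(x: torch.Tensor,
                      w_q: torch.Tensor,
                      w_k: torch.Tensor) -> torch.Tensor:
    """Similarity of each tile to every other tile.

    x:         (n_tiles, d) original tile embeddings.
    w_q, w_k:  (d, d_k) query and key projection matrices.
    """
    q = x @ w_q                               # query vector of each tile
    k = x @ w_k                               # key vector of each tile
    scores = q @ k.T / (k.shape[-1] ** 0.5)   # scaled dot-product similarity
    return torch.softmax(scores, dim=-1)      # each row sums to 1
```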
Step S103, determining the class probability of the image to be classified from the output node features based on the activation function of the last fully-connected layer of the improved residual graph convolution network model.
In the application, the output of the classification prediction is normalized through an activation function. The two-dimensional output node features are first converted into a one-dimensional feature vector through the last fully-connected layer of the improved residual graph convolution network model, and the output node features are then converted into probabilities by the activation function. Specifically, the formula

$$S_j=\frac{e^{a_j}}{\sum_{k=1}^{N} e^{a_k}}$$

determines the class probability of the image to be classified. Wherein S_j represents the class probability that the image to be classified belongs to the j-th class; a_j and a_k are the j-th and k-th elements of the one-dimensional feature vector, respectively; j = 1, 2, …, N, where N is a positive integer representing the number of image categories. The N class probabilities computed for the image to be classified are then sorted by magnitude, and the image to be classified is assigned to the class corresponding to the maximum class probability, thereby realizing the classification of images.
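As a usage sketch of this classification head, assuming illustratively N = 5 classes and the 64×16×16 output node features produced by the feature extractor described earlier:

```python
import torch
import torch.nn as nn

n_classes = 5                             # N (hypothetical class count)
fc = nn.Linear(64 * 16 * 16, n_classes)   # last fully-connected layer

node_feats = torch.randn(1, 64, 16, 16)   # two-dimensional output node features
logits = fc(node_feats.flatten(1))        # one-dimensional feature vector a_1..a_N
probs = torch.softmax(logits, dim=-1)     # S_j = exp(a_j) / sum_k exp(a_k)
pred = probs.argmax(dim=-1)               # class with the maximum class probability
```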
According to the application, the residual network is introduced to improve the graph convolution network, so that the role of the graph convolution network in small sample data learning is exploited more effectively and the stability of the classification model is improved; through the improvement of the self-attention mechanism, information at different scales is processed more flexibly, the local and global features of the image are captured effectively, and the model is ensured to recognize accurately against complex backgrounds.
The embodiment of the application also provides a small sample data classification system based on a residual graph convolution network and self-attention. As shown in fig. 6, the small sample data classification system based on the residual graph convolution network and self-attention comprises:
A model improvement unit configured to improve the graph convolution neural network based on the residual network and the improved self-attention mechanism, resulting in an improved residual graph convolution network model;
A node updating unit configured to extract the high-dimensional image features of the image to be classified as the input node features of the improved residual graph convolution network model, and to update the input node features based on the improved residual graph convolution network model to obtain the output node features of the image to be classified;
And a class probability unit configured to determine the class probability of the image to be classified from the output node features based on the activation function of the last fully-connected layer of the improved residual graph convolution network model.
The small sample data classification system based on the residual graph convolution network and self-attention provided by the embodiment of the application can realize the steps and flow of the small sample data classification method based on the residual graph convolution network and self-attention described in any of the above embodiments, and achieves the same technical effects, which are not repeated here.
Fig. 7 is a schematic structural diagram of an electronic device according to some embodiments of the present application; as shown in fig. 7, the electronic device includes:
One or more processors 701;
A computer-readable medium configured to store one or more programs 702; when the one or more processors 701 execute the one or more programs 702, the following steps are implemented: improving the graph convolution neural network based on a residual network and an improved self-attention mechanism to obtain an improved residual graph convolution network model; extracting the high-dimensional image features of the image to be classified as the input node features of the improved residual graph convolution network model, and updating the input node features based on the improved residual graph convolution network model to obtain the output node features of the image to be classified; and determining the class probability of the image to be classified from the output node features based on the activation function of the last fully-connected layer of the improved residual graph convolution network model.
Fig. 8 is a hardware structure of an electronic device provided according to some embodiments of the application; as shown in fig. 8, the hardware structure of the electronic device may include: a processor 801, a communication interface 802, a computer readable medium 803, and a communication bus 804.
Wherein the processor 801, the communication interface 802, and the computer-readable storage medium 803 communicate with each other via a communication bus 804.
Alternatively, the communication interface 802 may be an interface of a communication module, such as an interface of a GSM module.
The processor 801 may be specifically configured to: improve the graph convolution neural network based on a residual network and an improved self-attention mechanism to obtain an improved residual graph convolution network model; extract the high-dimensional image features of the image to be classified as the input node features of the improved residual graph convolution network model, and update the input node features based on the improved residual graph convolution network model to obtain the output node features of the image to be classified; and determine the class probability of the image to be classified from the output node features based on the activation function of the last fully-connected layer of the improved residual graph convolution network model.
The processor 801 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc., or a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, by which the methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The electronic device of the embodiments of the present application exists in a variety of forms including, but not limited to:
(1) A mobile communication device: such devices are characterized by mobile communication capabilities and are primarily aimed at providing voice, data communications. Such terminals include: smart phones (e.g., iPhone), multimedia phones, functional phones, and low-end phones, etc.
(2) Ultra mobile personal computer device: such devices are in the category of personal computers, having computing and processing functions, and generally also having mobile internet access characteristics. Such terminals include: PDA, MID, and UMPC devices, etc., such as iPad.
(3) Portable entertainment device: such devices may display and play multimedia content. The device comprises: audio, video players (e.g., iPod), palm game consoles, electronic books, and smart toys and portable car navigation devices.
(4) And (3) a server: the configuration of the server includes a processor, a hard disk, a memory, a system bus, and the like, and the server is similar to a general computer architecture, but is required to provide highly reliable services, and thus has high requirements in terms of processing capacity, stability, reliability, security, scalability, manageability, and the like.
(5) Other electronic devices with data interaction function.
It should be noted that, according to implementation requirements, each component/step described in the embodiments of the present application may be split into more components/steps, and two or more components/steps or part of operations of the components/steps may be combined into new components/steps, so as to achieve the purposes of the embodiments of the present application.
The above-described methods according to embodiments of the present application may be implemented in hardware or firmware, or as software or computer code storable in a recording medium such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk, or as computer code originally stored in a remote recording medium or a non-transitory machine-readable storage medium and downloaded through a network to be stored in a local recording medium, so that the methods described herein may be processed by such software on a recording medium using a general-purpose computer, a special-purpose processor, or programmable or dedicated hardware such as an ASIC or FPGA. It will be appreciated that a computer, processor, microprocessor controller, or programmable hardware includes a memory component (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the small sample data classification method based on a residual graph convolution network and self-attention described herein. Furthermore, when a general-purpose computer accesses code for implementing the methods illustrated herein, execution of the code converts the general-purpose computer into a special-purpose computer for performing the methods illustrated herein.
Those of ordinary skill in the art will appreciate that the elements and method steps of the examples described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
It should be noted that, in the present specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment is mainly described in a different point from other embodiments. In particular, for the apparatus and system embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, with reference to the description of the method embodiments in part.
The above-described apparatus and system embodiments are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without undue effort.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A small sample data classification method based on a residual graph convolution network and self-attention, comprising:
step S101, based on a residual network and an improved self-attention mechanism, improving a graph convolution neural network to obtain an improved residual graph convolution network model;
step S102, extracting the high-dimensional image features of an image to be classified as the input node features of the improved residual graph convolution network model, and updating the input node features based on the improved residual graph convolution network model to obtain the output node features of the image to be classified;
And step S103, determining the class probability of the image to be classified from the output node features based on an activation function of the last fully-connected layer of the improved residual graph convolution network model.
2. The method for classifying small sample data based on residual graph convolution network and self-attention as claimed in claim 1, wherein in step S101,
based on the residual network, updating an adjacency matrix of the graph convolution network to obtain a residual graph convolution network;
and improving the residual map convolution network based on a self-attention mechanism added into a sliding window to obtain the improved residual map convolution network model.
3. The small sample data classification method based on a residual graph convolution network and self-attention according to claim 2, wherein the updating of the adjacency matrix of the graph convolution network is specifically: adding a shortcut connection between the input adjacency matrix and the output adjacency matrix of the graph convolution network.
4. The small sample data classification method based on residual graph convolution network and self-attention as claimed in claim 1, wherein step S102 comprises:
performing a graph convolution operation on the input node features based on the improved residual graph convolution network model so as to update the input node features to obtain first image features;
And outputting the first image features to the improved self-attention mechanism, which replaces the convolutional neural network, for an information transfer operation, obtaining the output node features of the image to be classified.
5. The small sample data classification method based on a residual graph convolution network and self-attention according to claim 4, wherein outputting the first image features to the improved self-attention mechanism replacing the convolutional neural network for the information transfer operation is specifically:
Image division is carried out on the first image features to obtain non-overlapping image blocks;
linearly transforming each non-overlapping image block through a linear embedding layer and mapping it to a high-dimensional feature space to obtain second image features;
capturing the relevance between the non-overlapping image blocks in the second image features mapped to the high-dimensional feature space based on a window self-attention calculation module to obtain third image features;
And performing downsampling operation on the third image feature, reducing the width and the height of the third image feature, and increasing the number of channels of the third image feature to obtain the output node feature.
6. The small sample data classification method based on residual graph convolution network and self-attention of claim 1, wherein step S102 further comprises:
And carrying out feature extraction on the image to be classified based on a convolutional neural network to obtain the image high-dimensional features of the image to be classified.
7. The small sample data classification method based on residual graph convolution network and self-attention as claimed in claim 1, wherein step S103 comprises:
converting the two-dimensional output node features into a one-dimensional feature vector based on the last fully-connected layer of the improved residual graph convolution network model; and
based on the activation function

$$S_j=\frac{e^{a_j}}{\sum_{k=1}^{N} e^{a_k}}$$

determining the class probability of the image to be classified; wherein S_j represents the class probability that the image to be classified belongs to the j-th class; a_j and a_k are the j-th and k-th elements of the one-dimensional feature vector, respectively; j = 1, 2, …, N, where N is a positive integer representing the number of image categories.
8. A small sample data classification system based on a residual graph convolution network and self-attention, comprising:
A model improvement unit configured to improve the graph convolution neural network based on the residual network and the improved self-attention mechanism, resulting in an improved residual graph convolution network model;
a node updating unit configured to extract the high-dimensional image features of the image to be classified as the input node features of the improved residual graph convolution network model, and to update the input node features based on the improved residual graph convolution network model to obtain the output node features of the image to be classified;
and a class probability unit configured to determine the class probability of the image to be classified from the output node features based on the activation function of the last fully-connected layer of the improved residual graph convolution network model.
9. A computer readable storage medium having stored thereon a computer program, wherein the computer program when executed implements the residual map convolution network and self-attention based small sample data classification method according to any one of claims 1-7.
10. An electronic device, comprising:
a memory having stored thereon a computer program for a small sample data classification method based on a residual graph convolution network and self-attention as claimed in any one of claims 1-7;
And the processor is used for calling the computer program stored in the memory and executing the computer program.
CN202410309314.3A 2024-03-19 2024-03-19 Small sample data classification method and system based on residual graph convolution network and self-attention Pending CN118097293A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410309314.3A CN118097293A (en) 2024-03-19 2024-03-19 Small sample data classification method and system based on residual graph convolution network and self-attention

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410309314.3A CN118097293A (en) 2024-03-19 2024-03-19 Small sample data classification method and system based on residual graph convolution network and self-attention

Publications (1)

Publication Number Publication Date
CN118097293A true CN118097293A (en) 2024-05-28

Family

ID=91163519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410309314.3A Pending CN118097293A (en) 2024-03-19 2024-03-19 Small sample data classification method and system based on residual graph convolution network and self-attention

Country Status (1)

Country Link
CN (1) CN118097293A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118365974A (en) * 2024-06-20 2024-07-19 山东省水利科学研究院 Water quality class detection method, system and equipment based on hybrid neural network



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination