CN116012353A - Digital pathological tissue image recognition method based on graph convolution neural network - Google Patents

Digital pathological tissue image recognition method based on graph convolution neural network

Info

Publication number
CN116012353A
CN116012353A (application CN202310073714.4A)
Authority
CN
China
Prior art keywords
neural network
image
graph
digital pathological
pathological
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310073714.4A
Other languages
Chinese (zh)
Inventor
何国田 (He Guotian)
马腾云 (Ma Tengyun)
林远长 (Lin Yuanchang)
陈琳 (Chen Lin)
廖俊 (Liao Jun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Institute of Green and Intelligent Technology of CAS
Original Assignee
Chongqing Institute of Green and Intelligent Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Institute of Green and Intelligent Technology of CAS filed Critical Chongqing Institute of Green and Intelligent Technology of CAS
Priority to CN202310073714.4A priority Critical patent/CN116012353A/en
Publication of CN116012353A publication Critical patent/CN116012353A/en
Legal status: Pending (current)

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a digital pathological tissue image recognition method based on a graph convolutional neural network, and belongs to the field of medical image processing. The method comprises the following steps: S1: obtaining pathological tissue slice images and preprocessing them; S2: constructing a digital pathological tissue image recognition network model comprising a convolutional neural network module, a graph convolutional neural network module and a feature fusion module, and training the model with the preprocessed digital pathological images; S3: identifying the category attributes of the pathological tissues in digital pathological images with the trained recognition model. Expanding the training set effectively avoids overfitting during convolutional network training and yields better robustness and generalization performance. Fusing the image features with the label-dependency features increases the features the deep learning model can learn, thereby improving its ability to classify and recognize digital pathological images.

Description

Digital pathological tissue image recognition method based on graph convolution neural network
Technical Field
The invention belongs to the field of medical image processing, and relates to a digital pathological tissue image recognition method based on a graph convolution neural network.
Background
With the continued development of computer vision technology, computer-aided diagnosis is increasingly combined with AI. Computational pathology plays an important role in assisting pathologists with digital pathological diagnosis, greatly improving diagnostic accuracy and speed. Meanwhile, quantitative pathological analysis is extremely important for diagnosing tumors such as breast cancer and lung cancer. Pathological images are the basis of pathological analysis, and most morphological attributes of tissue pathology are obtained by inference and analysis from them. Accurate analysis of pathological images in digital pathological image analysis and quantification work can greatly improve the reference value of auxiliary diagnosis.
Specifically, the task of digital pathology image recognition is mainly to use the visual characteristics of the images to identify, with high precision, multiple tissue categories, the organs they belong to, and even disease states. However, because tissue primitives are densely distributed and tissues adhere to and overlap one another in digital pathological images, accurate recognition generally depends on deep learning rather than on modeling and prediction with manually constructed image features alone. By means of a convolutional neural network, a deep learning model can extract deep features of the digital pathological image, train a regression from image features to predicted values, and improve recognition accuracy and robustness.
Under a deep learning model, however, fully exploiting the characteristics of digital pathological images is crucial to improving recognition. Pathological tissues often appear together in digital pathological images, for example connective tissue and epithelial tissue, so different classes are highly correlated. Pathological tissue also has hierarchical classifications: subcategories of epithelial tissue include simple epithelium, stratified epithelium, and so on, and the co-occurrence of the various subcategories is likewise a relevant feature for classification. Tissue class correlation and dependency are therefore clearly characteristics of digital pathological images, but an ordinary convolutional neural network cannot model this relational information; it can only model image features, which makes it difficult to further improve the classifier. Modeling the correlation between tissue classes and efficiently extracting the features inherent in the labels is therefore a key issue.
Disclosure of Invention
In view of the above, the present invention aims to provide a digital pathological tissue image recognition method based on a graph convolutional neural network. A graph convolutional neural network module performs high-dimensional modeling of the label-dependency information of digital pathological images, while a convolutional neural network extracts digital pathological image features; a feature fusion module fuses the label modeling features with the image features and guides image feature classification through the label-dependency information. This markedly improves the classification precision of digital pathological image recognition, solves the problem that a convolutional neural network cannot extract graph relationships, realizes graph-relationship modeling, and performs regression training jointly with the convolutional neural network.
In order to achieve the above purpose, the present invention provides the following technical solutions:
a digital pathological tissue image recognition method based on a graph convolution neural network comprises the following steps:
s1: obtaining pathological tissue slice images and preprocessing;
s2: constructing a digital pathological tissue image recognition network model comprising a convolutional neural network module, a graph convolutional neural network module and a feature fusion module, and carrying out model training by adopting the preprocessed digital pathological image;
s3: and correspondingly identifying category attributes of pathological tissues in the digital pathological images by using the trained digital pathological image identification model.
Optionally, the step S1 specifically includes the following steps:
S11: collecting pathological sections, enhancing pathological tissue contrast with a common pathological staining method, and scanning the glass slides with a digital microscope to obtain whole-slide digital pathological image data;
S12: cutting the whole-slide digital pathological image data into smaller images of a size suitable for computer processing, and manually annotating the pathological tissue types contained in each image;
S13: preprocessing the annotated digital pathological image data by normalization, scaling, padding, random cropping, and horizontal or vertical flipping.
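For illustration, the S13 preprocessing chain can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions (a 224-pixel patch size, zero-padding, and [0, 1] normalization are example choices of mine; the rescaling step is elided for brevity), not the patent's implementation:

```python
import numpy as np

def preprocess(img: np.ndarray, out_size: int = 224) -> np.ndarray:
    """Normalize, pad, randomly crop and randomly flip one annotated patch.
    `img` is an H x W x 3 uint8 image cut from the whole-slide scan."""
    x = img.astype(np.float32) / 255.0                        # normalization to [0, 1]
    h, w, _ = x.shape
    canvas = max(h, w, out_size)
    padded = np.zeros((canvas, canvas, 3), dtype=np.float32)  # zero padding
    padded[:h, :w] = x
    top = np.random.randint(0, canvas - out_size + 1)         # random crop
    left = np.random.randint(0, canvas - out_size + 1)
    x = padded[top:top + out_size, left:left + out_size]
    if np.random.rand() < 0.5:                                # horizontal flip
        x = x[:, ::-1]
    if np.random.rand() < 0.5:                                # vertical flip
        x = x[::-1, :]
    return x
```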
Optionally, in the step S2, model training with the preprocessed digital pathology image data specifically includes the following steps:
S21: extracting image features from the preprocessed digital pathological image data with a convolutional neural network;
S22: modeling, with a graph convolutional neural network, the dependence and co-occurrence relations among the pathological tissue categories counted from the pathological image data;
S23: fusing, with a feature fusion module, the image features extracted by the convolutional neural network and the label information extracted by the graph convolutional neural network, and predicting classification scores;
S24: training the fused output into classification prediction scores through a loss function.
Optionally, the step S21 specifically includes the following steps:
S211: constructing a convolutional neural network to extract tensor image features $F \in \mathbb{R}^{H \times W \times C}$ from the preprocessed digital pathological image data, where $\mathbb{R}$ denotes real space, H and W are the height and width of the feature map, and C is the number of feature-map channels;
S212: globally pooling the extracted image features through a convolution layer to obtain the converted feature vector $f \in \mathbb{R}^{C}$.
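As a concrete reading of S211–S212, the sketch below obtains the H × W × C feature map F with a backbone truncated before its classifier head and then pools it into the vector f. The choice of a ResNet18 backbone, of PyTorch, and of max pooling is an assumption for illustration only:

```python
import torch
import torchvision

# backbone without the average-pool and fully-connected head
backbone = torchvision.models.resnet18(weights=None)
features = torch.nn.Sequential(*list(backbone.children())[:-2])

imgs = torch.randn(8, 3, 224, 224)     # a batch of preprocessed patches
F_map = features(imgs)                 # feature maps F: (8, C=512, H, W)
f = torch.amax(F_map, dim=(2, 3))      # global pooling -> f: (8, 512)
```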
Optionally, the step S22 specifically includes the following steps:
S221: for the training-set labels of the digital pathology images, counting the number of times label categories co-occur, represented as a matrix $M \in \mathbb{R}^{K \times K}$, where K is the total number of pathological tissue categories and each element $M_{ij}$ denotes the number of times category $L_j$ appears simultaneously with category $L_i$ in a sample image;
S222: constructing a conditional probability matrix from the co-occurrence matrix M as $P_{ij} = P(L_j \mid L_i) = M_{ij} / N_i$, where $N_i$ is the total number of occurrences of label category $L_i$ in the training set, so that $P_{ij}$ is the conditional probability that $L_j$ appears in a sample image given that label $L_i$ is present;
S223: binarizing the conditional probabilities $P_{ij}$ with the hyper-parameter threshold $\tau$ to filter noise in the label statistics and obtain a co-occurrence matrix closer to the real distribution:

$A_{ij} = \begin{cases} 1, & P_{ij} \geq \tau \\ 0, & P_{ij} < \tau \end{cases}$

where $A_{ij}$ is the binarized co-occurrence matrix;
S224: weighting the binarized co-occurrence matrix $A_{ij}$ according to the degree p to which surrounding nodes are considered, to counter the over-smoothing during training that would leave the label co-occurrence information insufficiently distinct:

$A'_{ij} = \begin{cases} \dfrac{p}{\sum_{k=1, k \neq i}^{K} A_{ik}} \, A_{ij}, & i \neq j \\ 1 - p, & i = j \end{cases}$

where p is a hyper-parameter: when p is 1 the graph node features consider only the surrounding nodes, and when p is 0 only the node's own features; $A'_{ij}$ is the weighted co-occurrence matrix;
S225: with the weighted co-occurrence matrix $A'$ as the adjacency matrix, performing graph convolution calculations to form each layer of the graph convolutional neural network:

$H^{l+1} = \sigma(\hat{A} H^{l} W^{l})$

where $H^{l} \in \mathbb{R}^{K \times D_l}$ is the input feature of the graph convolutional neural network, or the output of the previous graph convolution layer, and $\sigma$ denotes any activation function; $W^{l} \in \mathbb{R}^{D_l \times D_{l+1}}$ is the learnable weight matrix of each network layer; $D_l$ is the dimension used for the weight matrix, typically 1024, and $D_{l+1}$ is the desired dimension of the next layer; in the last network layer, $D_{l+1}$ equals the number of channels C of the feature map; $\hat{A}$ is the weighted co-occurrence matrix $A'$ regularized for graph convolution:

$\hat{A} = \tilde{D}^{-1/2} (A' + I) \tilde{D}^{-1/2}$

where I is the identity matrix and $\tilde{D}$ is the degree matrix:

$\tilde{D}_{ii} = \sum_{j} (A' + I)_{ij}$

This yields the computation performed by the graph convolutional neural network. The network takes the one-hot encoding of the pathological tissue labels $Z \in \mathbb{R}^{K \times K}$ and the co-occurrence matrix $A'$ as input; $G_{output} \in \mathbb{R}^{K \times C}$, the output of the graph convolutional neural network module, has an output-layer dimension corresponding to the feature dimension in the convolutional neural network.
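Steps S221–S225 reduce to a few lines of matrix code. The sketch below builds the weighted adjacency A' from multi-hot training labels, normalizes it as in S225, and stacks two graph convolution layers; the example values τ = 0.4 and p = 0.2, the two-layer depth, and LeakyReLU as the activation σ are assumptions of this sketch, not values fixed by the patent:

```python
import torch

def build_adjacency(Y: torch.Tensor, tau: float = 0.4, p: float = 0.2) -> torch.Tensor:
    """S221-S224. Y: (num_samples, K) multi-hot training-label matrix."""
    M = Y.t() @ Y                                  # M[i, j]: co-occurrence counts
    N = torch.diagonal(M).clamp(min=1)             # N[i]: occurrences of label i
    P = M / N.unsqueeze(1)                         # P[i, j] = P(L_j | L_i)
    A = (P >= tau).float()                         # binarize with threshold tau
    A.fill_diagonal_(0.0)
    row = A.sum(dim=1, keepdim=True).clamp(min=1)
    A_prime = p * A / row                          # spread weight p over neighbours
    A_prime.fill_diagonal_(1.0 - p)                # keep weight 1 - p on the node itself
    return A_prime

def normalize_adjacency(A_prime: torch.Tensor) -> torch.Tensor:
    """A_hat = D^-1/2 (A' + I) D^-1/2, the regularization in S225."""
    A_tilde = A_prime + torch.eye(A_prime.size(0))
    d_inv_sqrt = A_tilde.sum(dim=1).pow(-0.5)      # degree matrix to the power -1/2
    return d_inv_sqrt.unsqueeze(1) * A_tilde * d_inv_sqrt.unsqueeze(0)

class LabelGCN(torch.nn.Module):
    """Two graph-convolution layers mapping the one-hot label input Z (K x K)
    to label features G_output of shape (K, C)."""
    def __init__(self, K: int, C: int, hidden: int = 1024):
        super().__init__()
        self.W1 = torch.nn.Parameter(torch.empty(K, hidden))
        self.W2 = torch.nn.Parameter(torch.empty(hidden, C))
        torch.nn.init.xavier_uniform_(self.W1)
        torch.nn.init.xavier_uniform_(self.W2)

    def forward(self, Z: torch.Tensor, A_hat: torch.Tensor) -> torch.Tensor:
        H1 = torch.nn.functional.leaky_relu(A_hat @ Z @ self.W1)  # H^1 = sigma(A_hat Z W^0)
        return A_hat @ H1 @ self.W2                               # G_output: (K, C)
```

With Z = torch.eye(K) as the one-hot input, LabelGCN(K, C)(Z, normalize_adjacency(build_adjacency(Y))) yields the (K, C) label features used in the fusion step.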
Optionally, the step S23 specifically includes:
multiplying the image features $f \in \mathbb{R}^{C}$ extracted by the convolutional neural network module with the label features $G_{output} \in \mathbb{R}^{K \times C}$ extracted by the graph convolutional neural network module to obtain the classification prediction score values, which are normalized with the Sigmoid function:

$\hat{y} = \sigma(G_{output} \, f)$

where $\sigma(\cdot)$ denotes the Sigmoid function.
Optionally, the step S24 specifically includes:
training the feature-fused output into classification prediction scores through a loss function, which is a weighted binary cross-entropy loss, expressed as:

$\mathcal{L} = -\sum_{i=1}^{K} w_i \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right]$

where $y_i \in \{0, 1\}$ indicates whether label $y_i$ is present in the sample image, the label y denoting the ground truth and $\hat{y}_i$ the prediction; $w_i = N / n_i$ is the weight of each class, where N is the total number of training samples and $n_i$ is the number of occurrences of the particular class in the training set.
The invention has the beneficial effects that:
step S1 expands the training set, so that the fitting of the convolutional network training is effectively avoided, the training set is further expanded, and better robustness and generalization performance are realized.
Step S2 fuses the image features with the label-dependency features, increasing the features the deep learning model can learn and raising the degree to which training-data information is used, thereby improving the deep learning model's ability to classify and recognize digital pathological images.
Step S21 extracts the features of the digital pathological image; using a convolutional neural network ensures the model's robustness to pathological tissue images and provides the basis for feature fusion and classifier training.
Step S22 models the label dependency information with a graph convolutional neural network: the co-occurrence matrix is modeled as a graph structure, and propagation through the graph convolution network yields high-dimensional modeling information from which the label features are extracted. This solves the problem that label-pair information cannot otherwise be processed in deep learning.
Step S23 combines the image features and the label features by matrix multiplication, establishing the association between the two kinds of features; different label information is thereby effectively fused, and the data information can be fully exploited to compute the prediction category scores.
Step S24 trains the prediction scores with a weighted binary cross-entropy loss, which prevents the learning model from focusing too heavily on common categories when the occurrence frequencies of different categories in digital pathological images differ greatly.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objects and other advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out in the specification.
Drawings
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in the following preferred detail with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of the present invention;
fig. 2 is a schematic diagram of a digital pathological tissue image recognition network model in an embodiment of the invention.
Detailed Description
Other advantages and effects of the present invention will become apparent to those skilled in the art from the disclosure below, which describes the embodiments of the present invention with reference to specific examples. The invention may also be practiced or applied through other, different embodiments, and the details of this description may be modified or varied in various ways without departing from the spirit of the present invention. It should be noted that the illustrations provided in the following embodiments merely illustrate the basic idea of the invention schematically, and the following embodiments and the features in the embodiments may be combined with one another in the absence of conflict.
The drawings are for illustrative purposes only, are schematic rather than physical views, and are not to be construed as limiting the invention; to better illustrate the embodiments, certain elements of the drawings may be omitted, enlarged or reduced and do not represent the size of an actual product; and it will be appreciated by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numbers in the drawings of the embodiments correspond to the same or similar components. In the description of the present invention, it should be understood that terms such as "upper", "lower", "left", "right", "front" and "rear" indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience of description and simplification and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation. Such terms describing positional relationships are therefore merely illustrative, should not be construed as limiting the invention, and their specific meaning can be understood by those of ordinary skill in the art according to the specific circumstances.
A medical digital pathological tissue image recognition method based on a graph convolution neural network comprises the following steps:
s1, acquiring pathological tissue slice images and preprocessing;
s2, constructing a digital pathological tissue image recognition network model comprising a convolutional neural network module, a graph convolutional neural network module and a feature fusion module, and performing model training by adopting the preprocessed digital pathological image;
s3, correspondingly identifying category attributes of pathological tissues in the digital pathological image by using the trained digital pathological image identification model.
Further, the step S1 specifically includes the following sub-steps:
S11, collecting pathological sections, enhancing pathological tissue contrast with a common pathological staining method, and scanning the glass slides with a digital microscope to obtain whole-slide digital pathological image data;
S12, cutting the whole-slide digital pathological image data into smaller images of a size suitable for computer processing, and manually annotating the pathological tissue types contained in each image;
S13, preprocessing the annotated digital pathological image data by normalization, scaling, padding, random cropping, and horizontal or vertical flipping.
Further, performing model training in step S2 with the preprocessed digital pathology image data specifically includes the following sub-steps:
S21, extracting image features from the preprocessed digital pathological image data with a convolutional neural network;
S22, modeling, with a graph convolutional neural network, the dependence and co-occurrence relations among the pathological tissue categories counted from the pathological image data;
S23, fusing, with a feature fusion module, the image features extracted by the convolutional neural network and the label information extracted by the graph convolutional neural network, and predicting classification scores;
S24, training the fused output into classification prediction scores through a loss function.
Further, the step S21 specifically includes the following sub-steps:
S211, constructing a convolutional neural network to extract tensor image features $F \in \mathbb{R}^{H \times W \times C}$ from the preprocessed digital pathological image data, where $\mathbb{R}$ denotes real space, H and W are the height and width of the feature map, and C is the number of feature-map channels;
S212, globally pooling the extracted image features through a convolution layer to obtain the converted feature vector $f \in \mathbb{R}^{C}$.
Further, the step S22 specifically includes the following sub-steps:
S221, for the training-set labels of the digital pathology images, counting the number of times label categories co-occur, represented as a matrix $M \in \mathbb{R}^{K \times K}$, where K is the total number of pathological tissue categories and each element $M_{ij}$ denotes the number of times category $L_j$ appears simultaneously with category $L_i$ in a sample image;
S222, constructing a conditional probability matrix from the co-occurrence matrix M as $P_{ij} = P(L_j \mid L_i) = M_{ij} / N_i$, where $N_i$ is the total number of occurrences of label category $L_i$ in the training set, so that $P_{ij}$ is the conditional probability that $L_j$ appears in a sample image given that label $L_i$ is present;
S223, binarizing the conditional probabilities $P_{ij}$ with the hyper-parameter threshold $\tau$ to filter noise in the label statistics and obtain a co-occurrence matrix closer to the real distribution:

$A_{ij} = \begin{cases} 1, & P_{ij} \geq \tau \\ 0, & P_{ij} < \tau \end{cases}$

where $A_{ij}$ is the binarized co-occurrence matrix;
S224, weighting the binarized co-occurrence matrix $A_{ij}$ according to the degree p to which surrounding nodes are considered, to counter the over-smoothing during training that would leave the label co-occurrence information insufficiently distinct:

$A'_{ij} = \begin{cases} \dfrac{p}{\sum_{k=1, k \neq i}^{K} A_{ik}} \, A_{ij}, & i \neq j \\ 1 - p, & i = j \end{cases}$

where p is a hyper-parameter: when p is 1 the graph node features consider only the surrounding nodes, and when p is 0 only the node's own features; $A'_{ij}$ is the weighted co-occurrence matrix;
S225, with the weighted co-occurrence matrix $A'$ as the adjacency matrix, performing the graph convolution calculation:

$H^{l+1} = \sigma(\hat{A} H^{l} W^{l})$

where $H^{l} \in \mathbb{R}^{K \times D_l}$ is the input feature of the graph convolutional neural network, or the output of the previous graph convolution layer, and $\sigma$ denotes any activation function; $W^{l} \in \mathbb{R}^{D_l \times D_{l+1}}$ is the learnable weight matrix of each network layer; $D_l$ is the dimension used for the weight matrix, typically 1024, and $D_{l+1}$, the desired dimension of the next layer, is generally the same as $D_l$, except that in the last network layer $D_{l+1}$ equals the number of channels C of the feature map; $\hat{A}$ is the weighted co-occurrence matrix $A'$ regularized for graph convolution:

$\hat{A} = \tilde{D}^{-1/2} (A' + I) \tilde{D}^{-1/2}$

where I is the identity matrix and $\tilde{D}$ is the degree matrix:

$\tilde{D}_{ii} = \sum_{j} (A' + I)_{ij}$

This yields the computation performed by the graph convolutional neural network. The network takes the one-hot encoding of the pathological tissue labels $Z \in \mathbb{R}^{K \times K}$ and the co-occurrence matrix $A'$ as input. $G_{output} \in \mathbb{R}^{K \times C}$, the output of the graph convolutional neural network module, has an output-layer dimension corresponding to the feature dimension in the convolutional neural network.
Further, the step S23 specifically includes:
multiplying the image features $f \in \mathbb{R}^{C}$ extracted by the convolutional neural network module with the label features $G_{output} \in \mathbb{R}^{K \times C}$ extracted by the graph convolutional neural network module to obtain the classification prediction score values, which are normalized with the Sigmoid function:

$\hat{y} = \sigma(G_{output} \, f)$

where $\sigma(\cdot)$ denotes the Sigmoid function.
Further, the step S24 specifically includes:
training the feature-fused output into classification prediction scores through a loss function, which is a weighted binary cross-entropy loss, expressed as:

$\mathcal{L} = -\sum_{i=1}^{K} w_i \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right]$

where $y_i \in \{0, 1\}$ indicates whether label $y_i$ is present in the sample image, the label y denoting the ground truth and $\hat{y}_i$ the prediction; $w_i = N / n_i$ is the weight of each class, where N is the total number of training samples and $n_i$ is the number of occurrences of the particular class in the training set.
The invention performs well on hierarchically classified pathological tissue datasets. For an embodiment on the hierarchical pathological tissue dataset ADP, a pathological tissue recognition model based on a graph neural network was constructed and the model training flow of Fig. 1 was implemented; the model network architecture is shown in Fig. 2. The model was trained on a GPU using the TensorFlow framework with a stochastic gradient descent optimizer for 80 epochs, with a momentum parameter of 0.9, a weight decay coefficient of 0.0005 and a batch size of 32, yielding the final model.
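The stated hyper-parameters map onto a standard optimizer configuration. The embodiment reports TensorFlow, but the configuration is shown in PyTorch below for consistency with the earlier sketches; the learning rate is an assumption, since the text does not give one, and the model here is a stand-in:

```python
import torch

model = torch.nn.Linear(512, 10)   # stand-in for the full CNN + GCN recognition network

EPOCHS, BATCH_SIZE = 80, 32        # as stated in the embodiment
optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.01,               # assumed; not given in the text
                            momentum=0.9,          # as stated
                            weight_decay=0.0005)   # as stated
```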
The improved model of the invention was compared with other classification recognition models trained under the same parameter conditions. The experimental results are shown in Table 1; VGG16, ResNet18 and Inception-V3 were used as comparison models in the experiment.
Table 1 ADP dataset experimental comparison
[Table 1 is provided as an image in the original document; its numerical values are not recoverable here.]
In Table 1, bold indicates that the classification performance index is better than the comparison models, and the number following the GCN+ResNet model denotes the value of the hyper-parameter threshold τ used in the graph neural network part of the model. The table shows that adding the graph neural network improves the classification performance of the VGG16, ResNet18 and Inception-V3 classification networks, including classification sensitivity (recall) TPR, specificity TNR, accuracy ACC and F1 score.
The test results show that, for ordinary convolutional neural networks, the graph neural network enhancement improves performance indices such as recognition accuracy and F1 score, and the improvement has a certain universality for existing classification methods.
Finally, it is noted that the above embodiments are intended only to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that modifications and equivalent substitutions may be made without departing from the spirit and scope of the technical solution, all of which are intended to be covered by the claims of the present invention.

Claims (7)

1. A digital pathological tissue image recognition method based on a graph convolutional neural network, characterized by comprising the following steps:
s1: obtaining pathological tissue slice images and preprocessing;
s2: constructing a digital pathological tissue image recognition network model comprising a convolutional neural network module, a graph convolutional neural network module and a feature fusion module, and carrying out model training by adopting the preprocessed digital pathological image;
s3: and correspondingly identifying category attributes of pathological tissues in the digital pathological images by using the trained digital pathological image identification model.
2. The digital pathological tissue image recognition method based on the graph convolutional neural network according to claim 1, characterized in that the step S1 specifically comprises the following steps:
S11: collecting pathological sections, enhancing pathological tissue contrast with a common pathological staining method, and scanning the glass slides with a digital microscope to obtain whole-slide digital pathological image data;
S12: cutting the whole-slide digital pathological image data into smaller images of a size suitable for computer processing, and manually annotating the pathological tissue types contained in each image;
S13: preprocessing the annotated digital pathological image data by normalization, scaling, padding, random cropping, and horizontal or vertical flipping.
3. The digital pathological tissue image recognition method based on the graph convolutional neural network according to claim 2, characterized in that model training with the preprocessed digital pathology image data in step S2 specifically comprises the following steps:
S21: extracting image features from the preprocessed digital pathological image data with a convolutional neural network;
S22: modeling, with a graph convolutional neural network, the dependence and co-occurrence relations among the pathological tissue categories counted from the pathological image data;
S23: fusing, with a feature fusion module, the image features extracted by the convolutional neural network and the label information extracted by the graph convolutional neural network, and predicting classification scores;
S24: training the fused output into classification prediction scores through a loss function.
4. The digital pathological tissue image recognition method based on the graph convolutional neural network according to claim 3, characterized in that the step S21 specifically comprises the following steps:
S211: constructing a convolutional neural network to extract tensor image features $F \in \mathbb{R}^{H \times W \times C}$ from the preprocessed digital pathological image data, where $\mathbb{R}$ denotes real space, H and W are the height and width of the feature map, and C is the number of feature-map channels;
S212: globally pooling the extracted image features through a convolution layer to obtain the converted feature vector $f \in \mathbb{R}^{C}$.
5. The digital pathological tissue image recognition method based on the graph convolutional neural network according to claim 4, characterized in that the step S22 specifically comprises the following steps:
S221: for the training-set labels of the digital pathology images, counting the number of times label categories co-occur, represented as a matrix $M \in \mathbb{R}^{K \times K}$, where K is the total number of pathological tissue categories and each element $M_{ij}$ denotes the number of times category $L_j$ appears simultaneously with category $L_i$ in a sample image;
S222: constructing a conditional probability matrix from the co-occurrence matrix M as $P_{ij} = P(L_j \mid L_i) = M_{ij} / N_i$, where $N_i$ is the total number of occurrences of label category $L_i$ in the training set, so that $P_{ij}$ is the conditional probability that $L_j$ appears in a sample image given that label $L_i$ is present;
S223: binarizing the conditional probabilities $P_{ij}$ with the hyper-parameter threshold $\tau$ to filter noise in the label statistics and obtain a co-occurrence matrix closer to the real distribution:
$A_{ij} = \begin{cases} 1, & P_{ij} \geq \tau \\ 0, & P_{ij} < \tau \end{cases}$
where $A_{ij}$ is the binarized co-occurrence matrix;
S224: weighting the binarized co-occurrence matrix $A_{ij}$ according to the degree p to which surrounding nodes are considered, to counter the over-smoothing during training that would leave the label co-occurrence information insufficiently distinct:
$A'_{ij} = \begin{cases} \dfrac{p}{\sum_{k=1, k \neq i}^{K} A_{ik}} \, A_{ij}, & i \neq j \\ 1 - p, & i = j \end{cases}$
where p is a hyper-parameter: when p is 1 the graph node features consider only the surrounding nodes, and when p is 0 only the node's own features; $A'_{ij}$ is the weighted co-occurrence matrix;
S225: with the weighted co-occurrence matrix $A'$ as the adjacency matrix, performing graph convolution calculations to form each layer of the graph convolutional neural network:
$H^{l+1} = \sigma(\hat{A} H^{l} W^{l})$
where $H^{l} \in \mathbb{R}^{K \times D_l}$ is the input feature of the graph convolutional neural network, or the output of the previous graph convolution layer, and $\sigma$ denotes any activation function; $W^{l} \in \mathbb{R}^{D_l \times D_{l+1}}$ is the learnable weight matrix of each network layer; $D_l$ is the dimension used for the weight matrix, typically 1024, and $D_{l+1}$ is the desired dimension of the next layer; in the last network layer, $D_{l+1}$ equals the number of channels C of the feature map; $\hat{A}$ is the weighted co-occurrence matrix $A'$ regularized for graph convolution:
$\hat{A} = \tilde{D}^{-1/2} (A' + I) \tilde{D}^{-1/2}$
where I is the identity matrix and $\tilde{D}$ is the degree matrix:
$\tilde{D}_{ii} = \sum_{j} (A' + I)_{ij}$
This yields the computation of the graph convolutional neural network. The network takes the one-hot encoding of the pathological tissue labels $Z \in \mathbb{R}^{K \times K}$ and the co-occurrence matrix $A'$ as input; $G_{output} \in \mathbb{R}^{K \times C}$, the output of the graph convolutional neural network module, has an output-layer dimension corresponding to the feature dimension in the convolutional neural network.
6. The digital pathological tissue image recognition method based on the graph convolutional neural network according to claim 5, characterized in that the step S23 specifically comprises:
multiplying the image features $f \in \mathbb{R}^{C}$ extracted by the convolutional neural network module with the label features $G_{output} \in \mathbb{R}^{K \times C}$ extracted by the graph convolutional neural network module to obtain the classification prediction score values, which are normalized with the Sigmoid function:
$\hat{y} = \sigma(G_{output} \, f)$
where $\sigma(\cdot)$ denotes the Sigmoid function.
7. The digital pathological tissue image recognition method based on the graph convolutional neural network according to claim 6, characterized in that the step S24 specifically comprises:
training the feature-fused output into classification prediction scores through a loss function, which is a weighted binary cross-entropy loss, expressed as:
$\mathcal{L} = -\sum_{i=1}^{K} w_i \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right]$
where $y_i \in \{0, 1\}$ indicates whether label $y_i$ is present in the sample image, the label y denoting the ground truth and $\hat{y}_i$ the prediction; $w_i = N / n_i$ is the weight of each class, where N is the total number of training samples and $n_i$ is the number of occurrences of the particular class in the training set.
CN202310073714.4A 2023-02-07 2023-02-07 Digital pathological tissue image recognition method based on graph convolution neural network Pending CN116012353A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310073714.4A CN116012353A (en) 2023-02-07 2023-02-07 Digital pathological tissue image recognition method based on graph convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310073714.4A CN116012353A (en) 2023-02-07 2023-02-07 Digital pathological tissue image recognition method based on graph convolution neural network

Publications (1)

Publication Number Publication Date
CN116012353A (en)

Family

ID=86021080

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310073714.4A Pending CN116012353A (en) 2023-02-07 2023-02-07 Digital pathological tissue image recognition method based on graph convolution neural network

Country Status (1)

Country Link
CN (1) CN116012353A (en)


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116682576A (en) * 2023-08-02 2023-09-01 浙江大学 Liver cancer pathological prognosis system and device based on double-layer graph convolutional neural network
CN116682576B (en) * 2023-08-02 2023-12-19 浙江大学 Liver cancer pathological prognosis system and device based on double-layer graph convolutional neural network
CN117036811A (en) * 2023-08-14 2023-11-10 桂林电子科技大学 Intelligent pathological image classification system and method based on double-branch fusion network
CN117115117A (en) * 2023-08-31 2023-11-24 南京诺源医疗器械有限公司 Pathological image recognition method based on small sample, electronic equipment and storage medium
CN117115117B (en) * 2023-08-31 2024-02-09 南京诺源医疗器械有限公司 Pathological image recognition method based on small sample, electronic equipment and storage medium
CN116883397A (en) * 2023-09-06 2023-10-13 佳木斯大学 Automatic lean method and system applied to anatomic pathology
CN116883397B (en) * 2023-09-06 2023-12-08 佳木斯大学 Automatic lean method and system applied to anatomic pathology


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination