CN114864076A - Multi-modal breast cancer classification training method and system based on graph attention network - Google Patents

Multi-modal breast cancer classification training method and system based on graph attention network

Info

Publication number
CN114864076A
CN114864076A
Authority
CN
China
Prior art keywords
features
patient
pathological
pathological image
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210489883.1A
Other languages
Chinese (zh)
Inventor
章永龙
宋明宇
李斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangzhou University
Original Assignee
Yangzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangzhou University
Priority to CN202210489883.1A
Publication of CN114864076A
Legal status: Withdrawn

Classifications

    • G16H 50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
    • G06F 18/2415: Classification techniques relating to the classification model, based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F 18/243: Classification techniques relating to the number of classes
    • G06F 18/253: Fusion techniques of extracted features
    • G06F 40/279: Recognition of textual entities
    • G06N 3/045: Combinations of networks
    • G06N 3/047: Probabilistic or stochastic networks
    • G06N 3/048: Activation functions
    • G06T 7/0012: Biomedical image inspection
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G16H 10/60: ICT for patient-specific data, e.g. for electronic patient records
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30068: Mammography; Breast
    • G06T 2207/30096: Tumor; Lesion
    • G06V 2201/03: Recognition of patterns in medical or anatomical images


Abstract

The invention discloses a multi-modal breast cancer classification training method and system based on a graph attention network. The method comprises the following steps: first, pathological features are extracted from the electronic medical record and processed into a medical record text, from which text features are obtained with a pre-trained language model; meanwhile, high-order features are extracted from the patient's set of pathological images with a graph attention network; the resulting image, text, and pathological features are then fused by a multi-modal adaptive gating unit to obtain the patient's multi-modal fusion feature; finally, the fused multi-modal feature is input into a multi-layer perceptron for classification prediction, and the model is trained by defining a cross-entropy loss function. The method fuses features of the three modalities of image, text, and pathology to classify breast cancer; the proposed network structure significantly outperforms single-modality methods and improves breast cancer classification accuracy.

Description

Multi-modal breast cancer classification training method and system based on graph attention network
Technical Field
The invention belongs to the field of deep learning and disease classification, and particularly relates to a multi-modal breast cancer classification training method and system based on a graph attention network.
Background
Breast cancer is one of the most serious diseases threatening human life and health, and a medical problem of worldwide concern. According to data published in 2020 by the International Agency for Research on Cancer (IARC) under the World Health Organization (WHO), new breast cancer cases reached 2.26 million, exceeding the 2.2 million new cases of lung cancer; breast cancer thereby replaced lung cancer as the most common cancer in the world. Breast cancer can develop in both men and women, but about 98% of breast cancer patients are women. Its incidence ranks among the highest worldwide, and the yearly rise in incidence and the trend toward younger patients have severely affected women's health. In clinical medicine, compared with X-ray, magnetic resonance, and other imaging, pathological images remain the gold standard for breast cancer diagnosis. Early benign/malignant classification of breast cancer tumor pathological images is of great significance for clinicians formulating personalized treatment plans. Traditional manual classification of breast cancer pathological images relies on hand-crafted features, which are then fed to classifiers such as support vector machines and random forests. This approach demands extensive professional knowledge, makes feature extraction time-consuming, and makes high-quality features difficult to obtain.
At present, owing to the complexity and workload of manual classification of breast cancer pathological images, the task is time-consuming and labor-intensive, the result is easily influenced by the pathologist's subjective judgment, and the resulting classification generalizes poorly in practical application. In recent years, deep learning methods have shown growing advantages in various medical image analysis tasks. Compared with manual pathological image classification, they reduce the demand for professional knowledge, can continuously learn image features and classify pathological images as benign or malignant, improve diagnostic efficiency, and provide doctors with more objective and accurate diagnostic results. However, these methods still have shortcomings: (1) a patient may have multiple pathological images of various breast regions, with interactions between the images; using a single pathological image discards these interactions. (2) Existing research mostly feeds pathological images to a convolutional neural network, but benign/malignant classification from single-modality image data alone hardly meets the requirements of clinical diagnosis. (3) Data of different modalities are correlated, and simple fusion methods cannot fully exploit the complementarity between modalities.
Disclosure of Invention
The purpose of the invention is as follows: in view of the problems in the prior art, an object of the present invention is to provide a multi-modal breast cancer classification training method and system based on a graph attention network that comprehensively considers a patient's pathological features, pathological text description features, and the features of multiple pathological images, and adaptively fuses them while accounting for the relationships among the modal features, so as to improve the accuracy of breast cancer classification.
The technical scheme is as follows: in order to achieve the purpose, the invention adopts the following technical scheme: a breast cancer classification training method based on a graph attention network comprises the following steps:
step 1, extracting representative pathological features from an electronic medical record EMR of a patient, digitizing the features, and performing text description to obtain a medical record text;
step 2, extracting features from each single pathological image of the patient to obtain pathological image node-level features, forming the patient's pathological image set into a fully connected graph with the node-level features as initial features, and acquiring high-order features of the pathological image nodes with a graph attention network; the initial features and the high-order node features are each average pooled and then concatenated to obtain the patient's final pathological image features;
step 3, extracting the patient's diagnosis text features from the medical record text formed from the EMR by using a pre-trained language model;
step 4, fusing the patient's pathological image features, text features, and pathological features through a multi-modal adaptive gating unit; the adaptive gating unit fuses the three modal features with an attention gate, and the fused features are weighted and summed with the pathological image features to obtain the final multi-modal fusion features;
and step 5, performing classification prediction on the multi-modal fusion features through a multi-layer perceptron, and training the model by defining a cross-entropy loss function.
Further, step 1 extracts representative pathological features from the patient's electronic medical record,
including age, sex, disease course type, individual tumor history, pectoral muscle adhesion, family tumor history, orange-peel appearance, previous treatment, breast deformation, neoadjuvant chemotherapy, dimple sign, skin redness and swelling, skin ulcers, tumors, axillary lymph node enlargement, nipple changes, nipple discharge, lymph node enlargement, tumor location, tenderness, number of tumors, tumor size, tumor texture, tumor boundaries, smooth surface, tumor morphology, mobility, envelope, skin adhesion, and diagnosis. Each feature is first expressed numerically; the extracted features are then described in text according to clinical medical rules to obtain the patient's medical record text.
Further, the specific process of acquiring the pathological image features of the patient in step 2 includes:
step 2-1, suppose a breast cancer patient has k pathological images; the pathological image set is denoted X = {x_i | i = 1, 2, 3, ..., k}, x_i ∈ R^P, where P is the dimension of each image. The pathological image node-level features V = {v_i | i = 1, 2, 3, ..., k}, v_i ∈ R^F, are obtained through a DenseNet model, where F is the dimension of each image's node-level feature;
step 2-2, forming a fully connected graph of the pathological image set of the patient to acquire the correlation between the pathological images; the vertex in the graph is a pathological image, and the initial characteristic of the pathological image is represented as the node level characteristic obtained in the step 2-1;
step 2-3, extracting high-order features of the patient's images with a graph attention network (GAT): the node-level features V = {v_i | i = 1, 2, 3, ..., k}, v_i ∈ R^F, serve as the input of the GAT, and the final high-order node features V′ = {v′_i | i = 1, 2, 3, ..., k}, v′_i ∈ R^{F′}, are obtained through a multi-layer GAT model; the detailed process is as follows:
First, the attention coefficient e_ij of node j's feature with respect to node i is computed:

e_ij = LeakyReLU(a^T [W v_i || W v_j])

where || is the concatenation operation, a ∈ R^{2F′} is a parameterized weight vector implemented by a fully connected layer, LeakyReLU is a nonlinear activation function, and W is a weight matrix;
Then the coefficients e_ij are normalized with a Softmax function to obtain the attention weight of node j with respect to node i:

α_ij = exp(e_ij) / Σ_{m∈N_i} exp(e_im)
where N_i is the neighborhood of node i in the graph. Finally, the normalized attention coefficients α_ij are used to compute a weighted sum of the associated features, giving the final output feature of each node:

v′_i = ELU( Σ_{j∈N_i} α_ij W_1 v_j )
where ELU (exponential linear unit) is a nonlinear activation function and W_1 is a weight matrix. The final graph-level feature g′ ∈ R^{F′} is obtained by average pooling the features in the set V′:

g′ = AvgPool(V′)

The patient's node-level pathological image features are likewise average pooled, v̄ = AvgPool(V), and concatenated with the graph-level feature g′ to obtain the patient's final pathological image feature G ∈ R^{F′+F}:

G = [v̄ || g′]
Further, the specific process of fusing the features of the three modalities through the multi-modal adaptive gating unit in step 4 includes:
step 4-1, first, two gating weights are computed from the patient's pathological image features G, diagnosis text features T, and pathological features C:

g_t = ReLU(W_gt [G || T] + b_t)
g_c = ReLU(W_gc [G || C] + b_c)

where W_gt and W_gc are weight matrices, b_t and b_c are bias vectors, || denotes the concatenation operation, and ReLU is a nonlinear activation function;
step 4-2, a vector H is obtained from the two weights, the diagnosis text features T, and the pathological features C:

H = g_t · (W_t T) + g_c · (W_c C) + b_H

where W_t and W_c are weight matrices and b_H is a bias vector;
step 4-3, finally, the patient's final multi-modal fusion feature M is obtained as a weighted sum of the pathological image feature G and the vector H:

M = G + αH
α = min(β · ||G||_2 / ||H||_2, 1)

where β is a hyperparameter randomly initialized by the model, and ||G||_2 and ||H||_2 denote the L_2 norms of G and H, respectively.
Further, the classification prediction in step 5 specifically includes:
step 5-1, predicting the benign and malignant categories of breast cancer with a Softmax layer, namely:

P̂ = Softmax(Linear(M))

where P̂ is the predicted probability distribution over the two classes and Linear denotes the output of the fully connected layer;
step 5-2, calculating the loss function of the benign/malignant binary classification task with cross entropy:

L = -(1/t) Σ_{n=1}^{t} [P_n · log P̂_n + (1 - P_n) · log(1 - P̂_n)]

where t is the total number of patients in the data set, and P_n and P̂_n denote the actual and predicted values for the nth patient, respectively.
A graph attention network based multimodal breast cancer classification system comprising:
the preprocessing module, configured to extract representative pathological features from the patient's electronic medical record (EMR), digitize the features, and produce a text description to obtain a medical record text;
the pathological image feature generation module, configured to extract features from each single pathological image of the patient to obtain pathological image node-level features, form the patient's pathological image set into a fully connected graph with the node-level features as initial features, and acquire high-order features of the pathological image nodes with a graph attention network; the initial features and the high-order node features are each average pooled and then concatenated to obtain the patient's final pathological image features;
the text feature generation module, configured to extract the patient's diagnosis text features from the medical record text formed from the EMR with a pre-trained language model;
the multi-modal feature fusion module, configured to fuse the patient's pathological image features, text features, and pathological features through the multi-modal adaptive gating unit; the adaptive gating unit fuses the three modal features with an attention gate, and the fused features are weighted and summed with the pathological image features as the final multi-modal fusion features;
the training module, configured to perform classification prediction of breast cancer on the multi-modal fusion features through a multi-layer perceptron and train the model by defining a cross-entropy loss function; and
the prediction module, configured to input the patient's pathological image set, medical record text, and pathological features into the trained model to obtain a breast cancer classification prediction result.
A computer system comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program when loaded into the processor implementing the steps of the graph attention network based multimodal breast cancer classification training method.
A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the graph attention network based multimodal breast cancer classification training method.
Beneficial effects: compared with the prior art, the invention has the following notable advantages: 1) the proposed model fuses features of the three modalities of image, text, and pathology to classify breast cancer, and the network structure outperforms single-modality methods; 2) the invention adopts a graph attention network (GAT), forms a graph with the patient's pathological images as nodes, and combines node-level and graph-level pathological image features, thereby improving classification performance; 3) the invention provides a multi-modal adaptive gate fusion method that combines the features of the three modalities to obtain multi-modal features dominated by the pathological image features, with the text and pathological features adaptively superimposed; 4) experiments show that the invention obtains more accurate breast cancer classification results, with a classification accuracy of 93.62%.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present invention;
FIG. 2 is a screenshot of the main numerical features of patient S0000004;
FIG. 3 is a screenshot of the main diagnosis description text of patient S0000004;
fig. 4 is a schematic diagram of a multi-modal adaptive door according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings. All other embodiments derived by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
With reference to FIG. 1, which is a schematic flow chart of a first embodiment, the present invention provides a breast cancer classification training method based on a graph attention network, mainly comprising the following steps:
step 1, extracting representative pathological features from the patient's electronic medical record (EMR), digitizing the features, and processing the EMR into a section of text description as the medical record text;
step 2, extracting features from each single pathological image of the patient to obtain pathological image node-level features, forming the patient's pathological image set into a fully connected graph with the node-level features as initial features, and acquiring high-order features of the pathological image nodes with a graph attention network; the initial features and the high-order node features are each average pooled and then concatenated to obtain the patient's final pathological image features;
step 3, extracting the diagnosis text characteristics of the patient from the medical record text formed by EMR by using a pre-training language model;
step 4, fusing the patient's pathological image features, text features, and pathological features through a multi-modal adaptive gating unit; the adaptive gating unit fuses the three modal features with an attention gate, and the fused features are weighted and summed with the pathological image features to obtain the final multi-modal fusion features.
And 5, classifying and predicting the fused multi-modal characteristics through a multilayer perceptron, and training a model by defining a cross entropy loss function.
Further, the process of step 1 is as follows:
step 1-1, 29 representative features are extracted from the patient's Electronic Medical Record (EMR). Specifically, the 29 features include age, sex, disease course type, individual tumor history, pectoral muscle adhesion, family tumor history, orange-peel appearance, previous treatment, breast deformation, neoadjuvant chemotherapy, dimple sign, skin redness and swelling, skin ulcers, tumors, axillary lymph node enlargement, nipple changes, nipple discharge, lymph node enlargement, tumor location, tenderness, number of tumors, tumor size, tumor texture, tumor boundaries, smooth surface, tumor morphology, mobility, envelope, skin adhesion, and diagnosis. According to the actual situation, the data are quantized into specific numerical values. These features are closely related to the clinical medical theory of breast cancer diagnosis, and the structured data are used to describe the patient's condition. Patient S0000004 is selected as an example; the main numerical features are shown in FIG. 2.
Step 1-2, the 29 features are described in text according to clinical medical rules to obtain the patient's medical record text. The main diagnosis description of patient S0000004 is shown in FIG. 3.
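As an illustration of the numerical encoding in step 1-1, the sketch below maps a few hypothetical EMR fields to numbers. The field names, value mappings, and the example record are assumptions for demonstration, not the patent's actual encoding scheme.

```python
# Illustrative sketch: encode a handful of hypothetical EMR fields numerically.
# The real method encodes all 29 features; these fields and mappings are assumed.

def encode_emr(record: dict) -> list[float]:
    """Map categorical/boolean EMR fields to numbers; pass age through as-is."""
    yes_no = {"no": 0.0, "yes": 1.0}
    texture = {"soft": 0.0, "medium": 1.0, "hard": 2.0}
    return [
        float(record["age"]),
        yes_no[record["family_tumor_history"]],
        yes_no[record["nipple_discharge"]],
        texture[record["tumor_texture"]],
        float(record["tumor_size_cm"]),
    ]

record = {
    "age": 52,
    "family_tumor_history": "yes",
    "nipple_discharge": "no",
    "tumor_texture": "hard",
    "tumor_size_cm": 2.3,
}
features = encode_emr(record)  # numeric vector, one entry per EMR field
```

In the full method, this vector would be extended to all 29 features and paired with the rule-based text description of step 1-2.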
Further, the step 2 of obtaining the high-order pathological image features includes:
step 2-1, suppose a breast cancer patient has k pathological images; the pathological image set is denoted X = {x_i | i = 1, 2, 3, ..., k}, x_i ∈ R^P, where P is the dimension of each image. The pathological image node-level features V = {v_i | i = 1, 2, 3, ..., k}, v_i ∈ R^F, are obtained through a DenseNet model, where F is the dimension of each image's node-level feature;
step 2-2, forming a fully connected graph of the pathological image set of the patient to acquire the correlation between the pathological images; the vertex in the graph is a pathological image, and the initial characteristic of the pathological image is represented as the node level characteristic obtained in the step 2-1;
step 2-3, extracting high-order features of the patient's images with a graph attention network (GAT): the node-level features V = {v_i | i = 1, 2, 3, ..., k}, v_i ∈ R^F, serve as the input of the GAT, and the final high-order node features V′ = {v′_i | i = 1, 2, 3, ..., k}, v′_i ∈ R^{F′}, are obtained through a multi-layer GAT model; the detailed process is as follows:
First, the attention coefficient e_ij of node j's feature with respect to node i is computed:

e_ij = LeakyReLU(a^T [W v_i || W v_j])

where || is the concatenation operation, a ∈ R^{2F′} is a parameterized weight vector implemented by a fully connected layer with a LeakyReLU nonlinearity; E ∈ R^{k×k} is the attention coefficient matrix over the patient's k pathological images; and W is a weight matrix.
Then, the coefficient e is matched with a Softmax function ij And (3) carrying out normalization to obtain the attention weight of the node j to the node i:
Figure BDA0003631177880000071
where N_i is the neighborhood of node i in the graph. Finally, the normalized attention coefficients α_ij are used to compute a weighted sum of the associated features, giving the final output feature of each node:

v′_i = ELU( Σ_{j∈N_i} α_ij W_1 v_j )
where ELU (exponential linear unit) is a nonlinear activation function and W_1 is a weight matrix. The final graph-level feature g′ ∈ R^{F′} is obtained by average pooling the features in the set V′:

g′ = AvgPool(V′)

The patient's node-level pathological image features are likewise average pooled, v̄ = AvgPool(V), and concatenated with the graph-level feature g′ to obtain the patient's final pathological image feature G ∈ R^{F′+F}:

G = [v̄ || g′]
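The feature-extraction pipeline of steps 2-1 to 2-3 can be sketched in numpy under assumed dimensions. Random vectors stand in for the DenseNet outputs and the learned parameters, and for simplicity the aggregation weight matrix W_1 is folded into the shared matrix W; this is a sketch of the computation pattern, not the trained model.

```python
import numpy as np

# One GAT-style attention layer over a fully connected graph of k image nodes,
# followed by the node-level / graph-level average pooling and concatenation.
rng = np.random.default_rng(0)
k, F, Fp = 4, 8, 6                 # images per patient, input dim F, output dim F'

V = rng.normal(size=(k, F))        # node-level features (DenseNet stand-in)
W = rng.normal(size=(Fp, F))       # shared weight matrix
a = rng.normal(size=(2 * Fp,))     # attention vector a in R^{2F'}

WV = V @ W.T                       # (k, F'): W v_i for every node
# e_ij = LeakyReLU(a^T [W v_i || W v_j]) for every ordered pair (i, j)
pairs = np.concatenate(
    [np.repeat(WV, k, axis=0), np.tile(WV, (k, 1))], axis=1
).reshape(k, k, 2 * Fp)
e = pairs @ a
e = np.where(e > 0, e, 0.2 * e)    # LeakyReLU with slope 0.2 (assumed)

# Softmax over each node's neighborhood (all nodes, since the graph is complete)
alpha = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)

Vp = alpha @ WV                    # weighted sum; W_1 folded into W here
Vp = np.where(Vp > 0, Vp, np.exp(Vp) - 1)  # ELU activation

# G = [avg-pooled node-level features || avg-pooled graph-level features]
G = np.concatenate([V.mean(axis=0), Vp.mean(axis=0)])  # G in R^{F + F'}
```

A multi-layer GAT would simply stack this attention step, feeding each layer's `Vp` into the next.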
Further, the text feature acquisition in step 3 includes:
step 3-1, using the Bert model, the patient's diagnosis text description I obtained in step 1-2 is taken as input to obtain the text feature T ∈ R^{F_1} of the patient's medical record, where F_1 is the dimension of the medical record text after Bert.
In addition, the 29 representative pathological features selected from the patient's EMR are defined as C, with dimension 29 × 1.
Further, in step 4, a schematic diagram of the multi-modal adaptive gate fusion is shown in FIG. 4; the specific process includes:
step 4-1, first, two gating weights are computed from the patient's pathological image features G, diagnosis text features T, and pathological features C:

g_t = ReLU(W_gt [G || T] + b_t)
g_c = ReLU(W_gc [G || C] + b_c)

where W_gt and W_gc are the weight matrices of the text and pathological modalities, b_t and b_c are bias vectors, || denotes the concatenation operation, and ReLU is a nonlinear activation function;
Step 4-2: a vector H is obtained from the two weights, the diagnosis text feature T and the pathological feature C:

H = g_t · (W_t T) + g_c · (W_c C) + b_H

where W_t and W_c are the weight matrices for the text and pathology information, respectively, and b_H is a bias vector;
Step 4-3: finally, the pathological image feature G and the vector H are weighted and summed to obtain the final multi-modal fusion feature M of the patient:

M = G + αH
α = min( β · ||G||_2 / ||H||_2 , 1 )

where β is a hyperparameter randomly initialized by the model, and ||G||_2 and ||H||_2 denote the L_2 norms of G and H, respectively.
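A minimal sketch of the adaptive gate of steps 4-1 to 4-3, assuming the weight matrices project T and C into the same space as G and taking the scaling as α = min(β·||G||_2/||H||_2, 1), reconstructed from the description of β and the L_2 norms; all shapes, β = 0.5, and the random parameters are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def adaptive_gate_fusion(G, T, C, params, beta=0.5, eps=1e-6):
    """Sketch of the multi-modal adaptive gate of step 4. Parameter shapes
    are assumptions chosen so H lives in the same space as G."""
    W_gt, b_t, W_gc, b_c, W_t, W_c, b_H = params
    g_t = relu(W_gt @ np.concatenate([G, T]) + b_t)   # gate for the text modality
    g_c = relu(W_gc @ np.concatenate([G, C]) + b_c)   # gate for the pathology modality
    H = g_t * (W_t @ T) + g_c * (W_c @ C) + b_H       # displacement vector
    # alpha = min(beta * ||G||_2 / ||H||_2, 1) bounds the shift relative to G
    alpha = min(beta * np.linalg.norm(G) / (np.linalg.norm(H) + eps), 1.0)
    return G + alpha * H

rng = np.random.default_rng(1)
dG, dT, dC = 13, 32, 29                 # illustrative feature dimensions
params = (rng.normal(size=(dG, dG + dT)), rng.normal(size=dG),
          rng.normal(size=(dG, dG + dC)), rng.normal(size=dG),
          rng.normal(size=(dG, dT)), rng.normal(size=(dG, dC)),
          rng.normal(size=dG))
M = adaptive_gate_fusion(rng.normal(size=dG), rng.normal(size=dT),
                         rng.normal(size=dC), params)
```

The design choice here is that the image modality G is treated as the anchor representation, and the text and pathology modalities only shift it; the norm-ratio cap on α keeps the shift from overwhelming G.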
Further, the classification prediction in step 5 specifically includes:
Step 5-1: the benign and malignant categories of breast cancer are predicted using a Softmax layer:

P̂ = Softmax(Linear(M))

where M is the multi-modal fusion feature and Linear denotes the output of the fully connected layer.
Step 5-2: the loss function of the benign/malignant binary classification task is calculated with cross entropy:

L = −(1/t) Σ_{n=1}^{t} [ P_n log P̂_n + (1−P_n) log(1−P̂_n) ]

where t is the total number of patients in the data set, and P_n and P̂_n denote the actual and predicted values for the n-th patient, respectively.
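Steps 5-1 and 5-2 amount to a Softmax over a fully connected layer followed by binary cross entropy; a self-contained NumPy sketch, where the output dimension of 2 and all weights and label values are illustrative:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                      # numerical stability
    return np.exp(z) / np.exp(z).sum()

def predict(M, W_out, b_out):
    """Step 5-1: Softmax over a linear (fully connected) layer."""
    return softmax(W_out @ M + b_out)    # probabilities for (benign, malignant)

def binary_cross_entropy(y_true, p_pred):
    """Step 5-2: cross-entropy loss averaged over the t patients, with
    y_true the actual labels and p_pred the predicted malignant probabilities."""
    y = np.asarray(y_true, dtype=float)
    p = np.clip(np.asarray(p_pred, dtype=float), 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(2)
probs = predict(rng.normal(size=13), rng.normal(size=(2, 13)), rng.normal(size=2))
loss = binary_cross_entropy([1, 0, 1], [0.9, 0.2, 0.7])
```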
The invention provides a multi-modal breast cancer classification training method based on a graph attention network. In the pathological image processing stage it represents image features at multiple levels: by combining node-level and graph-level vectors of a breast cancer patient's pathological images it captures the fine-grained features of each pathological image while also modeling the interactions between images. Furthermore, a multi-modal adaptive gate fusion strategy is provided, whose core idea is to adjust the representation of one modality using a displacement vector obtained from the other modalities. Features of the three modalities (image, text and pathology) are fused for breast cancer classification, making the clinical application of automatic breast cancer classification algorithms possible.
The effects and advantages of the present invention are illustrated by the following experiments. The data set used contains data from 185 breast cancer patients, 82 benign and 103 malignant. Each patient contributed 2-97 pathological images. In total there are 3764 pathology images, each labeled as benign or malignant (1332 benign, 2432 malignant), all acquired with a Leica Aperio AT2 slide scanner. Besides the pathology images, each patient record also contains a diagnosis text description and a numerical description of the patient's condition. To systematically verify the validity of the proposed model, four variants were tested: (1) classification with text-based single-modal features only, with an accuracy of 74.47%; (2) classification with pathological image node-level features only, with an accuracy of 82.98%; (3) classification with image features obtained by splicing the average-pooled node-level and graph-level features of the patient's pathological images, with an accuracy of 87.23%; (4) classification with only the 29 representative structured features extracted from the EMR data, with an accuracy of 65.96%. All of these fall below the 93.62% classification accuracy of the proposed full model.
Based on the same inventive concept, the embodiment of the invention discloses a multi-modal breast cancer classification system based on a graph attention network, which comprises: a preprocessing module for extracting representative pathological features from the patient's EMR, digitizing the features and generating a text description to obtain a medical record text; a pathological image feature generation module for extracting features of each single pathological image of the patient to obtain node-level pathological image features, forming the patient's pathological image set into a fully connected graph, taking the node-level features as initial features, and acquiring high-order features of the pathological image nodes with a graph attention network, then average-pooling the initial and high-order node features separately and splicing them to obtain the patient's final pathological image features; a text feature generation module for extracting the patient's diagnosis text features from the medical record text formed from the EMR using a pretrained language model; a multi-modal feature fusion module for fusing the patient's pathological image features, text features and pathological features through a multi-modal adaptive gating unit, which fuses the three modal features with an attention gate and performs a weighted summation of the fused features and the pathological image features to obtain the final multi-modal fusion features; and a training module for performing breast cancer classification prediction on the multi-modal fusion features through a multilayer perceptron and training the model by defining a cross entropy loss function.
And the prediction module is used for inputting the pathological image set, the medical history text and the pathological features of the patient into the trained model to obtain a breast cancer classification prediction result.
For the specific working process of each module described above, reference may be made to the corresponding process in the foregoing method embodiment, which is not repeated here. The division into modules is only one logical functional division; in actual implementation there may be other divisions, for example, several modules may be combined or integrated into another system.
Based on the same inventive concept, the embodiment of the present invention discloses a computer system, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the computer program is loaded into the processor to implement the steps of the graph attention network-based multimodal breast cancer classification training method.
Based on the same inventive concept, the embodiment of the present invention discloses a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the graph attention network-based multimodal breast cancer classification training method.
The foregoing shows and describes the basic principles, principal steps and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A multi-modal breast cancer classification training method based on a graph attention network is characterized by comprising the following steps:
step 1, extracting representative pathological features from an electronic medical record EMR of a patient, digitizing the features, and performing text description to obtain a medical record text;
step 2, extracting features of each single pathological image of a patient to obtain node-level pathological image features, forming the patient's pathological image set into a fully connected graph, taking the node-level features as initial features, and acquiring high-order features of the pathological image nodes using a graph attention network; respectively average-pooling the initial features and the high-order node features and then splicing them to obtain the patient's final pathological image features;
step 3, extracting the diagnosis text characteristics of the patient from the medical record text formed by EMR by using a pre-training language model;
step 4, fusing pathological image features, text features and pathological features of the patient through a multi-mode self-adaptive gate control unit; the self-adaptive gate control unit fuses three modal characteristics by using an attention gate, and performs weighted summation on the fused characteristics and pathological image characteristics to serve as final multi-modal fusion characteristics;
and 5, classifying and predicting the multi-modal fusion characteristics through a multilayer perceptron, and training a model by defining a cross entropy loss function.
2. The graph attention network-based multimodal breast cancer classification training method according to claim 1, wherein the representative pathological features extracted from the electronic medical record of the patient in step 1 include age, sex, disease course type, personal tumor history, pectoral adhesion, family tumor history, orange peel appearance, previous treatment, breast deformation, neoadjuvant chemotherapy, dimple symptoms, skin redness, skin ulcer, tumor, axillary lymphadenectasis, nipple changes, nipple discharge, lymphadenectasis, tumor location, tenderness, tumor number, tumor size, tumor texture, tumor boundary, surface smoothness, tumor morphology, activity, envelope, skin adhesion and diagnosis; firstly, numerically expressing each feature; and performing text description on the extracted features according to clinical medical rules to obtain a medical record text of the patient.
3. The multi-modal breast cancer classification training method based on the graph attention network as claimed in claim 1, wherein the specific process of obtaining the pathological image features of the patient in the step 2 comprises:
step 2-1, suppose a breast cancer patient has k pathological images; the pathological image set is represented as X = {x_i | i = 1, 2, 3, ..., k}, x_i ∈ R^P, where P is the dimension of each image; node-level features of the pathological images are obtained through a DenseNet model: V = {v_i | i = 1, 2, 3, ..., k}, v_i ∈ R^F, where F is the dimension of each image's node-level feature;

step 2-2, the patient's pathological image set forms a fully connected graph to capture the correlation between pathological images; the vertices of the graph are the pathological images, whose initial features are the node-level features obtained in step 2-1;

step 2-3, high-order features of the patient's images are extracted using the graph attention network GAT; the node-level features V = {v_i | i = 1, 2, 3, ..., k}, v_i ∈ R^F serve as the input of GAT, and a multi-layer GAT model yields the final high-order node features V' = {v'_i | i = 1, 2, 3, ..., k}, v'_i ∈ R^{F'}, where F' is the dimension of the GAT output; the detailed process is as follows:
first, the attention coefficient e of the feature of the node j to the node i is calculated ij
e ij =LeakyReLU(a T [Wv i ||Wv j ])
Where | | | is the splicing operation, a T ∈R 2F′ Is a parameterized weight vector realized by a full connection layer, LeakyReLU is a nonlinear activation function, and W represents a weight matrix;
Then the coefficients e_ij are normalized with a Softmax function to obtain the attention weight of node j to node i:

α_ij = exp(e_ij) / Σ_{k∈N_i} exp(e_ik)

where N_i is the neighborhood of node i in the graph;
Finally, the normalized attention coefficients α_ij are used to compute a weighted sum of the associated features, yielding the final output feature of each node:

v'_i = ELU( Σ_{j∈N_i} α_ij W_1 v_j )

where ELU (exponential linear unit) is a nonlinear activation function and W_1 is a weight matrix; the features in the set V' are summed and average-pooled to output the final graph-level feature

v_graph = (1/k) Σ_{i=1}^{k} v'_i,  v_graph ∈ R^{F'};

the node-level features of the patient's pathological images are average-pooled,

v_node = (1/k) Σ_{i=1}^{k} v_i,  v_node ∈ R^{F},

and spliced with the graph-level feature to obtain the final pathological image feature G of the patient, G ∈ R^{F'+F}:

G = v_node || v_graph.
4. The multi-modal breast cancer classification training method based on the graph attention network as claimed in claim 1, wherein the specific process of fusing the features of the three modalities through the multi-modal adaptive gating unit in the step 4 comprises:
step 4-1, from the obtained pathological image feature G, diagnosis text feature T and pathological feature C of the patient, two gating weights are calculated:

g_t = ReLU(W_gt [G || T] + b_t)
g_c = ReLU(W_gc [G || C] + b_c)

where W_gt and W_gc are weight matrices, b_t and b_c are bias vectors, || denotes the splicing operation, and ReLU is a nonlinear activation function;
step 4-2, a vector H is obtained from the two weights, the diagnosis text feature T and the pathological feature C:

H = g_t · (W_t T) + g_c · (W_c C) + b_H

where W_t and W_c are weight matrices and b_H is a bias vector;
step 4-3, the pathological image feature G and the vector H are weighted and summed to obtain the final multi-modal fusion feature M of the patient:

M = G + αH
α = min( β · ||G||_2 / ||H||_2 , 1 )

where β is a hyperparameter randomly initialized by the model, and ||G||_2 and ||H||_2 denote the L_2 norms of G and H, respectively.
5. The multi-modal breast cancer classification training method based on the graph attention network as claimed in claim 1, wherein the classification prediction of breast cancer is performed by using a multi-layered perceptron in step 5, and the specific process comprises:
step 5-1, the benign and malignant categories of breast cancer are predicted using a Softmax layer:

P̂ = Softmax(Linear(M))

where M is the multi-modal fusion feature and Linear denotes the output of the fully connected layer;
step 5-2, the loss function of the benign/malignant binary classification task is calculated with cross entropy:

L = −(1/t) Σ_{n=1}^{t} [ P_n log P̂_n + (1−P_n) log(1−P̂_n) ]

where t is the total number of patients in the data set, and P_n and P̂_n denote the actual and predicted values for the n-th patient, respectively.
6. A multimodal breast cancer classification system based on a graph attention network, comprising:
a preprocessing module for extracting representative pathological features from the patient's electronic medical record EMR, digitizing the features and generating a text description to obtain a medical record text;
the pathological image feature generation module is used for extracting the features of a single pathological image of a patient to obtain the features of the pathological image node level, forming a pathological image set of the patient into a full-connected graph, taking the features of the pathological image node level as initial features, and acquiring the high-order features of the pathological image nodes by using a graph attention network; respectively carrying out average pooling on the initial features and the high-order features of the pathological image nodes, and then splicing to obtain final pathological image features of the patient;
the text feature generation module is used for extracting diagnosis text features of the patient from a medical record text formed by EMR by using a pre-training language model;
the multi-mode feature fusion module is used for fusing pathological image features, text features and pathological features of the patient through the multi-mode self-adaptive gate control unit; the self-adaptive gate control unit fuses three modal characteristics by using an attention gate, and performs weighted summation on the fused characteristics and pathological image characteristics to serve as final multi-modal fusion characteristics;
and the training module is used for carrying out classification prediction on the breast cancer by the multi-mode fusion characteristics through a multilayer perceptron and training the model by defining a cross entropy loss function.
And the prediction module is used for inputting the pathological image set, the medical history text and the pathological features of the patient into the trained model to obtain a breast cancer classification prediction result.
7. The system of claim 6, wherein the pathological image feature generation module comprises:
a node-level feature generation unit for obtaining node-level features of the pathological images through a DenseNet model: if a breast cancer patient has k pathological images, the pathological image set is represented as X = {x_i | i = 1, 2, 3, ..., k}, x_i ∈ R^P, where P is the dimension of each image, and the node-level features are V = {v_i | i = 1, 2, 3, ..., k}, v_i ∈ R^F, where F is the dimension of each image's node-level feature;
an image high-order feature generation unit for extracting high-order features of the patient's images using the graph attention network GAT: the patient's pathological image set forms a fully connected graph whose vertices are the pathological images, the initial feature of each pathological image being the node-level feature obtained by the node-level feature generation unit; the final high-order node features are V' = {v'_i | i = 1, 2, 3, ..., k}, v'_i ∈ R^{F'}, where F' is the dimension of the GAT output; the attention coefficient e_ij of node j's feature with respect to node i is

e_ij = LeakyReLU(a^T [W v_i || W v_j])

where || is the splicing operation, a^T ∈ R^{2F'} is a parameterized weight vector realized by a fully connected layer, LeakyReLU is a nonlinear activation function, and W denotes a weight matrix; the coefficients e_ij are normalized with a Softmax function to obtain the attention weight of node j to node i:

α_ij = exp(e_ij) / Σ_{k∈N_i} exp(e_ik)

where N_i is the neighborhood of node i in the graph; the final output feature of each node is

v'_i = ELU( Σ_{j∈N_i} α_ij W_1 v_j )

where ELU (exponential linear unit) is a nonlinear activation function and W_1 is a weight matrix;
an average pooling unit for summing the features in the set V' and average-pooling them to output the graph-level feature

v_graph = (1/k) Σ_{i=1}^{k} v'_i,  v_graph ∈ R^{F'};
and a two-stage feature fusion unit for average-pooling the node-level features of the patient's pathological images,

v_node = (1/k) Σ_{i=1}^{k} v_i,  v_node ∈ R^{F},

and splicing them with the graph-level feature to obtain the final pathological image feature G of the patient, G ∈ R^{F'+F}:

G = v_node || v_graph.
8. The multi-modal breast cancer classification system based on the graph attention network as claimed in claim 6, wherein the specific process of fusion in the multi-modal adaptive gating unit comprises:
according to the obtained pathological image feature G, diagnosis text feature T and pathological feature C of the patient, two gating weights are calculated:

g_t = ReLU(W_gt [G || T] + b_t)
g_c = ReLU(W_gc [G || C] + b_c)

where W_gt and W_gc are weight matrices, b_t and b_c are bias vectors, || denotes the splicing operation, and ReLU is a nonlinear activation function;
a vector H is obtained from the two weights, the diagnosis text feature T and the pathological feature C:

H = g_t · (W_t T) + g_c · (W_c C) + b_H

where W_t and W_c are weight matrices and b_H is a bias vector;
and the pathological image feature G and the vector H are weighted and summed to obtain the final multi-modal fusion feature M of the patient:

M = G + αH
α = min( β · ||G||_2 / ||H||_2 , 1 )

where β is a hyperparameter randomly initialized by the model, and ||G||_2 and ||H||_2 denote the L_2 norms of G and H, respectively.
9. A computer system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the computer program when loaded into the processor implements the steps of the graph attention network based multimodal breast cancer classification training method according to any of claims 1-5.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the graph attention network based multimodal breast cancer classification training method according to any one of claims 1-5.
CN202210489883.1A 2022-05-07 2022-05-07 Multi-modal breast cancer classification training method and system based on graph attention network Withdrawn CN114864076A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210489883.1A CN114864076A (en) 2022-05-07 2022-05-07 Multi-modal breast cancer classification training method and system based on graph attention network

Publications (1)

Publication Number Publication Date
CN114864076A true CN114864076A (en) 2022-08-05



Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115171889A (en) * 2022-09-09 2022-10-11 紫东信息科技(苏州)有限公司 Small sample gastric tumor diagnosis system
CN116502158A (en) * 2023-02-07 2023-07-28 北京纳通医用机器人科技有限公司 Method, device, equipment and storage medium for identifying lung cancer stage
CN116502158B (en) * 2023-02-07 2023-10-27 北京纳通医用机器人科技有限公司 Method, device, equipment and storage medium for identifying lung cancer stage
CN115830017A (en) * 2023-02-09 2023-03-21 智慧眼科技股份有限公司 Tumor detection system, method, equipment and medium based on image-text multi-mode fusion
CN116452851A (en) * 2023-03-17 2023-07-18 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) Training method and device for disease classification model, terminal and readable storage medium
CN116543918A (en) * 2023-07-04 2023-08-04 武汉大学人民医院(湖北省人民医院) Method and device for extracting multi-mode disease features
CN116543918B (en) * 2023-07-04 2023-09-22 武汉大学人民医院(湖北省人民医院) Method and device for extracting multi-mode disease features
CN117274185A (en) * 2023-09-19 2023-12-22 阿里巴巴达摩院(杭州)科技有限公司 Detection method, detection model product, electronic device, and computer storage medium
CN117274185B (en) * 2023-09-19 2024-05-07 阿里巴巴达摩院(杭州)科技有限公司 Detection method, detection model product, electronic device, and computer storage medium
CN116994069A (en) * 2023-09-22 2023-11-03 武汉纺织大学 Image analysis method and system based on multi-mode information
CN116994069B (en) * 2023-09-22 2023-12-22 武汉纺织大学 Image analysis method and system based on multi-mode information


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220805