CN115131628A - Mammary gland image classification method and equipment based on typing auxiliary information


Info

Publication number
CN115131628A
Authority
CN
China
Prior art keywords
classification
typing
image
auxiliary information
auxiliary
Prior art date
Legal status
Pending
Application number
CN202210773314.XA
Other languages
Chinese (zh)
Inventor
谢元忠
聂生东
孙榕
李秀娟
Current Assignee
CENTRAL HOSPITAL OF TAIAN
University of Shanghai for Science and Technology
Original Assignee
CENTRAL HOSPITAL OF TAIAN
University of Shanghai for Science and Technology
Priority date
Filing date
Publication date
Application filed by CENTRAL HOSPITAL OF TAIAN and University of Shanghai for Science and Technology
Priority to CN202210773314.XA
Publication of CN115131628A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30068 Mammography; Breast

Abstract

The invention relates to a breast image classification method and equipment based on typing auxiliary information, used for determining the histological grade of an image to be classified. The method comprises the following steps: acquiring an image to be classified and preprocessing it; splitting the preprocessed image into a plurality of sequence data, feeding each as the input of a different classification model, and fusing the classification results of the models to obtain the final classification label. Each classification model comprises a main network and an auxiliary supervision branch based on a molecular typing auxiliary information vector; the main network comprises multi-scale feature extraction layers; the auxiliary supervision branch adjusts the intermediate output features of the feature extraction layers at different scales; and the output results of the main network and the auxiliary supervision branch are weighted and fused to form the classification result of the model. Compared with the prior art, the method has the advantages of high accuracy and high efficiency.

Description

Mammary gland image classification method and equipment based on typing auxiliary information
Technical Field
The invention belongs to the technical field of medical image processing automation, relates to a medical image classification method, and particularly relates to a mammary gland image classification method and equipment based on typing auxiliary information.
Background
Even today, with the rapid development of modern medical technology, breast cancer remains the most common cancer worldwide, surpassing lung cancer. Pathological examination, the gold standard of breast cancer diagnosis, lays the foundation for accurate diagnosis and treatment and for formulating individualized treatment plans. Among its outputs, the histological grade used for the morphological evaluation of tumors is of paramount importance: it is an independent prognostic factor that facilitates the estimation of patient prognosis and the prediction of relapse risk.
Currently, under the internationally used Scarff-Bloom-Richardson histological grading system for breast cancer, the histological grade is divided into three levels according to the total score of three parameters: the proportion of glandular formation by cancer cells, nuclear pleomorphism, and the mitotic figure count. Grade I corresponds to 3-5 points, grade II to 6-7 points, and grade III to 8-9 points; for example, a tumor scoring 2 for gland formation, 3 for pleomorphism, and 2 for mitotic count totals 7 points and is therefore grade II. Numerous studies have shown that histological grading based on histomorphology correlates to some extent with molecular typing, which reflects the status of gene expression: compared with grades I and II, high-grade breast cancer shows a lower degree of differentiation, a higher risk of malignant spread of cancer cells, a poorer prognosis, and a smaller proportion of the luminal epithelial subtype. It is therefore worthwhile to explore the underlying relationship between histological grading and molecular typing of breast cancer in order to improve the consistency of pathological diagnosis and clinical decision-making.
Currently, pathologists manually identify the histological grade of breast cancer from morphological information, such as cellular structures, observed in hematoxylin-eosin stained sections. Because of human subjectivity and the complexity of pathological image analysis, identification accuracy and timeliness are difficult to guarantee. With the development of digital pathology and the support of computer-aided diagnosis, high-performance artificial intelligence models, including radiomics algorithms centered on feature engineering and highly parallelized deep learning algorithms, have provided momentum for breast cancer histological grading. Even so, these methods presuppose pathological sections obtained by needle biopsy, and operations such as paraffin sectioning and hematoxylin-eosin staining further increase the workload.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and to provide a breast image classification method and equipment based on typing auxiliary information that offer high accuracy and high efficiency.
The purpose of the invention can be realized by the following technical scheme:
a breast image classification method based on typing auxiliary information is used for determining the histological classification of an image to be classified, and comprises the following steps:
acquiring an image to be classified, and preprocessing the image to be classified;
splitting the preprocessed image to be classified into a plurality of sequence data, respectively taking the sequence data as the input of different classification models, and fusing the classification results of the classification models to obtain a final classification label;
the classification model comprises a main network and an auxiliary supervision branch based on a molecular classification auxiliary information vector, the main network comprises a multi-scale feature extraction layer, the auxiliary supervision branch adjusts and processes intermediate output features of the different scale feature extraction layers, and output results of the main network and the auxiliary supervision branch are weighted and fused to form a classification result of the classification model.
Further, a training data set adopted during the training of the classification model comprises a sample image and histology grading and molecular typing label information corresponding to the sample image, and the molecular typing auxiliary information vector is constructed and obtained based on the training data set.
Further, the molecular typing auxiliary information vector is constructed by the steps of:
constructing a double-node information vector meeting Gaussian distribution based on the molecular typing label information of each sample in the training data set, and realizing information initialization;
and adding random Gaussian noise into the double-node information vector, and performing node value normalization to form the molecular typing auxiliary information vector.
Further, the molecular typing label information includes luminal epithelial type and non-luminal epithelial type.
Further, the preprocessing includes histogram equalization, cropping, and intensity normalization.
Further, the main network is constructed based on a two-dimensional convolutional neural network model and comprises a convolution-activation block based on octave convolution, a plurality of main modules with simultaneous SEnet and SKnet excitation, and a Dense-ASP³ module containing dilated convolution layers with different dilation rates, a plurality of the main modules forming the multi-scale feature extraction layers.
Further, the auxiliary supervision branch comprises a plurality of GAP global average pooling layers respectively connected with the feature extraction layers with different scales and an MS attention module.
Further, when the classification model is trained, the loss function adopted by the main network is a cost-sensitive loss function based on a class F1-score, where the class F1-score is a model performance evaluation index formed by the harmonic mean of precision and recall; the loss function adopted by the auxiliary supervision branch is a Kullback-Leibler divergence minimization loss function.
Further, when the classification results of the classification models are fused, the weights of the different classification models are determined based on the prediction accuracy of the classification models.
The present invention also provides an electronic device comprising one or more processors, memory, and one or more programs stored in the memory, the one or more programs including instructions for performing the method for classifying breast images based on typing assistance information as described above.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention designs an auxiliary supervision branch that incorporates molecular typing, promoting the model's learning of features correlated with molecular typing; this effectively improves model performance and image classification accuracy while remaining efficient.
2. In designing the network model structure, the invention proposes a two-dimensional convolutional neural network model named IOS²-DA net. The octave convolution that replaces the conventional convolutional layer weakens the model's attention to low-frequency redundant information and enhances the recognition rate for pathological grades with similar image features. A dual squeeze-excitation module combining SEnet and SKnet simulates the characteristics of human visual coding, thereby extracting the information decisive for the final prediction. In addition, the network's Dense-ASP³ module makes full use of dense multi-scale features to enhance the model's learning ability.
3. For DCE-MRI images of different time sequences, the invention proposes a cost-sensitive loss function based on a class F1-score, which exploits the harmony between the model's precision and sensitivity for different classes to automatically weight the loss of each sample. Compared with the conventional cross-entropy loss and focal loss, this loss function significantly improves the classification recall rate and robustness while preserving the model's specificity.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a structural schematic of the two-dimensional convolutional neural network IOS²-DA net in the embodiment;
FIG. 3 is a schematic diagram of the structure of the octave-convolution-based convolution-activation block in the embodiment;
FIG. 4 is a schematic structural diagram of the main module SE_Inception_SK in the embodiment;
FIG. 5 is a schematic structural diagram of the Dense-ASP³ module in the embodiment;
FIG. 6 is a schematic diagram of the overall structure of the network with the typing-based auxiliary supervision branch in the embodiment.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
As shown in fig. 1, the present embodiment provides a breast image classification method based on typing auxiliary information, used for determining the histological grade of an image to be classified. The method includes the following steps:
acquiring an image to be classified and preprocessing it, where the preprocessing includes histogram equalization, cropping, intensity normalization, and the like;
splitting the preprocessed image to be classified into a plurality of sequence data, such as the DCE-MRI TPs 1, TPs 2 and TPs 3 sequences, which serve respectively as the inputs of different classification models; the classification results of the models are then fused to obtain the final classification label.
As shown in fig. 6, the classification model used in this embodiment is a network with a typing auxiliary supervision branch; it comprises a main network and an auxiliary supervision branch based on the molecular typing auxiliary information vector. The main network comprises multi-scale feature extraction layers, the auxiliary supervision branch adjusts the intermediate output features of the feature extraction layers at different scales, and the output results of the main network and the auxiliary supervision branch are weighted and fused to form the classification result of the model.
As shown in fig. 2, in this embodiment the main network is constructed as a two-dimensional convolutional neural network model named IOS²-DA net, comprising a convolution-activation block based on octave convolution, a plurality of main modules with simultaneous SEnet and SKnet excitation, and a Dense-ASP³ module containing dilated convolution layers with different dilation rates; the main modules form the multi-scale feature extraction layers. As shown in fig. 3, octave convolution replaces the conventional convolutional layer in the convolution-activation block, and a hyper-parameter α controls the proportion of high- and low-frequency channels in the layer; this weakens the model's attention to low-frequency redundant information, enhances the recognition rate for pathological grades with similar image features, and reduces memory and computational consumption. As shown in fig. 4, a dual squeeze-excitation module combining SEnet and SKnet is inserted into the Inception module to form the main module SE_Inception_SK of IOS²-DA net, which adaptively selects suitable receptive field sizes on its different branches and weight parameters for different convolution channels, extracting the key information that benefits the final decision. As shown in fig. 5, the embodiment also provides a Dense-ASP³ module that combines multi-scale information with dilated convolution: dilated convolution layers with different dilation rates (r=1 and r=2) are densely connected to generate more densely distributed multi-scale features and enhance learning ability.
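By way of illustration only, the high/low-frequency split that octave convolution performs can be sketched in Keras as below. This is a minimal sketch of the generic octave-convolution idea under the stated α, not the patented IOS²-DA net block; the function name and the ReLU placement are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

def octave_conv(x_high, x_low, filters, alpha=0.5, kernel_size=3):
    """Sketch of one octave convolution: alpha sets the fraction of
    low-frequency channels (the hyper-parameter alpha in the text)."""
    low_ch = int(filters * alpha)
    high_ch = filters - low_ch
    # high-to-high and high-to-low paths (the low path runs at half resolution)
    h2h = layers.Conv2D(high_ch, kernel_size, padding="same")(x_high)
    h2l = layers.Conv2D(low_ch, kernel_size, padding="same")(
        layers.AveragePooling2D(2)(x_high))
    # low-to-low and low-to-high paths (upsample back to full resolution)
    l2l = layers.Conv2D(low_ch, kernel_size, padding="same")(x_low)
    l2h = layers.UpSampling2D(2, interpolation="bilinear")(
        layers.Conv2D(high_ch, kernel_size, padding="same")(x_low))
    y_high = layers.ReLU()(layers.Add()([h2h, l2h]))
    y_low = layers.ReLU()(layers.Add()([h2l, l2l]))
    return y_high, y_low
```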
As shown in fig. 6, in this embodiment the auxiliary supervision branch comprises several GAP (global average pooling) layers, each connected to a feature extraction layer of a different scale, and an MS attention module. The GAP layers concatenate the output features of the three multi-scale IOS² stages of IOS²-DA net, and a Kullback-Leibler (KL) divergence minimization loss adjusts the intermediate-layer output features so as to obtain the predicted value most relevant to the typing information vector.
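A rough sketch of such a branch follows, under the assumption that the MS attention module, which the text names but does not specify, can be stood in for by a simple learned re-weighting:

```python
import tensorflow as tf
from tensorflow.keras import layers

def auxiliary_branch(multi_scale_features, num_nodes=2):
    """Sketch: GAP each intermediate feature map, concatenate, re-weight
    (a stand-in for the MS attention module), and emit a two-node
    prediction to be matched against the typing information vector."""
    pooled = [layers.GlobalAveragePooling2D()(f) for f in multi_scale_features]
    x = layers.Concatenate()(pooled)
    attention = layers.Dense(x.shape[-1], activation="sigmoid")(x)
    x = layers.Multiply()([x, attention])
    return layers.Dense(num_nodes, activation="softmax")(x)

# the branch is trained by minimizing KL divergence to the typing vector
kl_loss = tf.keras.losses.KLDivergence()
```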
The training data set used in the model training stage comprises sample images together with the histological grading and molecular typing label information corresponding to them; the molecular typing auxiliary information vector is constructed from this training data set. The molecular typing auxiliary information vector used by the auxiliary supervision branch is obtained through the following steps:
(1) Information initialization. A two-node information vector satisfying a Gaussian distribution is constructed based on the molecular typing label of each sample in the breast cancer histological grading data set, with the calculation formula:
[Formula (1), rendered as an image in the original: the Gaussian-distributed initialization I_s(x) of the two-node information vector.]
where x corresponds to a node in the information vector and s represents the molecular subtype (0 for the non-luminal epithelial type, 1 for the luminal epithelial type).
(2) Random noise addition. Random Gaussian noise is added to the initialized typing information vector to simulate the differing typing diagnoses that radiologists with different reading experience give for the same lesion, as shown in formulas (2) and (3):
[Formula (2), rendered as an image in the original: the random Gaussian noise term r_s(x).]
I′_s(x) = I_s(x) + r_s(x)  (3)
(3) Node value normalization, as shown in formula (4):
[Formula (4), rendered as an image in the original: normalization of the node values of I′_s(x).]
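Because formulas (1), (2) and (4) survive only as images, the NumPy sketch below is a guess at their intent rather than a transcription: a two-node vector initialized from a Gaussian centered on the subtype label, perturbed with Gaussian noise, then renormalized. The values of sigma and noise_sigma are assumptions:

```python
import numpy as np

def typing_info_vector(s, sigma=0.5, noise_sigma=0.1, rng=None):
    """s = 0 (non-luminal) or 1 (luminal). Steps mirror (1)-(3) in the text."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array([0.0, 1.0])                      # the two nodes
    I = np.exp(-(x - s) ** 2 / (2 * sigma ** 2))  # Gaussian init, cf. formula (1)
    I = I + rng.normal(0.0, noise_sigma, size=2)  # add noise, cf. formulas (2)-(3)
    I = np.clip(I, 1e-6, None)                    # keep node values positive
    return I / I.sum()                            # normalize, cf. formula (4)

vec = typing_info_vector(s=1)   # e.g. a luminal-type sample
```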
The predicted value obtained through the auxiliary supervision branch is weighted with the preliminary prediction of IOS²-DA net to yield the classification result of the classification model:
p = (1 - γ)·p1 + γ·p2,  0 < γ < 0.5  (5)
The prediction results based on the DCE-MRI TPs 1, TPs 2 and TPs 3 sequence images are then integrated and fused by weighted averaging to obtain the final classification label. The fusion proceeds as follows:
(1) The prediction accuracy of each base model serves as its classification performance weight:
[Formula (6), rendered as an image in the original: each base model's weight computed from its prediction accuracy.]
(2) The integrated prediction probability of a sample is obtained as the weighted sum of the prediction results of all base learners:
[Formula (7), rendered as an image in the original: the integrated prediction probability P as a weighted sum of the base learner outputs.]
(3) The prediction label of the integrated model is determined by the magnitude of P:
[Formula (8), rendered as an image in the original: the final label determined by thresholding P.]
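A compact sketch of the whole fusion chain, again inferred from the prose because formulas (6)-(8) are images; the accuracy-proportional weight normalization and the 0.5 decision threshold for the binary grade I & II vs. grade III task are assumptions:

```python
import numpy as np

def fuse(backbone_preds, branch_preds, accs, gamma=0.3):
    """backbone_preds/branch_preds: per-sequence probabilities of grade III
    from the main network (p1) and the auxiliary branch (p2);
    accs: each base learner's validation accuracy."""
    # formula (5): per-model combination of main network and auxiliary branch
    p = [(1 - gamma) * p1 + gamma * p2
         for p1, p2 in zip(backbone_preds, branch_preds)]
    w = np.asarray(accs) / np.sum(accs)   # formula (6): accuracy-based weights
    P = float(np.dot(w, p))               # formula (7): weighted sum
    return int(P >= 0.5)                  # formula (8): threshold on P
```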
in this embodiment, the classification model is constructed in the following manner in the training data set of the training phase. In this example, a total of 381 cases are diagnosed as breast cancer by pathological biopsy, and all cases are subjected to at least one preoperative MRI imaging examination before diagnosis, and finally 256 cases meeting the study inclusion criteria are obtained. The histology grading score of the above cases is given by a pathologist referring to a nottingham histology grading system, wherein the total of 9 grade I cases; 113 grams II; 133 grams III. Labeling is carried out according to the immunohistochemical characteristic expression of molecular typing of each case, including a luminal epithelial type and a non-luminal epithelial type. In view of the imbalance between the sample size of the low grade histological grading and the medium and high grade, in this example, the above described breast cancer histological grading method was applied to the histological grading prediction studies of grade I & II and grade III.
The lesion area is extracted from the breast DCE-MRI image, image preprocessing is performed, and the breast cancer histological grading data set is then built. The preprocessing comprises the following steps:
(1) After the image sequence segment containing the lesion is determined, region-of-interest image blocks are cropped at multiple scales around the lesion position, and each square block is resized to 64 × 64 pixels with a bilinear interpolation algorithm.
(2) Histogram equalization is used to enhance image contrast.
(3) The pixel values of the image blocks are scaled to [0, 1] by intensity normalization to facilitate network processing and analysis.
(4) After the breast cancer histological grading data set is divided via ten-fold cross-validation, real-time data augmentation (rotation, mirroring, scaling, and the like) is applied only to the training set, using the built-in augmentation utilities of the Python deep learning library Keras; the validation set is left untouched.
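Steps (1)-(3) can be sketched with OpenCV/NumPy as below; the min-max scaling to 8-bit before histogram equalization is an assumption about how the MRI intensities are handled, since equalizeHist requires uint8 input:

```python
import cv2
import numpy as np

def preprocess_block(image, center, half_size):
    """Crop a lesion-centered square ROI, resize to 64x64 bilinearly,
    equalize the histogram, and normalize intensities to [0, 1]."""
    cy, cx = center
    block = image[cy - half_size:cy + half_size, cx - half_size:cx + half_size]
    block = cv2.resize(block, (64, 64), interpolation=cv2.INTER_LINEAR)
    # min-max scale to 8-bit so equalizeHist can be applied to MRI data
    scaled = (block - block.min()) / (block.max() - block.min() + 1e-8)
    block = cv2.equalizeHist((255 * scaled).astype(np.uint8))  # step (2)
    return block.astype(np.float32) / 255.0                    # step (3)
```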
Before the classification model is trained, the batch size is set to 64 and the initial learning rate l_r to 0.002, and every convolution kernel is regularized with an l2 weight penalty with coefficient 0.005. An RMSprop optimizer accelerates model parameter optimization during training. A cost-sensitive loss function CFSL based on the class F1-score increases the model's attention to difficult samples; its expression is:
[CFSL formula, rendered as an image in the original.]
where F1-score is a model performance evaluation index formed by the harmonic mean of precision and recall, and the modulation term is D(x) = x^β, with β a weighting factor that scales the influence of the F1-score.
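Since the CFSL expression itself survives only as an image, the sketch below is speculative: it re-weights a cross-entropy by the modulation D(x) = x**beta applied per class to (1 - soft F1) computed on the current batch. This is one plausible reading of the prose, not the patented formula:

```python
import tensorflow as tf

def cfsl(beta=2.0, eps=1e-7):
    """Speculative class-F1-score-based cost-sensitive loss.
    Assumes y_true is one-hot encoded and y_pred is a softmax output."""
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
        tp = tf.reduce_sum(y_true * y_pred, axis=0)           # soft true positives
        precision = tp / (tf.reduce_sum(y_pred, axis=0) + eps)
        recall = tp / (tf.reduce_sum(y_true, axis=0) + eps)
        f1 = 2.0 * precision * recall / (precision + recall + eps)
        class_weight = tf.pow(1.0 - f1, beta)                 # D(x) = x**beta on (1 - F1)
        ce = -tf.reduce_sum(y_true * tf.math.log(y_pred), axis=-1)
        sample_weight = tf.reduce_sum(y_true * class_weight, axis=-1)
        return tf.reduce_mean(sample_weight * ce)
    return loss
```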
The initial number of training epochs is 120. When the model's CFSL has not decreased for 10 epochs, the learning rate is scaled by a factor of 0.2; when the CFSL still has not converged after 30 further fine-tuning epochs, training stops. The model weights with the currently highest accuracy and lowest loss on the validation set are saved.
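Assuming Keras, which the text names for augmentation, this schedule maps naturally onto standard callbacks; `model`, `train_ds`, `val_ds` and the `cfsl` loss from the previous sketch are placeholders, not identifiers from the patent:

```python
import tensorflow as tf

callbacks = [
    # scale the learning rate by 0.2 after 10 epochs without CFSL improvement
    tf.keras.callbacks.ReduceLROnPlateau(monitor="loss", factor=0.2, patience=10),
    # stop when a further 30 epochs of fine-tuning bring no improvement
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=30),
    # keep the weights that score best on the validation set
    tf.keras.callbacks.ModelCheckpoint("best_weights.h5", monitor="val_accuracy",
                                       save_best_only=True),
]

model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.002),
              loss=cfsl(beta=2.0), metrics=["accuracy"])
# train_ds/val_ds are assumed tf.data pipelines batched at 64
model.fit(train_ds, validation_data=val_ds, epochs=120, callbacks=callbacks)
```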
In the model evaluation stage, this embodiment first trains a base learner model on each of the DCE-MRI TPs 1, TPs 2 and TPs 3 sequence images, and determines the optimal high/low-frequency channel ratio of the network structure by varying the hyper-parameter α, with the training loss function fixed to focal loss (α=1, β=2). The experimental results are shown in Table 1.
TABLE 1 Influence of different high/low-frequency channel ratios in octave convolution on the prediction results
[Table 1 was rendered as an image in the original; its numerical contents are not recoverable from the text.]
As Table 1 shows, when the hyper-parameter α of the base learner models built on the TPs 1, TPs 2 and TPs 3 sequence images is set to 0.5, 0.375 and 0.5 respectively, the models balance recognition accuracy against GPU memory occupation well, with classification accuracy reaching 86.6%.
Next, building on the above experiment, this embodiment tests the influence of different loss functions on model prediction; the results are shown in Table 2.
TABLE 2 Evaluation of model performance under different loss functions
[Table 2 was rendered as images in the original; its numerical contents are not recoverable from the text.]
As Table 2 shows, whichever of the classical cross-entropy loss, focal loss and CFSL is chosen as the loss function, a prediction model with balanced Sen and Spec should be preferred, provided the model accuracy is sufficiently high. With CFSL and a weighting factor β of 0.3, the base learner model built on DCE-MRI TPs 1 sequence images performs best, with AUC up to 0.902 and F1-score up to 0.906. Likewise, for the base learner model built on DCE-MRI TPs 3 sequence images, this embodiment selects the training weights obtained under the CFSL loss with weighting factor β = 2.0 as the optimal weights. For the base learner model built on DCE-MRI TPs 2 sequence images, although its performance under CFSL training with the optimal weighting factor (β = 2.0) is slightly inferior to focal loss and cross-entropy loss with their optimal weighting factor (β = 1.0), its gap between Sen and Spec is the smallest and its performance the most stable. This embodiment therefore still considers the CFSL loss with β = 2.0 applicable to the base learner model built on DCE-MRI TPs 2 sequence images.
Finally, this embodiment uses the molecular typing auxiliary information to construct the auxiliary supervision branch of the model so as to reduce false positives in the prediction results. The experimental results are shown in Table 3.
TABLE 3 Influence of the proportion of typing auxiliary information on the final prediction results
[Table 3 was rendered as images in the original; its numerical contents are not recoverable from the text.]
As Table 3 shows, after the molecular typing auxiliary supervision branch is introduced, the model's false positives decrease while good AUC and F1-score performance is maintained; that is, in practical use the model is less likely to misjudge grade I & II as grade III, which can effectively help clinicians formulate more accurate treatment plans.
The above evaluation shows that using a two-dimensional convolutional neural network to automatically extract features relevant to the histological grades from breast cancer DCE-MRI imaging achieves accurate prediction of the pathological grade. Compared with methods that analyze pathological images alone, this approach avoids cumbersome steps such as needle biopsy sampling and achieves higher identification efficiency and accuracy, while the introduced molecular typing auxiliary information further improves the model's prediction performance.
The image classification method may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. A breast image classification method based on typing auxiliary information, used for determining the histological grade of an image to be classified, the method comprising the following steps:
acquiring an image to be classified, and preprocessing the image to be classified;
splitting the preprocessed image to be classified into a plurality of sequence data, respectively serving as the input of different classification models, fusing the classification results of the classification models, and obtaining a final classification label;
the classification model comprises a main network and an auxiliary supervision branch based on a molecular classification auxiliary information vector, the main network comprises a multi-scale feature extraction layer, the auxiliary supervision branch adjusts and processes intermediate output features of the different scale feature extraction layers, and output results of the main network and the auxiliary supervision branch are weighted and fused to form a classification result of the classification model.
2. The breast image classification method based on typing auxiliary information according to claim 1, wherein the training data set adopted in the classification model training includes sample images and histological grading and molecular typing label information corresponding to the sample images, and the molecular typing auxiliary information vector is constructed and obtained based on the training data set.
3. The breast image classification method based on typing auxiliary information according to claim 2, wherein the molecular typing auxiliary information vector is constructed by the following steps:
constructing a double-node information vector meeting Gaussian distribution based on the molecular typing label information of each sample in the training data set, and realizing information initialization;
and adding random Gaussian noise into the double-node information vector, and performing node value normalization to form the molecular typing auxiliary information vector.
4. The breast image classification method based on typing auxiliary information according to claim 2, wherein the molecular typing label information comprises luminal and non-luminal epithelial types.
5. The method of classifying breast images based on auxiliary information for typing according to claim 1, wherein the preprocessing comprises histogram equalization, cropping and intensity normalization.
6. The breast image classification method based on typing auxiliary information according to claim 1, wherein the main network is constructed based on a two-dimensional convolutional neural network model, and comprises a convolution-activation block based on octave convolution, a plurality of main modules with simultaneous SEnet and SKnet excitation, and a Dense-ASP³ module containing dilated convolution layers with different dilation rates, a plurality of the main modules forming the multi-scale feature extraction layer.
7. The breast image classification method based on typing auxiliary information according to claim 1, wherein the auxiliary supervision branch comprises a plurality of GAP global average pooling layers respectively connected to the feature extraction layers of different scales, and an MS attention module.
8. The breast image classification method based on typing auxiliary information according to claim 1, wherein when the classification model is trained, the loss function adopted by the main network is a cost-sensitive loss function based on a class F1-score, the class F1-score being a model performance evaluation index consisting of the harmonic mean of precision and recall; and the loss function adopted by the auxiliary supervision branch is a Kullback-Leibler divergence minimization loss function.
9. The method for classifying breast images based on auxiliary information for typing according to claim 1, wherein the weights of different classification models are determined based on the prediction accuracy of each classification model when the classification results of each classification model are fused.
10. An electronic device comprising one or more processors, memory, and one or more programs stored in the memory, the one or more programs including instructions for performing the method of classifying breast images based on typing assistance information according to any one of claims 1 to 9.
CN202210773314.XA 2022-07-01 2022-07-01 Mammary gland image classification method and equipment based on typing auxiliary information Pending CN115131628A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210773314.XA 2022-07-01 2022-07-01 Mammary gland image classification method and equipment based on typing auxiliary information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210773314.XA 2022-07-01 2022-07-01 Mammary gland image classification method and equipment based on typing auxiliary information

Publications (1)

Publication Number Publication Date
CN115131628A 2022-09-30

Family

ID=83381070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210773314.XA Pending CN115131628A (en) 2022-07-01 2022-07-01 Mammary gland image classification method and equipment based on typing auxiliary information

Country Status (1)

Country Link
CN (1) CN115131628A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116013449A (en) * 2023-03-21 2023-04-25 成都信息工程大学 Auxiliary prediction method for cardiomyopathy prognosis by fusing clinical information and magnetic resonance image
CN116013449B (en) * 2023-03-21 2023-07-07 成都信息工程大学 Auxiliary prediction method for cardiomyopathy prognosis by fusing clinical information and magnetic resonance image


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination