CN114171197B - Breast cancer HER2 state prediction method and related equipment - Google Patents


Info

Publication number
CN114171197B
CN114171197B (application CN202111338891.8A)
Authority
CN
China
Prior art keywords
her2
expression probability
image
classification model
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111338891.8A
Other languages
Chinese (zh)
Other versions
CN114171197A (en
Inventor
刘碧华
谢晓彤
黄炳升
樊雅恒
王铭宇
林楚旋
潘浩瑜
郭媛
唐文洁
陈思义
胡闻珂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Dongguan Peoples Hospital
Original Assignee
Shenzhen University
Dongguan Peoples Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University and Dongguan Peoples Hospital
Priority claimed from CN202111338891.8A
Publication of CN114171197A
Application granted
Publication of CN114171197B
Legal status: Active

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for calculating health indices; for individual health risk assessment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Epidemiology (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Primary Health Care (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The method comprises the steps of respectively inputting a breast MR image to be predicted into a first classification model and a second classification model, and controlling the two models to determine whether the HER2 category corresponding to the breast MR image is HER2 high expression, HER2 low expression or HER2 zero expression. Because the first and second classification models learn different aspects of the breast lesion features carried by the breast MR image, the lesion information mined from these different aspects is correlated, and the HER2 category is determined from the correlated lesion information, which improves the accuracy of identifying the HER2 category as HER2 high expression, HER2 low expression or HER2 zero expression.

Description

Breast cancer HER2 state prediction method and related equipment
Technical Field
The present application relates to the technical field of biomedical engineering, and in particular to a breast cancer HER2 status prediction method and related equipment.
Background
Breast cancer (BC) is a heterogeneous disease that has now surpassed lung cancer to become the most common cancer worldwide, and it is also a leading cause of cancer death in women. Human epidermal growth factor receptor 2 (HER2) is a proto-oncogene encoding an epidermal growth factor receptor with tyrosine kinase activity; it is a unique predictive marker for targeted therapy of breast tumors, and HER2 status plays a crucial role in the selection of breast cancer treatment protocols and in prognosis evaluation. According to the ASCO/CAP guidelines, BC is divided into HER2 high expression and HER2 negative, and HER2 negative can be further divided into HER2 low expression and HER2 zero expression by standardized central pathology assessment. HER2 high-expressing BC patients may benefit from anti-HER2 therapy, while two HER2-directed antibody-drug conjugates (ADCs) carrying chemotherapeutic payloads, trastuzumab deruxtecan (T-DXd) and trastuzumab duocarmazine (SYD985), have shown very promising therapeutic activity in HER2 low-expressing BC patients. Therefore, objectively and accurately distinguishing patients with HER2 high, low and zero expression is of great significance for the treatment and prognosis of BC patients.
Currently, HER2 expression is routinely assessed clinically by pathology. According to the 2019 edition of the Chinese breast cancer HER2 detection guidelines, when a patient undergoes HER2 pathological evaluation, immunohistochemistry (IHC) is recommended for detecting the HER2 protein expression level. IHC first requires an invasive biopsy; doctors then assess the staining degree of infiltrating cancer cells by experience, and finally the HER2 evaluation result is obtained. However, invasive examination cannot be repeated many times, changes in HER2 level cannot be observed dynamically, the evaluation result is highly subjective, and no quantitative, objective result can be obtained. Finding a non-invasive method for HER2 status assessment is therefore a clinical challenge to be solved.
Moreover, in the prior art, when traditional radiomics is used to evaluate HER2 status, the internal relations among the data cannot be sufficiently mined, so the HER2 status cannot be accurately predicted and prediction errors occur.
Thus, the prior art has yet to be improved and enhanced.
Disclosure of Invention
In order to solve the above technical problem, a first aspect of the embodiments of the present application provides a method for predicting breast cancer HER2 status, the method including:
respectively inputting a breast MR image to be predicted into a first classification model and a second classification model;
controlling the first classification model and the second classification model to determine a HER2 category corresponding to the breast MR image, wherein the HER2 category is HER2 high expression, HER2 low expression or HER2 zero expression.
The method for predicting breast cancer HER2 status, wherein the controlling the first classification model and the second classification model to determine the HER2 category corresponding to the breast MR image specifically includes:
controlling the first classification model to determine a HER2 high expression probability and a HER2 negative expression probability corresponding to the breast MR image;
controlling the second classification model to determine a predicted HER2 low expression probability and a predicted HER2 zero expression probability corresponding to the breast MR image;
determining the HER2 category corresponding to the breast MR image based on the HER2 high expression probability, the HER2 negative expression probability, the predicted HER2 low expression probability and the predicted HER2 zero expression probability.
The method for predicting breast cancer HER2 status, wherein the determining the HER2 category corresponding to the breast MR image based on the HER2 high expression probability, the HER2 negative expression probability, the predicted HER2 low expression probability and the predicted HER2 zero expression probability specifically includes:
multiplying the HER2 negative expression probability by the predicted HER2 low expression probability, and multiplying the HER2 negative expression probability by the predicted HER2 zero expression probability, so as to obtain a HER2 low expression probability and a HER2 zero expression probability;
determining the HER2 category corresponding to the breast MR image based on the HER2 high expression probability, the HER2 low expression probability and the HER2 zero expression probability.
The method for predicting breast cancer HER2 status, wherein the first classification model includes a radiomics module, a semantic segmentation module and a prediction module; the controlling the first classification model to determine the HER2 high expression probability and the HER2 negative expression probability corresponding to the breast MR image specifically includes:
controlling the radiomics module to determine radiomics features corresponding to the breast MR image, and determining a first HER2 high expression probability and a first HER2 negative expression probability corresponding to the breast MR image based on the radiomics features;
controlling the semantic segmentation module to determine a second HER2 high expression probability and a second HER2 negative expression probability corresponding to the breast MR image;
controlling the prediction module to determine the HER2 high expression probability and the HER2 negative expression probability corresponding to the breast MR image based on the first HER2 high expression probability, the first HER2 negative expression probability, the second HER2 high expression probability and the second HER2 negative expression probability.
The method for predicting breast cancer HER2 status, wherein the radiomics module includes a lesion delineation unit, a radiomics feature extraction unit, a first feature screening unit and a first prediction unit; the controlling the radiomics module to determine the radiomics features corresponding to the breast MR image, and determining the first HER2 high expression probability and the first HER2 negative expression probability corresponding to the breast MR image based on the radiomics features, specifically includes:
controlling the lesion delineation unit to extract a region of interest in the breast MR image;
controlling the radiomics feature extraction unit to extract image features of the region of interest, so as to obtain candidate radiomics features;
controlling the first feature screening unit to perform feature screening on the candidate radiomics features, so as to obtain the radiomics features;
controlling the first prediction unit to determine the first HER2 high expression probability and the first HER2 negative expression probability corresponding to the breast MR image based on the radiomics features.
The method for predicting breast cancer HER2 status, wherein the semantic segmentation module includes a feature extraction unit, a second feature screening unit and a second prediction unit; the controlling the semantic segmentation module to determine the second HER2 high expression probability and the second HER2 negative expression probability corresponding to the breast MR image specifically includes:
controlling the feature extraction unit to determine candidate semantic features corresponding to the breast MR image;
controlling the second feature screening unit to screen the candidate semantic features, so as to obtain semantic features corresponding to the breast MR image;
controlling the second prediction unit to determine the second HER2 high expression probability and the second HER2 negative expression probability corresponding to the breast MR image based on the semantic features.
The method for predicting breast cancer HER2 status, wherein the controlling the feature extraction unit to determine the candidate semantic features corresponding to the breast MR image specifically includes:
controlling the feature extraction unit to determine a plurality of slice feature maps based on the breast MR image, wherein the plurality of slice feature maps correspond one-to-one to a plurality of slice images included in the breast MR image;
dividing the plurality of slice feature maps into two feature clusters by clustering, and removing the feature cluster containing fewer slice feature maps, so as to obtain a plurality of target slice features;
fusing the target slice features to obtain the candidate semantic features corresponding to the breast MR image.
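The slice-level screening just described can be sketched in code. The following is a minimal illustration, not the patented implementation: a tiny two-cluster k-means (deterministically initialized so the sketch is reproducible) splits the slice feature vectors into two clusters, the smaller cluster is discarded, and the remaining slice features are fused by averaging. All function names are hypothetical, and averaging is one plausible reading of "fusing".

```python
def _sq_dist(u, v):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def two_means(vectors, iters=20):
    """Partition vectors into two clusters (labels 0/1) with a tiny k-means.

    Initialized deterministically with the smallest- and largest-norm
    vectors so no random seed is needed.
    """
    norm = lambda v: sum(x * x for x in v)
    centers = [list(min(vectors, key=norm)), list(max(vectors, key=norm))]
    assign = [0] * len(vectors)
    for _ in range(iters):
        # assign each vector to its nearest center
        assign = [0 if _sq_dist(v, centers[0]) <= _sq_dist(v, centers[1]) else 1
                  for v in vectors]
        for c in (0, 1):
            members = [v for v, a in zip(vectors, assign) if a == c]
            if members:  # recompute the cluster center as the member mean
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

def fuse_slice_features(slice_feats):
    """Drop the smaller cluster of slice feature maps, average the rest."""
    assign = two_means(slice_feats)
    keep = 0 if assign.count(0) >= assign.count(1) else 1
    kept = [v for v, a in zip(slice_feats, assign) if a == keep]
    return [sum(col) / len(kept) for col in zip(*kept)]
```

For example, with four slice vectors near the origin and two outliers near (9, 9), the outlier cluster is removed and the fused feature is the mean of the remaining four vectors.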
The method for predicting breast cancer HER2 status, wherein the second classification model has the same model structure as the first classification model, and the two differ in the feature screening methods used: in the first classification model, the first feature screening unit screens the candidate radiomics features using Lasso regression and the chi-square test, and the second feature screening unit screens the candidate semantic features using Lasso regression and mutual information; in the second classification model, the second feature screening unit screens the candidate semantic features using Lasso regression and the chi-square test.
A second aspect of embodiments of the present application provides a computer readable storage medium having one or more programs stored thereon, the one or more programs being executable by one or more processors to implement the steps in the method for predicting breast cancer HER2 status as set forth in any of the above.
A third aspect of the embodiments of the present application provides a terminal device, including: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;
the communication bus realizes connection communication between the processor and the memory;
the processor when executing the computer readable program carries out the steps of the method for predicting breast cancer HER2 status as defined in any one of the above.
Advantageous effects: Compared with the prior art, the present application provides a breast cancer HER2 status prediction method and related equipment, in which a breast MR image to be predicted is respectively input into a first classification model and a second classification model, and the two models are controlled to determine whether the HER2 category corresponding to the breast MR image is HER2 high expression, HER2 low expression or HER2 zero expression. Because the first and second classification models learn different aspects of the breast lesion features carried by the breast MR image, the lesion information mined from these different aspects is correlated, and the HER2 category is determined from the correlated lesion information, which improves the accuracy of identifying the HER2 category as HER2 high expression, HER2 low expression or HER2 zero expression.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without any inventive work.
Fig. 1 is a flow chart of a method of predicting breast cancer HER2 status as provided herein.
Figure 2 is a schematic representation of HER2 class in the methods for predicting HER2 status of breast cancer provided herein.
Fig. 3 is a model structure diagram of a first classification model in the method for predicting breast cancer HER2 status provided in the present application.
Fig. 4 is a schematic model structure diagram of a semantic segmentation model in the method for predicting breast cancer HER2 status provided in the present application.
Fig. 5 is a flowchart of a clustering algorithm in the method for predicting breast cancer HER2 status provided by the present application.
Fig. 6 is a schematic structural diagram of a terminal device provided in the present application.
Detailed Description
The present application provides a method and related apparatus for predicting breast cancer HER2 status, and in order to make the purpose, technical solution and effect of the present application clearer and clearer, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should be understood that, the sequence numbers and sizes of the steps in this embodiment do not mean the execution sequence, and the execution sequence of each process is determined by its function and inherent logic, and should not constitute any limitation on the implementation process of this embodiment.
The inventor has found that breast cancer (BC) is a heterogeneous disease that has now surpassed lung cancer to become the most common cancer worldwide, and it is also a leading cause of cancer death in women. Human epidermal growth factor receptor 2 (HER2) is a proto-oncogene encoding an epidermal growth factor receptor with tyrosine kinase activity; it is a unique predictive marker for targeted therapy of breast tumors, and HER2 status plays a crucial role in the selection of breast cancer treatment protocols and in prognosis evaluation. According to the ASCO/CAP guidelines, BC is divided into HER2 high expression and HER2 negative, and HER2 negative can be further divided into HER2 low expression and HER2 zero expression by standardized central pathology assessment. HER2 high-expressing BC patients may benefit from anti-HER2 therapy, while two HER2-directed antibody-drug conjugates (ADCs) carrying chemotherapeutic payloads, trastuzumab deruxtecan (T-DXd) and trastuzumab duocarmazine (SYD985), have shown very promising therapeutic activity in HER2 low-expressing BC patients. Therefore, objectively and accurately distinguishing patients with HER2 high, low and zero expression is of great significance for the treatment and prognosis of BC patients.
Currently, HER2 expression is routinely assessed clinically by pathology. According to the 2019 edition of the Chinese breast cancer HER2 detection guidelines, when a patient undergoes HER2 pathological evaluation, immunohistochemistry (IHC) is recommended for detecting the HER2 protein expression level. IHC first requires an invasive biopsy; doctors then assess the staining degree of infiltrating cancer cells by experience, and finally the HER2 evaluation result is obtained. However, invasive examination cannot be repeated many times, changes in HER2 level cannot be observed dynamically, the evaluation result is highly subjective, and no quantitative, objective result can be obtained. Finding a non-invasive method for HER2 status assessment is therefore an urgent clinical challenge.
For this reason, traditional radiomics has been used to evaluate HER2 status, but traditional radiomics relies on hand-crafted features, which are insufficient to fully mine the internal connections in the data and cannot accurately evaluate HER2 status, so misdiagnosis easily results.
In order to solve the above problem, in the embodiments of the present application, a breast MR image to be predicted is respectively input into a first classification model and a second classification model, and the two models are controlled to determine whether the HER2 category corresponding to the breast MR image is HER2 high expression, HER2 low expression or HER2 zero expression. Because the first and second classification models learn different aspects of the breast lesion features carried by the breast MR image, the lesion information mined from these different aspects is correlated, and the HER2 category is determined from the correlated lesion information, which improves the accuracy of identifying the HER2 category as HER2 high expression, HER2 low expression or HER2 zero expression.
The following further describes the content of the application by describing the embodiments with reference to the attached drawings.
The present embodiment provides a method for predicting breast cancer HER2 status, as shown in fig. 1, the method comprising:
and S10, respectively inputting the mammary gland MR image to be predicted into a first classification model and a second classification model.
Specifically, the breast MR (magnetic resonance) image to be predicted is a breast MR image of a breast cancer patient acquired in advance by MR imaging equipment, and the image carries a breast cancer region. The first classification model and the second classification model are preset and operate in parallel, and both take the breast MR image as input. The first classification model is used to predict the HER2 high expression probability and the HER2 negative expression probability of the breast tumor region in the breast MR image, and the second classification model is used to predict the predicted HER2 low expression probability and the predicted HER2 zero expression probability of that region.
S20, controlling the first classification model and the second classification model to determine the HER2 category corresponding to the mammary gland MR image.
Specifically, HER2 (human epidermal growth factor receptor 2) is a proto-oncogene encoding an epidermal growth factor receptor with tyrosine kinase activity, located at chromosome 17q21, and is a unique predictive marker for HER2-targeted therapy. The HER2 categories are HER2 high expression, HER2 low expression and HER2 zero expression, which represent different HER2 statuses; the HER2 status of the breast tumor region can be determined from the predicted HER2 category, so that a doctor can select a treatment based on it. In pathology, immunohistochemistry (IHC) results are scored as IHC0, IHC1+, IHC2+ or IHC3+: IHC3+, or IHC2+ with ISH (in situ hybridization) positive, is read as HER2 high expression; IHC1+, or IHC2+ with ISH negative, is read as HER2 low expression; and IHC0 is read as HER2 zero expression.
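The IHC/ISH reading rules quoted above map directly to a small lookup. A minimal sketch follows; the function name and the string-valued score encoding are assumptions made for illustration:

```python
def her2_from_pathology(ihc, ish=None):
    """Map an IHC score (plus the ISH result when IHC is 2+) to a HER2
    category, following the reading rules described in the text."""
    if ihc == "3+" or (ihc == "2+" and ish == "+"):
        return "HER2 high expression"
    if ihc == "1+" or (ihc == "2+" and ish == "-"):
        return "HER2 low expression"
    if ihc == "0":
        return "HER2 zero expression"
    raise ValueError(f"unrecognized IHC/ISH combination: {ihc!r}/{ish!r}")
```

Note that an IHC2+ score alone is ambiguous and requires the ISH result to be resolved, which is why ISH appears as a second argument.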
In an implementation manner of this embodiment, the controlling the first classification model and the second classification model to determine the HER2 category corresponding to the breast MR image specifically includes:
s21, controlling the first classification model to determine the HER2 high expression probability and the HER2 negative expression probability corresponding to the mammary gland MR image;
s22, controlling the second classification model to determine the low expression probability of the predicted HER2 and the zero expression probability of the predicted HER2 corresponding to the mammary gland MR image;
s23, determining the HER2 category corresponding to the mammary gland MR image based on the HER2 high expression probability, the HER2 negative expression probability, the predicted HER2 low expression probability and the predicted HER2 zero expression probability.
In particular, the first classification model predicts the HER2 high expression probability and the HER2 negative expression probability of the breast tumor region in the breast MR image, where the sum of these two probabilities equals 1. The second classification model predicts the predicted HER2 low expression probability and the predicted HER2 zero expression probability based on the breast MR image, where the sum of these two probabilities also equals 1. Furthermore, as shown in figure 2, HER2 low expression and HER2 zero expression occur only when the HER2 status is negative, not when it is high expression; hence the HER2 categories are HER2 high expression, HER2 low expression and HER2 zero expression. Accordingly, the HER2 low expression probability of the breast MR image is determined from the predicted HER2 low expression probability and the HER2 negative expression probability, and the HER2 zero expression probability from the predicted HER2 zero expression probability and the HER2 negative expression probability.
Based on this, in an implementation manner of this embodiment, the determining the HER2 category corresponding to the breast MR image based on the HER2 high expression probability, the HER2 negative expression probability, the predicted HER2 low expression probability and the predicted HER2 zero expression probability specifically includes:
calculating the products of the HER2 negative expression probability and the predicted HER2 low expression probability and the HER2 negative expression probability and the predicted HER2 zero expression probability respectively to obtain a HER2 low expression probability and a HER2 zero expression probability;
determining a HER2 category corresponding to the mammary MR image based on the HER2 high expression probability, the HER2 low expression probability and the HER2 zero expression probability.
In particular, the HER2 low expression probability may be the product of the HER2 negative expression probability and the predicted HER2 low expression probability, and the HER2 zero expression probability may be the product of the HER2 negative expression probability and the predicted HER2 zero expression probability; this ensures that the HER2 high expression probability, the HER2 low expression probability and the HER2 zero expression probability sum to 1. After the three probabilities are determined, the HER2 category with the maximum probability value may be taken as the HER2 category corresponding to the breast MR image; alternatively, the three probabilities may be compared with a preset threshold, and the HER2 category whose probability exceeds the threshold taken as the HER2 category corresponding to the breast MR image.
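The probability combination and maximum-probability decision described above can be expressed compactly. A minimal sketch (hypothetical function name); it assumes the first model's two outputs sum to 1 and the second model's two outputs sum to 1, so the three final probabilities also sum to 1:

```python
def her2_category(p_high, p_neg, p_low_given_neg, p_zero_given_neg):
    """Combine the two models' outputs into a three-way HER2 prediction.

    p_high/p_neg come from the first classification model;
    p_low_given_neg/p_zero_given_neg come from the second. The low and
    zero probabilities are rescaled by the negative probability so the
    three categories form a proper distribution.
    """
    probs = {
        "HER2 high expression": p_high,
        "HER2 low expression": p_neg * p_low_given_neg,
        "HER2 zero expression": p_neg * p_zero_given_neg,
    }
    # maximum-probability decision rule
    return max(probs, key=probs.get), probs
```

For instance, with a HER2 high expression probability of 0.2, a HER2 negative expression probability of 0.8, and predicted low/zero expression probabilities of 0.7/0.3, the final probabilities are 0.2, 0.56 and 0.24, so the image is classified as HER2 low expression.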
In an implementation manner of this embodiment, as shown in fig. 3, the first classification model includes an imagery omics module, a semantic segmentation module, and a prediction module, and the controlling the first classification model to determine the HER2 high expression probability and the HER2 negative expression probability corresponding to the breast MR image specifically includes:
controlling the imagery omics module to determine imagery omics features corresponding to the breast MR image, and determining a first HER2 high expression probability and a first HER2 negative expression probability corresponding to the breast MR image based on the imagery omics features;
controlling the semantic segmentation module to determine a second HER2 high expression probability and a second HER2 negative expression probability corresponding to the breast MR image;
controlling the prediction module to determine a HER2 high expression probability and a HER2 negative expression probability corresponding to the breast MR image based on the first HER2 high expression probability, the first HER2 negative expression probability, the second HER2 high expression probability and the second HER2 negative expression probability.
Specifically, the imagery omics module is used for obtaining the imagery omics features of the breast MR image and predicting a first HER2 high expression probability and a first HER2 negative expression probability corresponding to the breast MR image based on the obtained imagery omics features, and the semantic segmentation module is used for obtaining the semantic features of the breast MR image and determining a second HER2 high expression probability and a second HER2 negative expression probability corresponding to the breast MR image based on the obtained semantic features. The prediction module is configured to fuse the first HER2 high expression probability, the first HER2 negative expression probability, the second HER2 high expression probability, and the second HER2 negative expression probability to obtain the HER2 high expression probability and the HER2 negative expression probability. It can be understood that the HER2 high expression probability and the HER2 negative expression probability are determined based on both the imagery omics features and the semantic features; that is, the first classification model learns the imagery omics features and the semantic features in the breast MR image and fully mines the image features in the breast MR image, thereby improving the accuracy of the HER2 high expression probability and the HER2 negative expression probability. In one implementation, the average of the first HER2 high expression probability and the second HER2 high expression probability is taken as the HER2 high expression probability, and the average of the first HER2 negative expression probability and the second HER2 negative expression probability is taken as the HER2 negative expression probability.
Of course, in practical applications, the prediction module may instead use a weighted sum of the first HER2 high expression probability and the second HER2 high expression probability as the HER2 high expression probability, and a weighted sum of the first HER2 negative expression probability and the second HER2 negative expression probability as the HER2 negative expression probability, and so on.
In one implementation manner of this embodiment, the imagery omics module includes a lesion delineation unit, an imagery omics feature extraction unit, a first feature screening unit, and a first prediction unit; the controlling the imagery omics module to determine the imagery omics features corresponding to the breast MR image, and the determining the first HER2 high expression probability and the first HER2 negative expression probability corresponding to the breast MR image based on the imagery omics features specifically includes:
controlling the focus delineation unit to extract a region of interest in the breast MR image;
controlling the imaging omics feature extraction unit to extract imaging features of the region of interest so as to obtain candidate imaging omics features;
controlling the first feature screening unit to perform feature screening on the candidate imagery omics features to obtain imagery omics features;
and controlling the first prediction unit to determine a first HER2 high expression probability and a first HER2 negative expression probability corresponding to the breast MR image based on the imagery omics features.
In particular, the lesion delineation unit is configured to extract a region of interest in the breast MR image, wherein the region of interest includes a breast tumor region. The breast MR image is obtained by scanning the whole breast, and the lesion area occupies only a small part of the image; therefore, to reduce redundant information, the breast MR image needs to be preprocessed after acquisition so as to extract the breast lesion area and remove the redundant information, which improves the accuracy of the subsequent first HER2 high expression probability and first HER2 negative expression probability. In one implementation, the process of extracting the region of interest by the lesion delineation unit may be: inputting the breast MR image into the lesion delineation unit, which delineates and extracts the breast tumor region based on the cross section (axial plane), the coronal plane, and the sagittal plane of the breast MR image, so as to obtain the region of interest corresponding to the breast MR image.
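A minimal numpy sketch of the ROI extraction step, assuming a binary tumor mask is available from the delineation. The mask-based bounding box and fixed margin are illustrative assumptions, not the patent's exact procedure:

```python
import numpy as np

# Hedged sketch: crop a region of interest (ROI) from a 3D breast MR
# volume given a binary tumor mask, discarding the redundant background.

def extract_roi(volume, mask, margin=2):
    """Return the sub-volume bounding all non-zero mask voxels,
    padded by `margin` voxels on each axis (clipped to the volume)."""
    coords = np.argwhere(mask > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, volume.shape)
    slices = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[slices]

volume = np.zeros((32, 32, 32))
mask = np.zeros_like(volume)
mask[10:14, 12:16, 8:11] = 1  # a small synthetic "tumor"
roi = extract_roi(volume, mask)
```

With the synthetic mask above, the ROI keeps only an 8 x 8 x 7 block around the lesion instead of the full 32³ volume.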
The imagery omics feature extraction unit is configured to extract the imagery omics features in the region of interest. As shown in fig. 3, the imagery omics features may be determined based on the region of interest, the derivative maps of the region of interest generated by wavelet filtering, and the derivative maps of the region of interest generated by Gaussian filtering, where the wavelet filtering generates 8 derivative images and the Gaussian filtering generates 3 derivative images. Furthermore, as shown in table 1, the imagery omics features may include 1130 image features consisting of first-order features, texture features, and shape features, wherein the first-order features and the texture features are extracted from the region of interest, the derivative maps generated by wavelet filtering, and the derivative maps generated by Gaussian filtering, respectively, and the shape features are extracted from the original image (the breast MR image).
TABLE 1 Imagery omics features
[Table image not reproduced; Table 1 lists the 1130 first-order, texture, and shape features described above.]
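The count of 8 wavelet-derived images is consistent with a single-level 3D wavelet decomposition, which applies a low-pass (L) or high-pass (H) filter along each of the three axes, giving 2³ = 8 sub-bands (LLL through HHH). The sketch below uses a Haar filter pair as an illustrative stand-in; the patent's actual wavelet is not specified here.

```python
import numpy as np

# Hedged sketch: single-level 3D Haar decomposition producing the
# 8 derivative sub-bands (low/high filter choice per axis, 2**3 = 8).

def haar_pair(x, axis):
    """Averages (low-pass) and differences (high-pass) of adjacent
    samples along one axis; each output has half the axis length."""
    a = x.take(range(0, x.shape[axis], 2), axis=axis)
    b = x.take(range(1, x.shape[axis], 2), axis=axis)
    return (a + b) / 2.0, (a - b) / 2.0

def haar_subbands_3d(volume):
    bands = [volume]
    for axis in range(3):  # split every band along each axis in turn
        bands = [filt for band in bands for filt in haar_pair(band, axis)]
    return bands  # LLL, LLH, LHL, LHH, HLL, HLH, HHL, HHH

subbands = haar_subbands_3d(np.random.rand(16, 16, 16))
```

Each of the 8 sub-bands is half the original size along every axis; first-order and texture features would then be extracted from each derivative image, as the text describes.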
The first feature screening unit is used for screening the candidate imagery omics features acquired by the imagery omics feature extraction unit and removing useless features, so as to reduce the amount of computation in the first prediction unit. The first feature screening unit may screen the candidate imagery omics features by using Lasso regression and the chi-square test; the specific process may be as follows: first, the candidate imagery omics features are screened by Lasso linear regression; then, the screened candidate imagery omics features are screened again by the chi-square test to obtain the imagery omics features. In this embodiment, Lasso linear regression compresses the regression coefficients of the image features among the candidate imagery omics features and drives the regression coefficients of some image features to 0, so that the image features with regression coefficients of 0 can be removed. Then, the chi-square test measures the relevance between the image features and the classification, and the image features most likely to be irrelevant to classification are removed based on that relevance, thereby improving the effectiveness of the screened image features.
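A hedged scikit-learn sketch of the two-stage screening described above. The synthetic data, the Lasso `alpha`, and the number of retained features `k` are illustrative assumptions, not the patent's settings:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectKBest, chi2

# Stage 1: Lasso zeroes out coefficients of uninformative features.
# Stage 2: chi-square test keeps features most related to the label.

rng = np.random.default_rng(0)
X = rng.random((100, 50))                    # 100 cases x 50 candidate features
y = (X[:, 0] + X[:, 1] > 1).astype(int)      # synthetic binary HER2 label

lasso = Lasso(alpha=0.001).fit(X, y)
kept = np.flatnonzero(lasso.coef_ != 0)      # drop features with coefficient 0
X_lasso = X[:, kept]

# chi2 requires non-negative inputs, which holds for these features.
k = min(10, X_lasso.shape[1])
selector = SelectKBest(chi2, k=k).fit(X_lasso, y)
X_screened = selector.transform(X_lasso)
```

The first stage removes features whose Lasso coefficient was compressed to zero; the second keeps the `k` surviving features with the largest chi-square statistic against the label.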
In one implementation, the chi-square test measures the correlation of categorical (discrete) variables by comparing the degree of agreement (goodness of fit) between the theoretical frequency and the actual frequency, and thereby determines the correlation between image features and the classification. The expression of the chi-square test may be:
χ² = Σ (f_o − f_e)² / f_e
where f_o represents the actual (observed) frequency and f_e represents the expected frequency.
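A small numeric illustration of the statistic, assuming simple per-cell observed and expected counts:

```python
# chi2 = sum((f_o - f_e)**2 / f_e) over all cells.

def chi_square(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# e.g. 60/40 observed counts against an expected 50/50 split of 100 cases:
# (60-50)**2/50 + (40-50)**2/50 = 2.0 + 2.0 = 4.0
stat = chi_square([60, 40], [50, 50])
```

The larger the statistic, the stronger the evidence that the observed frequencies deviate from the expected ones, i.e. that the feature and the class are related.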
The first prediction unit is used for predicting the first HER2 high expression probability and the first HER2 negative expression probability based on the imagery omics features, wherein the first prediction unit is a binary classification model. In a specific implementation manner, the first prediction unit comprises an LR (logistic regression) classifier, which combines linear regression with the Sigmoid function. Thus, the expressions by which the first prediction unit determines the first HER2 high expression probability and the first HER2 negative expression probability may be:
P(y = 1 | x) = exp(w·x + b) / (1 + exp(w·x + b))
P(y = 0 | x) = 1 / (1 + exp(w·x + b))
where x ∈ Rⁿ represents the feature vector input to the first prediction unit, w ∈ Rⁿ represents the weight vector, and b represents the bias term.
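A minimal numpy sketch of the LR classifier above, assuming already-fitted weights (the values are illustrative, not trained parameters):

```python
import numpy as np

# Hedged sketch: linear model passed through the Sigmoid function,
# yielding the two complementary class probabilities.

def lr_predict(x, w, b):
    z = np.dot(w, x) + b
    p_high = 1.0 / (1.0 + np.exp(-z))  # P(y=1|x): first HER2 high expression probability
    p_negative = 1.0 - p_high          # P(y=0|x): first HER2 negative expression probability
    return p_high, p_negative

p_high, p_negative = lr_predict(np.array([0.3, -1.2]),  # screened feature vector
                                np.array([0.8, 0.5]),   # weight vector w
                                0.1)                    # bias b
```

Because the two outputs are complementary, they always sum to 1, matching the two expressions above.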
In one implementation manner of this embodiment, the semantic segmentation module includes a feature extraction unit, a second feature screening unit, and a second prediction unit; the controlling the semantic segmentation module to determine the second HER2 high expression probability and the second HER2 negative expression probability corresponding to the breast MR image specifically includes:
controlling the feature extraction unit to determine candidate semantic features corresponding to the mammary gland MR image;
controlling the second feature screening unit to screen the candidate semantic features to obtain semantic features corresponding to the mammary gland MR image;
controlling the second prediction unit to determine a second HER2 high expression probability and a second HER2 negative expression probability corresponding to the mammary gland MR image based on the semantic features.
Specifically, the feature extraction unit is configured to extract the candidate semantic features of the breast MR image, where the feature extraction unit may adopt an encoder-decoder structure, that is, it may include an encoder and a decoder: the input item of the encoder is the breast MR image, and its output items are a plurality of first feature maps, where the feature dimensions of the first feature maps differ from one another. The decoder is used for determining the candidate semantic features based on the plurality of first feature maps, wherein the decoder learns the semantic features of the breast tumor region in the breast MR image by upsampling so as to obtain the candidate semantic features.
In one implementation, the encoder includes a convolution unit and a plurality of down-sampling units that are cascaded in sequence; the decoder includes a plurality of up-sampling units and a convolution layer that are cascaded in sequence, where the down-sampling units correspond one-to-one to the up-sampling units, and each down-sampling unit has a skip connection to its corresponding up-sampling unit. The convolution unit in the encoder likewise has a skip connection to the convolution layer in the decoder. In this embodiment, the skip connections between the down-sampling and up-sampling units, and between the convolution unit and the convolution layer, fuse high-resolution and low-resolution image features, so that the breast tumor region can be finely segmented and more candidate semantic features can be extracted, improving the performance of the semantic segmentation module. In addition, the skip connections alleviate the vanishing-gradient problem that arises when the network is deep and facilitate back-propagation of gradients, which accelerates training; fusing feature information of different dimensions acquired by the encoder also improves feature accuracy.
In a typical implementation, as shown in fig. 4, the several down-sampling units are 4 down-sampling units, P1, P2, P3 and P4; each down-sampling unit comprises a pooling layer (pool) and two convolution blocks cascaded in sequence, and each convolution block comprises a convolution layer (conv), a normalization layer (GN) and an activation layer (relu). The several up-sampling units are 4 up-sampling units, U1, U2, U3 and U4; each up-sampling unit comprises an up-sampling layer (upsampling) and a convolution block, each down-sampling unit is connected with its corresponding up-sampling unit through a concatenate skip connection, and the model structure of the convolution block in the up-sampling unit is the same as that of the convolution block in the down-sampling unit. Further, the convolution unit C1 in the encoder includes two convolution blocks, and the model structure of the convolution block in the convolution unit is the same as that of the convolution block in the down-sampling unit. In addition, the feature extraction unit may adopt Focal loss as the loss function in the training stage to evaluate the difference between the prediction result of the segmentation stage and the actual segmentation result, providing a training direction for the segmentation model. The Focal loss expression is as follows:
FL(p_t) = −α_t (1 − p_t)^γ log(p_t)
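A hedged numpy sketch of the binary Focal loss above, which down-weights easy examples so training focuses on hard, misclassified voxels. The α and γ values are the commonly used defaults, not values stated in the patent:

```python
import numpy as np

# FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t)

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """p: predicted foreground probabilities; y: binary ground truth."""
    p = np.clip(p, eps, 1 - eps)            # numerical stability
    p_t = np.where(y == 1, p, 1 - p)        # prob. assigned to the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))

# A confident correct prediction costs far less than a confident wrong one.
easy = focal_loss(np.array([0.9]), np.array([1]))
hard = focal_loss(np.array([0.1]), np.array([1]))
```

The (1 − p_t)^γ modulating factor is what distinguishes Focal loss from plain cross-entropy: well-classified voxels (p_t near 1) contribute almost nothing to the loss.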
In an implementation manner of this embodiment, the controlling the feature extraction unit to determine the candidate semantic features corresponding to the breast MR image specifically includes:
controlling the feature extraction unit to determine a plurality of slice feature maps based on the breast MR image, wherein the plurality of slice feature maps correspond one-to-one to a plurality of slice images included in the breast MR image;
dividing the plurality of slice feature maps into two feature clusters based on a clustering manner, and removing the feature cluster with fewer slice feature maps from the two feature clusters to obtain a plurality of target slice features;
and fusing the target slice features to obtain the candidate semantic features corresponding to the breast MR image.
Specifically, the breast MR image includes a plurality of slice images, and when the breast MR image is input into the feature extraction unit, the feature extraction unit obtains a slice feature map corresponding to each slice image, where the slice feature map reflects the semantic features of its corresponding slice image and is the output item of the penultimate convolutional layer in the encoder. Because the slice feature maps carry a large number of image features, including redundant and useless ones, after the slice feature maps corresponding to all slice images are obtained, they are screened by clustering to obtain a plurality of target slice features, and the candidate semantic features are then determined based on the target slice features.
In an implementation manner of this embodiment, the slice feature maps are divided into two feature clusters by clustering, and the feature cluster containing fewer slice feature maps is removed, so as to obtain the plurality of target slice features. As shown in fig. 4, the specific clustering process may be: presetting the number of clusters, and randomly selecting that number of slice feature maps as the initial cluster centers; calculating the distance from each slice feature map to each cluster center, and assigning each slice feature map to the cluster whose center is nearest; updating the center of each cluster, and detecting whether any cluster center has changed; if a change has occurred, repeating the step of calculating the distance from each slice feature map to each cluster center until no cluster center changes. In this embodiment, the number of clusters is 2.
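The procedure above is a k-means clustering with k = 2 followed by discarding the smaller cluster. A minimal numpy sketch, with deterministic initialisation and a fixed iteration cap standing in for the random initialisation and convergence check described in the text:

```python
import numpy as np

# Hedged sketch: cluster slice feature vectors into 2 clusters and keep
# only the larger cluster as the target slice features.

def screen_slices(features, iters=10):
    # Deterministic initialisation for the sketch (the text selects
    # initial centers randomly).
    centers = features[[0, -1]].astype(float)
    for _ in range(iters):
        # distance from every feature vector to every center
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)        # nearest-center assignment
        for c in range(2):                   # update non-empty clusters
            if np.any(labels == c):
                centers[c] = features[labels == c].mean(axis=0)
    keep = np.bincount(labels, minlength=2).argmax()  # larger cluster
    return features[labels == keep]

# 8 mutually similar slice features plus 2 outlier slices
feats = np.vstack([
    np.ones((8, 4)) + 0.01 * np.random.default_rng(1).random((8, 4)),
    5 * np.ones((2, 4)),
])
targets = screen_slices(feats)
```

Here the 2 outlier slice features form the smaller cluster and are discarded, leaving the 8 informative slice features as the targets.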
The second feature screening unit screens the candidate semantic features by using Lasso regression and mutual information, where the Lasso regression is the same as that used by the first feature screening unit and will not be described again here. Mutual information measures the dependency between variables; the correlation between the candidate semantic features and the preset HER2 categories can be quantified through mutual information, so that the candidate semantic features can be screened. The formula for calculating the mutual information may be:
I(X; Y) = Σ_{x∈X} Σ_{y∈Y} p(x, y) log( p(x, y) / ( p(x) p(y) ) )
wherein X and Y are two random variables.
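A hedged numpy sketch computing the mutual information between a discrete feature and the HER2 label directly from the empirical joint and marginal probabilities (the example data are synthetic):

```python
import numpy as np

# I(X;Y) = sum_x sum_y p(x,y) * log(p(x,y) / (p(x) * p(y)))

def mutual_information(x, y):
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            p_xy = np.mean((x == xv) & (y == yv))   # joint probability
            p_x, p_y = np.mean(x == xv), np.mean(y == yv)  # marginals
            if p_xy > 0:
                mi += p_xy * np.log(p_xy / (p_x * p_y))
    return mi

y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
mi_dep = mutual_information(y, y)  # feature identical to the label: maximal MI
mi_part = mutual_information(np.array([0, 1, 0, 1, 0, 1, 0, 1]), y)
```

A feature identical to the label attains the maximal value (the label's entropy, log 2 here); a feature only partially related to the label scores lower, which is the basis for ranking and screening candidate semantic features.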
In the step S22, the input item of the second classification model is the breast MR image, and the output items include the predicted HER2 low expression probability and the predicted HER2 zero expression probability. The model structure of the second classification model is substantially the same as that of the first classification model, and the two models differ only in their feature screening units: the first feature screening unit in the first classification model screens the candidate imagery omics features by adopting Lasso regression and the chi-square test, and the second feature screening unit in the first classification model screens the candidate semantic features by adopting Lasso regression and mutual information; whereas the second feature screening unit in the second classification model screens the candidate semantic features by adopting Lasso regression and the chi-square test.
In summary, the present embodiment provides a method for predicting breast cancer HER2 status, the method including inputting a breast MR image to be predicted into a first classification model and a second classification model respectively, and controlling the first classification model and the second classification model to determine whether the HER2 category corresponding to the breast MR image is the HER2 high expression category, the HER2 low expression category, or the HER2 zero expression category. The method adopts the first classification model and the second classification model to learn different aspects of the breast lesion characteristics in the breast MR image, so that the lesion information mined from different aspects is correlated; the HER2 category is then determined to be the HER2 high expression category, the HER2 low expression category, or the HER2 zero expression category through the correlated lesion information, which further improves the accuracy of identifying the HER2 category.
Based on the above method for predicting breast cancer HER2 status, the present embodiment provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps in the method for predicting breast cancer HER2 status as described in the above embodiment.
Based on the method for predicting breast cancer HER2 status, the present application further provides a terminal device, as shown in fig. 6, including at least one processor (processor) 20; a display screen 21; and a memory (memory) 22, and may further include a communication Interface (Communications Interface) 23 and a bus 24. The processor 20, the display 21, the memory 22 and the communication interface 23 can communicate with each other through the bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 may transmit information. Processor 20 may call logic instructions in memory 22 to perform the methods in the embodiments described above.
Furthermore, the logic instructions in the memory 22 may be implemented in the form of software functional units and, when sold or used as independent products, stored in a computer readable storage medium.
The memory 22, which is a computer-readable storage medium, may be configured to store a software program, a computer-executable program, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 executes the functional application and data processing, i.e. implements the method in the above-described embodiments, by executing the software program, instructions or modules stored in the memory 22.
The memory 22 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created according to the use of the terminal device, and the like. Further, the memory 22 may include a high-speed random access memory and may also include a non-volatile memory, for example, any of a variety of media that can store program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk; it may also be a transient storage medium.
In addition, the specific processes loaded and executed by the storage medium and by the instruction processors in the terminal device are described in detail in the method above and are not repeated here.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present application.

Claims (8)

1. A method of predicting breast cancer HER2 status, the method comprising:
respectively inputting a mammary gland MR image to be predicted into a first classification model and a second classification model;
controlling the first and second classification models to determine a HER2 class corresponding to the breast MR image, wherein the HER2 class comprises high HER2 expression, low HER2 expression, or zero HER2 expression;
the controlling the first classification model and the second classification model to determine the HER2 category corresponding to the breast MR image specifically includes:
controlling the first classification model to determine the HER2 high expression probability and the HER2 negative expression probability corresponding to the mammary gland MR image;
controlling the second classification model to determine a predicted HER2 low expression probability and a predicted HER2 zero expression probability corresponding to the mammary gland MR image;
determining a HER2 category corresponding to the breast MR image based on the HER2 high expression probability, the HER2 negative expression probability, the predicted HER2 low expression probability, and the predicted HER2 zero expression probability;
the determining, based on the HER2 negative expression probability, the predicted HER2 low expression probability, and the predicted HER2 zero expression probability, a HER2 category corresponding to the breast MR image specifically includes:
respectively calculating the product of the HER2 negative expression probability and the predicted HER2 low expression probability and the product of the HER2 negative expression probability and the predicted HER2 zero expression probability to obtain a HER2 low expression probability and a HER2 zero expression probability;
determining a HER2 category corresponding to the breast MR image based on the high HER2 expression probability, the low HER2 expression probability and the zero HER2 expression probability.
2. The method of claim 1, wherein the first classification model comprises an imagery omics module, a semantic segmentation module and a prediction module; the controlling the first classification model to determine the HER2 high expression probability and the HER2 negative expression probability corresponding to the breast MR image specifically includes:
controlling the imagery omics module to determine the imagery omics features corresponding to the breast MR image, and determining the first HER2 high expression probability and the first HER2 negative expression probability corresponding to the breast MR image based on the imagery omics features;
controlling the semantic segmentation module to determine a second HER2 high expression probability and a second HER2 negative expression probability corresponding to the breast MR image;
controlling the prediction module to determine a HER2 high expression probability and a HER2 negative expression probability corresponding to the breast MR image based on the first HER2 high expression probability, the first HER2 negative expression probability, the second HER2 high expression probability and the second HER2 negative expression probability.
3. The method of claim 2, wherein the imagery omics module comprises a lesion delineation unit, an imagery omics feature extraction unit, a first feature screening unit, and a first prediction unit; the controlling the imagery omics module to determine the imagery omics features corresponding to the breast MR image, and the determining the first HER2 high expression probability and the first HER2 negative expression probability corresponding to the breast MR image based on the imagery omics features specifically includes:
controlling the focus delineation unit to extract a region of interest in the breast MR image;
controlling the imaging omics feature extraction unit to extract imaging features of the region of interest so as to obtain candidate imaging omics features;
controlling the first feature screening unit to perform feature screening on the candidate imagery omics features so as to obtain imagery omics features;
and controlling the first prediction unit to determine a first HER2 high expression probability and a first HER2 negative expression probability corresponding to the breast MR image based on the imagery omics features.
4. The method of claim 2, wherein the semantic segmentation module comprises a feature extraction unit, a second feature screening unit and a second prediction unit; the controlling the semantic segmentation module to determine the second HER2 high expression probability and the second HER2 negative expression probability corresponding to the breast MR image specifically includes:
controlling the feature extraction unit to determine candidate semantic features corresponding to the mammary gland MR image;
controlling the second feature screening unit to screen the candidate semantic features to obtain semantic features corresponding to the mammary gland MR image;
and controlling the second prediction unit to determine a second HER2 high expression probability and a second HER2 negative expression probability corresponding to the mammary gland MR image based on the semantic features.
5. The method for predicting breast cancer HER2 status as set forth in claim 4, wherein the controlling the feature extraction unit to determine the candidate semantic features corresponding to the breast MR image specifically comprises:
controlling the feature extraction unit to determine a plurality of slice feature maps based on the mammary gland MR image, wherein the plurality of slice feature maps correspond to a plurality of slice images included in the mammary gland MR image one by one;
dividing the plurality of slice feature maps into two feature clusters based on a clustering manner, and removing the feature cluster with fewer slice feature maps from the two feature clusters to obtain a plurality of target slice features;
and fusing the target slice features to obtain candidate semantic features corresponding to the breast MR image.
6. The method of claim 2, wherein the model structure of the second classification model is the same as the model structure of the first classification model, and the second classification model is different from the first classification model in that the first feature screening unit in the first classification model screens the candidate imagery omics features by using Lasso regression and chi-square test, and the second feature screening unit in the first classification model screens the candidate semantic features by using Lasso regression and mutual information; and the second feature screening unit in the second classification model screens the candidate semantic features by adopting Lasso regression and chi-square test.
7. A computer readable storage medium, having one or more programs stored thereon for execution by one or more processors to perform the steps of the method for predicting breast cancer HER2 status of any of claims 1-6.
8. A terminal device, comprising: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;
the communication bus realizes connection communication between the processor and the memory;
the processor when executing said computer readable program carries out the steps of the method for predicting breast cancer HER2 status as defined in any one of claims 1 to 6.
CN202111338891.8A 2021-11-12 2021-11-12 Breast cancer HER2 state prediction method and related equipment Active CN114171197B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111338891.8A CN114171197B (en) 2021-11-12 2021-11-12 Breast cancer HER2 state prediction method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111338891.8A CN114171197B (en) 2021-11-12 2021-11-12 Breast cancer HER2 state prediction method and related equipment

Publications (2)

Publication Number Publication Date
CN114171197A CN114171197A (en) 2022-03-11
CN114171197B true CN114171197B (en) 2022-10-04

Family

ID=80479183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111338891.8A Active CN114171197B (en) 2021-11-12 2021-11-12 Breast cancer HER2 state prediction method and related equipment

Country Status (1)

Country Link
CN (1) CN114171197B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024021037A1 (en) * 2022-07-29 2024-02-01 京东方科技集团股份有限公司 Disease analysis method and apparatus, and disease analysis model training method and apparatus
CN115147668B (en) * 2022-09-06 2022-12-27 北京鹰瞳科技发展股份有限公司 Training method of disease classification model, disease classification method and related products
CN115440383B (en) * 2022-09-30 2023-03-21 中国医学科学院北京协和医院 System for predicting curative effect of PD-1/PD-L1 monoclonal antibody of advanced cancer patient
CN117152506A (en) * 2023-08-25 2023-12-01 Guangzhou First People's Hospital (Guangzhou Digestive Disease Center, First People's Hospital Affiliated to Guangzhou Medical University, Second Affiliated Hospital of South China University of Technology) Triple negative breast cancer immunophenotype prediction method and system based on multi-scale characteristics

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11983868B2 (en) * 2018-02-21 2024-05-14 Case Western Reserve University Predicting neo-adjuvant chemotherapy response from pre-treatment breast magnetic resonance imaging using artificial intelligence and HER2 status
CN108898160B (en) * 2018-06-01 2022-04-08 中国人民解放军战略支援部队信息工程大学 Breast cancer histopathology grading method based on CNN and imaging omics feature fusion
CN111401214B (en) * 2020-03-12 2023-04-18 四川大学华西医院 Multi-resolution integrated HER2 interpretation method based on deep learning
CN111583320B (en) * 2020-03-17 2023-04-07 哈尔滨医科大学 Breast cancer ultrasonic image typing method and system fusing deep convolutional network and image omics characteristics and storage medium

Also Published As

Publication number Publication date
CN114171197A (en) 2022-03-11

Similar Documents

Publication Publication Date Title
CN114171197B (en) Breast cancer HER2 state prediction method and related equipment
CN106815481B (en) Lifetime prediction method and device based on image omics
CN112768072B (en) Cancer clinical index evaluation system constructed based on imaging omics qualitative algorithm
Zhang et al. Automated semantic segmentation of red blood cells for sickle cell disease
US20170249739A1 (en) Computer analysis of mammograms
US20230306598A1 (en) Systems and methods for mesothelioma feature detection and enhanced prognosis or response to treatment
JP6168426B2 (en) Disease analysis apparatus, control method, and program
Ström et al. Pathologist-level grading of prostate biopsies with artificial intelligence
CN112561869B (en) Pancreatic neuroendocrine tumor postoperative recurrence risk prediction method
CN112562855B (en) Hepatocellular carcinoma postoperative early recurrence risk prediction method, medium and terminal equipment
CN113764101B (en) Novel auxiliary chemotherapy multi-mode ultrasonic diagnosis system for breast cancer based on CNN
CN115457361A (en) Classification model obtaining method, expression class determining method, apparatus, device and medium
CN109191422B (en) System and method for detecting early ischemic stroke based on conventional CT image
CN115938597A (en) cancer prognosis
CN118541706A (en) Classification method
EP4288975A1 (en) Apparatus and method for training of machine learning models using annotated image data for pathology imaging
CN116563651A (en) Nasopharyngeal carcinoma prognosis feature determination method, system, device and storage medium
CN115631387B (en) Method and device for predicting lung cancer pathology high-risk factor based on graph convolution neural network
CN117011601A (en) Multi-modal classification prediction method, apparatus, processor and machine-readable storage medium
Ulloa et al. Improving multiple sclerosis lesion boundaries segmentation by convolutional neural networks with focal learning
CN114092427B (en) Crohn's disease and intestinal tuberculosis classification method based on multi-sequence MRI image
CN114067154B (en) Crohn's disease fibrosis classification method based on multi-sequence MRI and related equipment
Tampu et al. Diseased thyroid tissue classification in OCT images using deep learning: Towards surgical decision support
CN115063351A (en) Deep learning-based fetal MRI brain tissue segmentation method and device
CN115457339B (en) AD prediction method, system and device based on deep ensemble learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant