CN116415649B - Breast micro cancer analysis method based on multi-mode ultrasonic image self-supervision learning - Google Patents

Breast micro cancer analysis method based on multi-mode ultrasonic image self-supervision learning

Info

Publication number
CN116415649B
CN116415649B (application number CN202310364389.7A)
Authority
CN
China
Prior art keywords
micro
cancer
image
self
breast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310364389.7A
Other languages
Chinese (zh)
Other versions
CN116415649A (en)
Inventor
于腾飞
何文
张巍
陈涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tiantan Hospital
Original Assignee
Beijing Tiantan Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Tiantan Hospital
Priority to CN202310364389.7A
Publication of CN116415649A
Application granted
Publication of CN116415649B
Legal status: Active
Anticipated expiration

Links

Classifications

    • G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS → G06N 3/00 Computing arrangements based on biological models → G06N 3/02 Neural networks → G06N 3/08 Learning methods → G06N 3/0895 Weakly supervised learning, e.g. semi-supervised or self-supervised learning
    • G06N 3/096 Transfer learning (same G06N 3/08 hierarchy as above)
    • G PHYSICS → G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS → G16H HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA → G16H 30/00 ICT specially adapted for the handling or processing of medical images → G16H 30/20 for handling medical images, e.g. DICOM, HL7 or PACS

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a breast micro-cancer analysis method based on self-supervised learning of multimodal ultrasound images, which comprises the following steps: step one, acquiring multimodal ultrasound images of breast micro-cancers to form a database and establish a gold standard; step two, establishing a self-supervised deep learning model from clinical images of breast tumors larger than 1 cm; step three, training on the processed breast micro-cancer image data set with the self-supervised deep learning model to obtain a micro-cancer recognition model; step four, reconstructing a three-dimensional model based on the recognized micro-cancer; and step five, analyzing the relationship between microvessels and the cancer using the reconstructed three-dimensional model. The method improves the detection rate of early breast cancers with good prognosis and provides a new theoretical basis and a new method for accurate diagnosis and treatment while reducing the number of unnecessary biopsies.

Description

Breast micro cancer analysis method based on multi-mode ultrasonic image self-supervision learning
Technical Field
The invention relates to the technical field of breast cancer ultrasound imaging, and in particular to a breast micro-cancer analysis method based on self-supervised learning of multimodal ultrasound images.
Background
Global cancer incidence and mortality are rising rapidly. Female breast cancer (11.6% of global cancer incidence) is the second most common cancer and the leading cause of cancer death in women. Early detection has been shown to reduce breast cancer mortality. In general, tumor size is closely related to the likelihood of metastasis and therefore affects recurrence and ultimate survival, so the detection and characterization of small breast lesions is of paramount importance. Studies show that lesions smaller than 1 cm in diameter have higher survival rates; invasive ductal or lobular lesions smaller than 1 cm have a good prognosis, with a 20-year recurrence-free survival of 88%; and extrapolating the log-normal relationship between tumor size and metastasis probability to small breast lesions yields a metastasis probability of about 25.5% for tumors approximately 2 cm in diameter and about 1.2% for tumors approximately 5 mm in diameter.
Breast cancer is the malignant tumor with the highest incidence among women worldwide and the leading cause of cancer-related death in women. Tumor size is closely related to the probability of metastasis and has a significant impact on recurrence and ultimate survival, so the detection, diagnosis and prognosis of breast micro-lesions are critically important. With advances in ultrasound technology, the detection rate of breast lesions below 1 cm by ultrasound continues to rise, and studies show that lesions with a maximum diameter of 1 cm or less have a high survival rate. The main problem at present, however, is that T1b and smaller tumors are difficult to classify as benign or malignant, and most such patients cannot be diagnosed satisfactorily by triple assessment; even newer techniques such as contrast-enhanced ultrasound and elastography remain challenging for these lesions.
The main driving force behind artificial intelligence in ultrasound is to improve the diagnostic accuracy of medical image interpretation while reducing the demand for manpower. Convolutional neural networks can learn from each new case they process, analogous to building a knowledge base, and both qualitative and quantitative imaging parameters have been used in artificial intelligence algorithms; such methods show good prospects for distinguishing benign from malignant thyroid and breast lesions. Self-supervised techniques were first applied in deep learning methods for natural language processing and were subsequently introduced into image processing by Carl Doersch et al. Compared with conventional deep CNNs, self-supervised techniques can make full use of limited training samples by mining intrinsic image features, thereby improving the recognition performance of the model. The technique can be used for model pre-training on large numbers of unlabeled samples, or added to supervised learning to help the model strengthen its feature representation. To date it has achieved good results mainly on natural-scene databases such as ImageNet, while little research exists in medical imaging, and in ultrasound in particular. Artificial intelligence is an emerging direction in medical image processing, but it currently faces a difficult prerequisite: the need for massive numbers of sub-1 cm ultrasound images together with pathology and high-quality annotation.
Disclosure of Invention
The invention aims to provide a breast micro-cancer analysis method based on self-supervised learning of multimodal ultrasound images that solves the problems identified in the background above.
To this end, the invention provides a breast micro-cancer analysis method based on self-supervised learning of multimodal ultrasound images, comprising the following steps:
step one, acquiring multimodal ultrasound images of breast micro-cancers to form a database and establish a gold standard;
step two, establishing a self-supervised deep learning model from clinical images of breast tumors larger than 1 cm;
step three, training on the processed breast micro-cancer image data set with the self-supervised deep learning model to obtain a micro-cancer recognition model;
step four, reconstructing a three-dimensional model based on the recognized micro-cancer;
and step five, analyzing the relationship between microvessels and the cancer using the reconstructed three-dimensional model.
Preferably, in step one, multimodal ultrasound images of breast micro-tumors are acquired, including conventional 2D, CDFI and PW ultrasound, contrast-enhanced ultrasound, and 3D micro-flow imaging before and after contrast, with both static images and dynamic clips stored for every modality; after collection, physicians with more than 5 years of clinical ultrasound experience annotate the lesions, and the micro-cancer lesion areas are labeled and stored using the pathological result as the gold standard.
Preferably, in step two, a preliminary model is obtained by self-supervised deep learning on a large number of clinical breast images larger than 1 cm.
Preferably, in step three, the clinical images of micro-cancers smaller than 1 cm are divided proportionally into a training set and a test set, with the test set kept independent;
training is then carried out on the training set starting from the preliminary model, a self-supervised learning algorithm is used to mine the relationships and features within the training data, and a model ensemble method is used to aggregate several weak models into a single model with high accuracy.
Preferably, in step four, when lesions are identified by the micro-cancer recognition model, two-dimensional ultrasound and pathological information are fused; on this basis, 3D image information from micro-perfusion and pathology is used as constraint and guidance, and a generative adversarial network is employed to reconstruct a clear 3D structure of the microvessels and the lesion.
Preferably, qualitative and quantitative analyses of ultrasound microvascular 3D imaging, tumor boundaries and related features are carried out on the obtained 3D structural data, and changes in the microvascular environment are analyzed.
Preferably, during training the self-supervised deep learning model randomly selects two image patches, extracts their features with a deep network and analyzes the positional relationship between them; in the micro-cancer recognition model, the relationships among all patches are predicted by the self-supervised learning branch, features inside image patches as well as associated features between patches are mined through weight sharing with that branch, and pathological images and ultrasound images are both used for training, with ultrasound images converted to the pathology style provided as additional input.
Preferably, the acquired data are multi-center data, and a generative adversarial network is used to remove differences between centers and to convert ultrasound images into pathology-style images.
With this breast micro-cancer analysis method based on self-supervised learning of multimodal ultrasound images, effective representational features are extracted from the small sample data set under study by self-supervised deep learning, and, combined with transfer learning, the model learned from a large number of clinical images of breast tumors larger than 1 cm is efficiently transferred to the micro-cancer model and validated clinically. The method provides a new theoretical basis and a new approach to accurate diagnosis and treatment, improving the detection rate of early breast cancers with good prognosis and reducing the number of unnecessary biopsies.
The technical solution of the invention is described in further detail below with reference to the drawings and embodiments.
Drawings
FIG. 1 is a schematic overall flow chart of an embodiment of the breast micro-cancer analysis method based on self-supervised learning of multimodal ultrasound images;
FIG. 2 is a schematic diagram of the generative adversarial network architecture used to remove multi-center data differences and to convert ultrasound images into pathology-style images in accordance with the present invention;
FIG. 3 is a schematic diagram of the micro-cancer recognition model structure based on self-supervised deep learning;
FIG. 4 is a schematic diagram of a three-dimensional reconstruction model according to the present invention.
Detailed Description
The technical solution of the invention is further described below with reference to the accompanying drawings and embodiments.
Unless defined otherwise, technical or scientific terms used herein have the ordinary meaning understood by a person of ordinary skill in the art to which this invention belongs. The terms "first," "second," and the like do not denote any order, quantity or importance, but merely distinguish one element from another. The word "comprising" or "comprises" means that the elements or items preceding the word include those listed after the word and their equivalents, without excluding other elements or items. The terms "installed," "connected," and "coupled" are to be construed broadly: a connection may be fixed, detachable or integral; mechanical or electrical; direct or indirect through an intermediate medium; or internal communication between two elements. "Upper," "lower," "left," "right," and the like indicate only relative positional relationships, which may change when the absolute position of the described object changes.
Examples
As shown in Fig. 1, the breast micro-cancer analysis method based on self-supervised learning of multimodal ultrasound images provided by the invention comprises the following steps.
Step one: multimodal ultrasound images of breast micro-cancers are acquired to form a database, and a gold standard is established.
Specifically, multimodal ultrasound images of breast micro-tumors are collected, including conventional 2D, CDFI and PW ultrasound, contrast-enhanced ultrasound, and 3D micro-flow imaging before and after contrast; static images and dynamic clips are stored for every modality. After collection, physicians with more than 5 years of clinical ultrasound experience annotate the lesions, and the micro-cancer lesion areas are labeled and stored using the pathological result as the gold standard.
To assess performance in a clinical setting for breast micro-lesions (≤1 cm), a large representative dataset from the ultrasound departments of multiple hospitals was established. The patients in the dataset are women aged 18-60 years who underwent ultrasound-guided biopsy or surgical pathology. Conventional screening could be performed on any brand of ultrasound instrument without limitation, while contrast-enhanced ultrasound and microvascular imaging used an Aplio i900 color ultrasound diagnostic system with M12L, ML6-15D and L-5 probes at frequencies of 5-13 MHz. Only images containing nodules whose morphological boundaries can be identified are selected for each case. Based on earlier work, the breast micro-lesions (≤1 cm) are evaluated with conventional ultrasound, contrast-enhanced ultrasound and micro-flow imaging, strictly classified according to the BI-RADS standard, and compared with the pathological results. The measured parameters include: the size of the breast micro-lesion, CDFI and PW; contrast perfusion parameters such as start-enhancement time, time to peak, peak intensity and enhancement pattern; and microcirculatory blood-flow distribution, flow volume, 3D imaging and the like.
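For illustration, the database record implied by these parameters can be sketched as a simple Python data structure; the class and field names below (MicroLesionCase, ceus_time_to_peak_s, and so on) are assumptions introduced for this sketch and are not a format defined by the patent.

```python
# Hypothetical per-lesion record mirroring the parameters listed above:
# BI-RADS category, CDFI/PW measurements, contrast-perfusion timing and
# intensity, 3-D micro-flow data, stored image paths, and the pathology
# label that serves as the gold standard.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MicroLesionCase:
    patient_id: str
    lesion_size_mm: float                        # micro-lesions are <= 10 mm
    birads_category: str                         # e.g. "4a"
    cdfi_grade: Optional[int] = None             # colour-Doppler flow grade
    pw_resistive_index: Optional[float] = None   # pulsed-wave Doppler RI
    ceus_start_time_s: Optional[float] = None    # contrast start-enhancement time
    ceus_time_to_peak_s: Optional[float] = None
    ceus_peak_intensity: Optional[float] = None
    enhancement_pattern: Optional[str] = None
    microflow_3d_path: Optional[str] = None      # 3-D micro-flow volume file
    static_image_paths: List[str] = field(default_factory=list)
    dynamic_clip_paths: List[str] = field(default_factory=list)
    pathology_label: Optional[str] = None        # gold standard from biopsy/surgery
```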
The acquired data are multi-center data, and a generative adversarial network is used to remove inter-center differences and to convert ultrasound images into pathology-style images. Because the hospitals involved use many brands of ultrasound machine and individual physicians adjust the images very differently, the multi-center ultrasound data vary considerably, and conventional preprocessing methods alone, such as statistical normalization and data augmentation, give unsatisfactory results. In addition, the style of ultrasound data is completely different from that of pathological images: ultrasound is real-time, with low resolution and low signal-to-noise ratio, whereas pathology images are static, with ultra-high resolution and high signal-to-noise ratio, which makes it very difficult to fuse the two modalities and analyze their features jointly. A generative adversarial network is adopted to solve both problems in a unified way; the specific algorithm structure is shown in Fig. 2. The networks used for removing multi-center differences and for converting ultrasound to pathology-style images are similar in structure and are therefore described together. In the figure, Input_A is an A-type image, Generated_B is the generated B-type image, and Cyclic_A is the A-type image restored from the generated B-type image. The Generator produces an image of one style from an image of the other, and the Discriminator judges whether an input image belongs to the pre-specified style; both are implemented with deep neural networks. When removing multi-center differences, B is the image style of a selected reference center toward which the images of the other centers are mapped; when converting ultrasound images to pathology images, B is the pathology image. Processing by this network eliminates the differences among the centers' data and removes data noise, which effectively improves the representational capacity and generalization of the model; at the same time, converting ultrasound to the pathology style effectively fuses ultrasound and pathological image information, improving the accuracy of micro-cancer recognition.
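The cycle-consistent adversarial arrangement described above can be illustrated with a minimal PyTorch sketch. The layer sizes, loss weights and names (G_AB, G_BA, D_B, train_step) are assumptions for illustration only and do not reproduce the exact network of Fig. 2; a complete implementation would also include a second discriminator for domain A and an identity loss.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.InstanceNorm2d(cout), nn.ReLU(inplace=True))

class Generator(nn.Module):
    """Maps a single-channel image from one style (domain) to the other."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Judges whether an image belongs to the pre-specified target style B."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 64),
                                 nn.Conv2d(64, 1, 3, padding=1))  # patch-wise real/fake map
    def forward(self, x):
        return self.net(x)

G_AB, G_BA, D_B = Generator(), Generator(), Discriminator()
opt_g = torch.optim.Adam(list(G_AB.parameters()) + list(G_BA.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D_B.parameters(), lr=2e-4)
adv, cyc = nn.MSELoss(), nn.L1Loss()

def train_step(input_A, real_B):
    # Generator update: Generated_B should fool D_B, and Cyclic_A (A -> B -> A)
    # should reconstruct Input_A so that content is preserved while style changes.
    generated_B = G_AB(input_A)
    cyclic_A = G_BA(generated_B)
    pred = D_B(generated_B)
    g_loss = adv(pred, torch.ones_like(pred)) + 10.0 * cyc(cyclic_A, input_A)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    # Discriminator update: separate real domain-B images from generated ones.
    pred_real, pred_fake = D_B(real_B), D_B(generated_B.detach())
    d_loss = adv(pred_real, torch.ones_like(pred_real)) + \
             adv(pred_fake, torch.zeros_like(pred_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    return g_loss.item(), d_loss.item()
```

For center harmonization, real_B would be images from the chosen reference center; for ultrasound-to-pathology conversion, real_B would be pathology-style images.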
Step two: a self-supervised deep learning model is established from clinical images of breast tumors larger than 1 cm.
Specifically, a preliminary model is obtained by self-supervised deep learning on a massive set of clinical breast images larger than 1 cm. This model has a low recognition rate for micro-cancers, but it encodes general knowledge of breast ultrasound images.
As shown in Fig. 3, during training the self-supervised deep learning model randomly selects two image patches, extracts their features with a deep network, and analyzes the positional relationship between them.
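This two-patch pretext task can be sketched as follows; the encoder architecture, the 8-way relative-position labelling and the helper names (PatchEncoder, sample_patch_pair) are illustrative assumptions rather than the exact network of Fig. 3.

```python
import random
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Small CNN that embeds a single-channel ultrasound patch into a feature vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))
    def forward(self, x):
        return self.features(x)

class RelativePositionNet(nn.Module):
    """Encodes two patches with shared weights and classifies their relative position."""
    def __init__(self, dim=128, n_positions=8):
        super().__init__()
        self.encoder = PatchEncoder(dim)
        self.head = nn.Linear(2 * dim, n_positions)
    def forward(self, patch_a, patch_b):
        feats = torch.cat([self.encoder(patch_a), self.encoder(patch_b)], dim=1)
        return self.head(feats)

# The 8 neighbouring directions used as self-supervised labels.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def sample_patch_pair(image, size=64):
    """Crop a centre patch and one of its 8 neighbours from a (C, H, W) tensor."""
    _, h, w = image.shape                        # image must be larger than 3 * size
    cy = random.randint(size, h - 2 * size - 1)
    cx = random.randint(size, w - 2 * size - 1)
    label = random.randrange(len(OFFSETS))
    dy, dx = OFFSETS[label]
    a = image[:, cy:cy + size, cx:cx + size]
    b = image[:, cy + dy * size:cy + (dy + 1) * size,
                 cx + dx * size:cx + (dx + 1) * size]
    return a, b, label

# Training step (illustrative): cross-entropy on the predicted relative position.
# logits = model(patch_a_batch, patch_b_batch)
# loss = nn.functional.cross_entropy(logits, position_labels)
```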
Step three: the processed breast micro-cancer image data set is trained with the self-supervised deep learning model to obtain the micro-cancer recognition model.
Specifically, the clinical images of micro-cancers smaller than 1 cm are divided into a training set and a test set in a 7:3 ratio, with the test set kept independent and invisible throughout training. Training on the training set starts from the preliminary model of step two; because the amount of training data is small, a self-supervised learning algorithm is used to fully mine the relationships and features within the training data, and a model ensemble method is applied to aggregate several weak models into a single model with high accuracy. Finally, the artificial intelligence is compared with clinicians: the diagnostic reports collected together with the ultrasound images are classified and compared with the output of the deep learning model, and after the model is built, ultrasound physicians of different experience levels and professional titles take the same test as the model, with the results analyzed statistically.
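A compact sketch of the 7:3 split and the ensemble step is given below. The scikit-learn-style fit/predict_proba interface and the function name split_and_ensemble are assumptions made for illustration; in practice each "weak model" would be a network fine-tuned from the preliminary model of step two.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def split_and_ensemble(images, labels, weak_model_builders, seed=42):
    # 7:3 split; the test set stays untouched until the final evaluation.
    x_train, x_test, y_train, y_test = train_test_split(
        images, labels, test_size=0.3, random_state=seed, stratify=labels)
    models = []
    for build in weak_model_builders:      # each builder returns one weak classifier
        model = build()
        model.fit(x_train, y_train)
        models.append(model)
    # Soft-voting ensemble: average the malignancy probabilities of the weak models.
    probs = np.mean([m.predict_proba(x_test)[:, 1] for m in models], axis=0)
    return models, probs, y_test
```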
As shown in Fig. 3, in the micro-cancer recognition model the relationships among all patches are predicted by the self-supervised learning branch, and features inside the image patches as well as associated features between patches are fully mined through weight sharing with that branch, further strengthening the model's capacity for image feature representation. At the same time, pathological images and ultrasound images are both used for training, with ultrasound images converted to the pathology style supplied as additional input; this further improves recognition accuracy, promotes the fusion of ultrasound and pathology information, and lays the foundation for effective feature extraction in the subsequent three-dimensional reconstruction.
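The weight-sharing arrangement between the classification branch and the self-supervised branch can be summarized as a two-headed network. The module below is an assumed simplification, not the exact architecture of Fig. 3; the shared backbone is expected to map its input to a feat_dim-dimensional feature vector.

```python
import torch
import torch.nn as nn

class JointMicroCancerNet(nn.Module):
    def __init__(self, backbone, feat_dim=128, n_classes=2, n_positions=8):
        super().__init__()
        self.backbone = backbone                              # shared feature extractor
        self.cls_head = nn.Linear(feat_dim, n_classes)        # benign vs. malignant
        self.ssl_head = nn.Linear(2 * feat_dim, n_positions)  # patch relation
    def forward(self, image=None, patch_a=None, patch_b=None):
        out = {}
        if image is not None:
            out["cls"] = self.cls_head(self.backbone(image))
        if patch_a is not None and patch_b is not None:
            fa, fb = self.backbone(patch_a), self.backbone(patch_b)
            out["ssl"] = self.ssl_head(torch.cat([fa, fb], dim=1))
        return out

# Joint objective (illustrative): total = CE(cls, label) + lambda * CE(ssl, relation_label),
# so gradients from the self-supervised task also update the shared backbone.
```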
Step four: a three-dimensional model is reconstructed based on the recognized micro-cancer.
Specifically, when lesions are identified by the micro-cancer recognition model, two-dimensional ultrasound and pathological information are fused; on this basis, 3D image information from micro-perfusion and pathology is used as constraint and guidance, and a generative adversarial network is employed to reconstruct a clear 3D structure of the microvessels and the lesion. The pathology specimen is paraffin-embedded and serially sectioned into more than 200 slices, which are HE-stained and scanned in full, and the scanned files are reconstructed in 3D with dedicated reconstruction software. In two-dimensional lesion recognition, two-dimensional ultrasound and pathological information are fully fused, which effectively enhances the tissue-structure information; on this basis, the 3D image information of micro-perfusion and pathology serves as constraint and guidance for alignment and fusion, and the generative adversarial network reconstructs a clear 3D structure of the microvessels and the lesion.
For three-dimensional model reconstruction, because the resolution of the pathological image is far higher than that of the ultrasound image, a high-definition three-dimensional ultrasound micro-flow image must be synthesized so that it can be registered and fused with the pathological image for display. The fused three-dimensional pathology and ultrasound micro-flow image is synthesized with a deep three-dimensional generative adversarial network (3D GAN); the algorithm principle is shown in Fig. 4. The output of the convolutional part of the micro-cancer recognition model is globally averaged and used as a latent variable, which is fed into the 3D generative adversarial network; the generator performs 2D-to-3D reconstruction, the result is registered and fused with the input high-resolution pathology image, and a highly realistic rendering is produced, allowing the pathological appearance of the micro-cancer, its ultrasound appearance and the relationship between the two to be explored intuitively.
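A minimal sketch of this 3D generation stage is shown below: the globally averaged feature vector of the recognition CNN is treated as a latent code and upsampled by transposed 3D convolutions into a volume, while a 3D discriminator judges the realism of the generated volume. The layer sizes, the 32^3 output resolution and the class names are assumptions for illustration; registration and fusion with the high-resolution pathology volume are not shown.

```python
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """Upsamples a latent vector into a (1, 32, 32, 32) occupancy-like volume."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(latent_dim, 128, 4, 1, 0), nn.BatchNorm3d(128), nn.ReLU(True),  # 4^3
            nn.ConvTranspose3d(128, 64, 4, 2, 1), nn.BatchNorm3d(64), nn.ReLU(True),           # 8^3
            nn.ConvTranspose3d(64, 32, 4, 2, 1), nn.BatchNorm3d(32), nn.ReLU(True),            # 16^3
            nn.ConvTranspose3d(32, 1, 4, 2, 1), nn.Sigmoid())                                   # 32^3
    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

class Discriminator3D(nn.Module):
    """Judges whether a 32^3 volume looks like a real reconstructed volume."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),   # 32^3 -> 16^3
            nn.Conv3d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),  # 16^3 -> 8^3
            nn.Conv3d(64, 1, 8, 1, 0))                            # 8^3  -> 1
    def forward(self, v):
        return self.net(v).view(v.size(0), -1)

# Usage sketch: z is the globally averaged output of the recognition CNN, shape (B, 128);
# volume = Generator3D()(z) then has shape (B, 1, 32, 32, 32).
```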
Step five: the relationship between microvessels and the cancer is analyzed using the reconstructed three-dimensional model.
Specifically, based on the obtained 3D structural data, qualitative and quantitative analyses are carried out on the ultrasound microvascular 3D imaging, the tumor boundary and related features, and changes in the microvascular environment are analyzed.
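As an illustration of such quantitative analysis, the sketch below computes simple descriptors of the microvascular environment from binary 3D masks of the reconstructed vessels and lesion; the metric definitions, the isotropic-voxel assumption and the 2 mm peritumoural shell are assumptions introduced for this example only.

```python
from scipy import ndimage

def microvessel_metrics(vessel_mask, lesion_mask, voxel_mm=0.1, shell_mm=2.0):
    """vessel_mask and lesion_mask are binary 3-D numpy arrays on the same isotropic grid."""
    vessel = vessel_mask.astype(bool)
    lesion = lesion_mask.astype(bool)
    # Distance (in mm) from every voxel to the lesion; 0 inside the lesion itself.
    dist = ndimage.distance_transform_edt(~lesion) * voxel_mm
    shell = (dist > 0) & (dist <= shell_mm)       # peritumoural shell around the lesion
    return {
        "lesion_volume_mm3": float(lesion.sum()) * voxel_mm ** 3,
        "intratumoural_vessel_fraction": float(vessel[lesion].mean()) if lesion.any() else 0.0,
        "peritumoural_vessel_fraction": float(vessel[shell].mean()) if shell.any() else 0.0,
    }
```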
In summary, this breast micro-cancer analysis method based on self-supervised learning of multimodal ultrasound images discovers the image characteristics of breast micro-lesions and of their microvascular environment through self-supervised deep learning, thereby improving the detection rate of early breast cancers with good prognosis, providing a new theoretical basis and method for accurate diagnosis and treatment while reducing the number of unnecessary biopsies, and offering good clinical significance and value.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the invention and not to limit it. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that the technical solution may be modified or equivalently replaced without departing from its spirit and scope.

Claims (3)

1. A breast micro-cancer analysis method based on self-supervised learning of multimodal ultrasound images, characterized by comprising the following steps:
step one, acquiring multimodal ultrasound images of breast micro-cancers to form a database and establish a gold standard;
collecting multimodal ultrasound images of breast micro-tumors, including conventional 2D, CDFI and PW ultrasound, contrast-enhanced ultrasound, and 3D micro-flow imaging before and after contrast, with static images and dynamic clips stored for every modality; after collection, physicians with more than 5 years of clinical ultrasound experience manually annotate the micro-cancer lesion areas, which are labeled and stored with the pathological result as the gold standard;
step two, establishing a self-supervised deep learning model from clinical images of breast tumors larger than 1 cm;
obtaining a preliminary model by self-supervised deep learning on a large number of clinical breast images larger than 1 cm;
step three, training on the processed breast micro-cancer image data set with the self-supervised deep learning model to obtain a micro-cancer recognition model;
dividing the clinical images of micro-cancers smaller than 1 cm proportionally into a training set and a test set, with the test set kept independent; training on the training set starting from the preliminary model, using a self-supervised learning algorithm to mine the relationships and features within the training data, and applying a model ensemble method to aggregate several weak models into a single model with high accuracy;
step four, reconstructing a three-dimensional model based on the recognized micro-cancer; when lesions are identified by the micro-cancer recognition model, fusing two-dimensional ultrasound and pathological information, using 3D image information from micro-perfusion and pathology as constraint and guidance on that basis, and employing a generative adversarial network to reconstruct a clear 3D structure of the microvessels and the lesion;
and step five, analyzing the relationship between microvessels and the cancer using the reconstructed three-dimensional model; carrying out qualitative and quantitative analyses of ultrasound microvascular 3D imaging, tumor boundaries and related features on the obtained 3D structural data, and analyzing changes in the microvascular environment.
2. The breast micro-cancer analysis method based on self-supervised learning of multimodal ultrasound images of claim 1, characterized in that: during training the self-supervised deep learning model randomly selects two image patches, extracts their features with a deep network and analyzes the positional relationship between them; in the micro-cancer recognition model, the relationships among all patches are predicted by the self-supervised learning branch, features inside image patches as well as associated features between patches are mined through weight sharing with that branch, and pathological images and ultrasound images are both used for training, with ultrasound images converted to the pathology style provided as additional input.
3. The breast micro-cancer analysis method based on self-supervised learning of multimodal ultrasound images of claim 1, characterized in that: the acquired data are multi-center data, and a generative adversarial network is used to remove differences between centers and to convert ultrasound images into pathology-style images.
CN202310364389.7A 2023-04-07 2023-04-07 Breast micro cancer analysis method based on multi-mode ultrasonic image self-supervision learning Active CN116415649B (en)

Priority Applications (1)

Application Number: CN202310364389.7A (CN116415649B) | Priority Date: 2023-04-07 | Filing Date: 2023-04-07 | Title: Breast micro cancer analysis method based on multi-mode ultrasonic image self-supervision learning

Applications Claiming Priority (1)

Application Number: CN202310364389.7A (CN116415649B) | Priority Date: 2023-04-07 | Filing Date: 2023-04-07 | Title: Breast micro cancer analysis method based on multi-mode ultrasonic image self-supervision learning

Publications (2)

Publication Number Publication Date
CN116415649A CN116415649A (en) 2023-07-11
CN116415649B (en) 2023-10-27

Family

ID=87054306

Family Applications (1)

Application Number: CN202310364389.7A (CN116415649B, Active) | Title: Breast micro cancer analysis method based on multi-mode ultrasonic image self-supervision learning | Priority Date: 2023-04-07 | Filing Date: 2023-04-07

Country Status (1)

Country Link
CN (1) CN116415649B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447940A (en) * 2018-08-28 2019-03-08 天津医科大学肿瘤医院 Convolutional neural networks training method, ultrasound image recognition positioning method and system
CN113643269A (en) * 2021-08-24 2021-11-12 泰安市中心医院 Breast cancer molecular typing method, device and system based on unsupervised learning
CN114170375A (en) * 2021-11-04 2022-03-11 安徽省胸科医院(省结核病防治研究所) Standardized three-dimensional reconstruction sorting mode for evaluating benign and malignant pulmonary nodules
CN114913120A (en) * 2022-03-31 2022-08-16 上海大学 Multi-task breast cancer ultrasonic detection method based on transfer learning
CN115526870A (en) * 2022-10-09 2022-12-27 杭州电子科技大学 Breast pathology image classification method based on depth network fusion model


Also Published As

Publication number Publication date
CN116415649A (en) 2023-07-11

Similar Documents

Publication Publication Date Title
US11101033B2 (en) Medical image aided diagnosis method and system combining image recognition and report editing
Zhou et al. A radiomics approach with CNN for shear-wave elastography breast tumor classification
CN110310281B (en) Mask-RCNN deep learning-based pulmonary nodule detection and segmentation method in virtual medical treatment
Zhang et al. Photoacoustic image classification and segmentation of breast cancer: a feasibility study
JP2022525198A (en) Deep convolutional neural network for tumor segmentation using positron emission tomography
US7672491B2 (en) Systems and methods providing automated decision support and medical imaging
US8111896B2 (en) Method and system for automatic recognition of preneoplastic anomalies in anatomic structures based on an improved region-growing segmentation, and commputer program therefor
TWI750583B (en) Medical image dividing method, device, and system, and image dividing method
KR101805624B1 (en) Method and apparatus for generating organ medel image
CN112086197B (en) Breast nodule detection method and system based on ultrasonic medicine
CN109493325A (en) Tumor Heterogeneity analysis system based on CT images
CN109009110A (en) Axillary lymphatic metastasis forecasting system based on MRI image
Memon et al. Segmentation of lungs from CT scan images for early diagnosis of lung cancer
CN105654490A (en) Lesion region extraction method and device based on ultrasonic elastic image
CN114782307A (en) Enhanced CT image colorectal cancer staging auxiliary diagnosis system based on deep learning
CN109498046A (en) The myocardial infarction quantitative evaluating method merged based on nucleic image with CT coronary angiography
JPWO2020027228A1 (en) Diagnostic support system and diagnostic support method
CN110728239A (en) Gastric cancer enhanced CT image automatic identification system utilizing deep learning
Jayanthi et al. Extracting the liver and tumor from abdominal CT images
Honghan et al. Rms-se-unet: A segmentation method for tumors in breast ultrasound images
Armya et al. Medical images segmentation based on unsupervised algorithms: a review
CN113470060A (en) Coronary artery multi-angle curved surface reconstruction visualization method based on CT image
CN116415649B (en) Breast micro cancer analysis method based on multi-mode ultrasonic image self-supervision learning
CN111265234A (en) Method and system for judging properties of lung mediastinal lymph nodes
CN116612313A (en) Pulmonary nodule benign and malignant classification method based on improved Efficient Net-B0 model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant