CN111915594A - End-to-end neural network-based breast cancer focus segmentation method - Google Patents

End-to-end neural network-based breast cancer focus segmentation method

Info

Publication number
CN111915594A
CN111915594A
Authority
CN
China
Prior art keywords
neural network
data
image
breast cancer
focus
Prior art date
Legal status
Withdrawn
Application number
CN202010781871.7A
Other languages
Chinese (zh)
Inventor
邵叶秦
高瞻
汤卫霞
汤佳欢
盛美红
Current Assignee
Nantong University
Original Assignee
Nantong University
Priority date
Filing date
Publication date
Application filed by Nantong University filed Critical Nantong University
Priority to CN202010781871.7A priority Critical patent/CN111915594A/en
Publication of CN111915594A publication Critical patent/CN111915594A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30068Mammography; Breast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Abstract

The invention discloses a breast cancer lesion image segmentation method based on an end-to-end neural network, comprising the following steps: step one, manual labeling: the lesion region is marked on the data according to the doctor's experience; step two, data preprocessing: the three-dimensional image data are preprocessed and cropped to an image size suitable for the neural network; step three, model selection: an improved end-to-end neural network model is selected; step four, model training: the optimal model is trained on the labeled training set using the improved network model; step five, data prediction: the lesion region is predicted with the trained model; step six, result evaluation: the accuracy of lesion segmentation is measured with the corresponding evaluation indices. The invention provides an improved end-to-end neural network that applies a dilated residual network to the original-resolution image, preserving rich feature information about the target, and adopts a weighted Dice loss function, which gives better segmentation of small targets and of lesion edge regions.

Description

End-to-end neural network-based breast cancer focus segmentation method
Technical Field
The invention relates in particular to a breast cancer lesion image segmentation method based on an improved end-to-end neural network model.
Background
According to World Health Organization data, breast cancer has become the cancer with the highest incidence in women and seriously endangers women's health. Early diagnosis and treatment are needed to reduce breast cancer mortality.
With the continuous development of neural networks, three-dimensional convolutional neural networks have emerged; they make full use of the inter-slice correlation of three-dimensional images and are well suited to processing 3D medical images. With the development of deep learning, several segmentation models for three-dimensional images have appeared, such as 3D U-Net and V-Net. Both models build on the fully convolutional network prototype, differ in their internal implementation and some details, incorporate the skip-connection idea of residual networks, and highlight the advantages of three-dimensional convolutional networks for breast cancer lesion segmentation. Experimental comparisons show, however, that these algorithms still have problems at the edges of breast cancer lesions: the segmented boundary is not sharp enough, regions whose gray levels resemble the surrounding soft tissue are segmented poorly, and the edge position is not accurate enough.
To address these problems, the invention provides an improved end-to-end neural network model that improves the segmentation of breast cancer tumor images.
Disclosure of Invention
The purpose of the invention is as follows: to overcome the shortcomings of the prior art, the invention provides a breast cancer lesion image segmentation method based on an end-to-end neural network model.
The technical scheme is as follows: a breast cancer lesion image segmentation method based on an improved end-to-end neural network model comprises the following steps:
step one, manual labeling: collect data, mark the target region on the data according to the doctor's experience, and build the corresponding sample set;
step two, data preprocessing: preprocess the three-dimensional image data and crop it to an image size suitable for neural network input;
step three, model selection: the invention selects an improved end-to-end neural network model;
The left side of the improved end-to-end neural network model is the down-sampling path and the right side is the up-sampling path, in a three-level structure. In the down-sampling stage, the encoder_1, encoder_2, encoder_3 and bottom layers all use 5 × 5 convolution kernels, with channel counts increasing to 16, 32, 64 and 128 respectively; each pooling layer is replaced by a 2 × 2 convolution with stride 2, which halves the feature map and reduces memory usage;
In the up-sampling stage, the channel counts of the bottom, decoder_3, decoder_2 and decoder_1 layers decrease to 128, 64, 32 and 16 respectively. The activation function is the PReLU; with this parametric activation function the network learns the activation shape itself, has stronger expressive power, and learns better features during the training stage (a sketch of this structure follows);
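A minimal PyTorch sketch of the encoder-decoder just described. The 16/32/64/128 channel progression, the 5 × 5 kernels, the stride-2 down-sampling convolutions and the PReLU activations follow the text; the volumetric (5 × 5 × 5) kernels, the single convolution block per level and the concatenation skip connections are assumptions, since the text does not specify them.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # 5x5x5 convolution (assumed volumetric kernel) with PReLU; padding keeps size
    return nn.Sequential(nn.Conv3d(in_ch, out_ch, kernel_size=5, padding=2),
                         nn.PReLU(out_ch))

def down(in_ch, out_ch):
    # 2x2x2 convolution with stride 2 replaces pooling and halves the feature map
    return nn.Sequential(nn.Conv3d(in_ch, out_ch, kernel_size=2, stride=2),
                         nn.PReLU(out_ch))

def up(in_ch, out_ch):
    # transposed convolution doubles the spatial size on the up-sampling path
    return nn.Sequential(nn.ConvTranspose3d(in_ch, out_ch, kernel_size=2, stride=2),
                         nn.PReLU(out_ch))

class ImprovedVNetSketch(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 16), conv_block(32, 32)
        self.enc3, self.bottom = conv_block(64, 64), conv_block(128, 128)
        self.down1, self.down2, self.down3 = down(16, 32), down(32, 64), down(64, 128)
        self.up3, self.dec3 = up(128, 64), conv_block(128, 64)
        self.up2, self.dec2 = up(64, 32), conv_block(64, 32)
        self.up1, self.dec1 = up(32, 16), conv_block(32, 16)
        self.head = nn.Conv3d(16, n_classes, kernel_size=1)  # per-voxel class scores

    def forward(self, x):                     # input dims should be divisible by 8
        e1 = self.enc1(x)                     # encoder_1: 16 channels
        e2 = self.enc2(self.down1(e1))        # encoder_2: 32 channels, 1/2 size
        e3 = self.enc3(self.down2(e2))        # encoder_3: 64 channels, 1/4 size
        b = self.bottom(self.down3(e3))       # bottom:   128 channels, 1/8 size
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))   # decoder_3: 64 ch
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))  # decoder_2: 32 ch
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # decoder_1: 16 ch
        return self.head(d1)

# e.g. ImprovedVNetSketch()(torch.randn(1, 1, 16, 32, 32)) -> (1, 2, 16, 32, 32)
```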
Based on the improved end-to-end neural network model, a dilated (hole) residual network extracts features at the original image resolution, and these are fused with the features obtained by up-sampling in the end-to-end network; the richer features allow the lesion to be segmented accurately (a sketch of this branch follows);
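A hedged sketch of the full-resolution dilated residual branch. The text states only that such a branch extracts features at the original resolution and fuses them with the up-sampled features, so the block depth, the dilation rates (1, 2, 4) and the concatenation-based fusion are assumptions.

```python
import torch
import torch.nn as nn

class DilatedResBlock(nn.Module):
    def __init__(self, ch, dilation):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, kernel_size=3, padding=dilation, dilation=dilation),
            nn.PReLU(ch),
            nn.Conv3d(ch, ch, kernel_size=3, padding=dilation, dilation=dilation),
        )
        self.act = nn.PReLU(ch)

    def forward(self, x):
        return self.act(x + self.body(x))     # residual connection preserves detail

class FullResolutionBranch(nn.Module):
    """No down-sampling: dilation enlarges the receptive field while the
    feature map stays at the original image resolution."""
    def __init__(self, in_ch=1, ch=16):
        super().__init__()
        self.stem = nn.Conv3d(in_ch, ch, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(DilatedResBlock(ch, 1),
                                    DilatedResBlock(ch, 2),
                                    DilatedResBlock(ch, 4))

    def forward(self, x):
        return self.blocks(self.stem(x))

# Assumed fusion: concatenate with decoder_1's 16-channel output, then classify.
# fused = torch.cat([decoder_out, FullResolutionBranch()(x)], dim=1)  # 32 channels
# logits = nn.Conv3d(32, 2, kernel_size=1)(fused)
```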
Based on the improved end-to-end neural network model, a weighted Dice function is used as the loss function. A probability weight is assigned to each pixel so that the network learns in the direction of large weights; this reduces the positive/negative sample imbalance caused by the small size of breast cancer lesions, strengthens the learning of the target, and lets the neural network achieve better segmentation performance on breast cancer lesion data. The weighted Dice loss function is formulated as follows:
$$D_{w} = \frac{2\sum_{l} w_{l} \sum_{j=1}^{N} g_{lj}\, s_{lj}}{\sum_{l} w_{l} \sum_{j=1}^{N} \left( g_{lj}^{2} + s_{lj}^{2} \right)}$$

$$L_{\mathrm{Dice}} = 1 - D_{w}$$
where $l$ is the class label (a two-class problem here), $g_{lj}$ is the value of the $j$-th pixel in the manually segmented image, $s_{lj}$ is the value of the $j$-th pixel in the predicted image, $w_l$ is the weight of class $l$, used to balance positive and negative samples, and $N$ is the number of pixels in the image;
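A minimal PyTorch version of the loss above, following the squared-denominator (V-Net style) form of the reconstructed formula; how the class weights $w_l$ are chosen is not stated in the text, so they are left to the caller.

```python
import torch

def weighted_dice_loss(probs, target_onehot, class_weights, eps=1e-6):
    """probs, target_onehot: (B, L, D, H, W) tensors; class_weights: (L,)."""
    dims = (0, 2, 3, 4)                                  # sum over batch and voxels
    intersect = (probs * target_onehot).sum(dims)        # sum_j s_lj * g_lj, per class
    denom = (probs ** 2 + target_onehot ** 2).sum(dims)  # sum_j (s_lj^2 + g_lj^2)
    w = class_weights.to(probs.device)
    dice = 2.0 * (w * intersect).sum() / ((w * denom).sum() + eps)
    return 1.0 - dice                                    # L_Dice = 1 - D_w
```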
the Dice loss function in the weighting form has the same advantages as the Dice loss function, can process the problem of secondary classification as the Dice loss function, and can make up the problem of unbalanced small target samples in the training process;
step four, model training: train the model on the labeled training set with the improved end-to-end neural network model, continuously adjusting the network parameters until the network model performs best (a training-loop sketch follows);
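A hedged training-loop sketch for this step, reusing ImprovedVNetSketch and weighted_dice_loss from the sketches above. The optimizer, learning rate, class weights and the synthetic mini-batch are all assumptions; the text says only that the parameters are tuned until the model performs best.

```python
import torch
import torch.nn.functional as F

model = ImprovedVNetSketch()                         # architecture sketch above
opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # optimizer and lr assumed
w = torch.tensor([1.0, 10.0])                        # assumed background/lesion weights

for step in range(2):                                # tiny synthetic run for illustration
    image = torch.randn(1, 1, 16, 32, 32)            # stands in for a cropped MR volume
    label = torch.randint(0, 2, (1, 16, 32, 32))     # stands in for a manual label map
    onehot = F.one_hot(label, 2).permute(0, 4, 1, 2, 3).float()
    probs = torch.softmax(model(image), dim=1)       # per-class probabilities
    loss = weighted_dice_loss(probs, onehot, w)      # loss sketch above
    opt.zero_grad()
    loss.backward()
    opt.step()
```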
step five, data prediction: predict the labels of the test samples; in the prediction stage the three-dimensional image is preprocessed in the same way, then fed into the pre-trained model for prediction, and a three-dimensional image of the same size is output, containing the segmented lesion region (an inference sketch follows);
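A small inference sketch for this step, assuming the two-class output of the architecture sketch above; the arg-max over the class dimension gives a binary lesion mask of the same size as the input.

```python
import torch

@torch.no_grad()
def predict_lesion(model, volume):
    """volume: preprocessed (D, H, W) float NumPy array, dims divisible by 8."""
    x = torch.from_numpy(volume).float()[None, None]  # add batch and channel dims
    mask = model(x).argmax(dim=1)[0]                  # per-voxel class: 1 = lesion
    return mask.byte().numpy()                        # same-size binary 3D mask
```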
step six, result evaluation: measure the similarity between the three-dimensional segmentation and the manually labeled image with the corresponding evaluation indices, and assess the accuracy of the prediction result.
Furthermore, in step one, the data used in the invention are real data obtained from a hospital MR imaging department during breast cancer diagnosis and treatment; the data were collected from 50 patients, each with image sequences from four different stages: 1, 2, 4 and 6. Stage 1 is the image before contrast agent injection; stages 2, 4 and 6 are MR images taken at different times after contrast-enhanced injection.
Furthermore, in step two, the software used for the experimental labeling is ITK-SNAP; the corresponding function of the tool is used to delineate the lesion region, and the delineated lesion region serves as label data that corresponds to the original data, in preparation for the subsequent data set.
Furthermore, in step three, the invention uses a dilated residual network to obtain rich features at the original image resolution and fuses them with the features obtained by up-sampling in the end-to-end network, to achieve accurate lesion segmentation.
Furthermore, in step three, a weighted Dice function is used as the loss function and a probability weight is assigned to each pixel; this reduces the positive/negative sample imbalance caused by the small size of breast cancer lesions, focuses learning on regions with large weights, strengthens the learning of the lesion region, and improves the segmentation performance on breast cancer lesions.
Further, in step two, the data obtained from the hospital are in DICOM format and are converted to MHA format, as sketched below.
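A short SimpleITK sketch of this conversion; the directory layout and file names are hypothetical.

```python
import SimpleITK as sitk

def dicom_series_to_mha(dicom_dir, out_path):
    reader = sitk.ImageSeriesReader()
    files = reader.GetGDCMSeriesFileNames(dicom_dir)  # collect and sort the slices
    reader.SetFileNames(files)
    volume = reader.Execute()            # assemble the slices into one 3D volume
    sitk.WriteImage(volume, out_path)    # the .mha extension selects MetaImage

dicom_series_to_mha("patient_001/stage_1", "patient_001_stage_1.mha")  # hypothetical paths
```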
Beneficial effects: on the basis of the traditional end-to-end neural network, the invention provides an improved end-to-end neural network that applies a dilated residual network to the original-resolution image, preserving rich feature information about the target, and replaces the traditional Dice loss function with a weighted Dice loss function; compared with the V-Net neural network, it segments small targets and lesion edge regions better.
Drawings
FIG. 1 is a schematic view of a breast cancer lesion segmentation process according to the present invention;
FIG. 2 is a schematic view of the lesion image segmentation of the present invention;
FIG. 3 is a schematic diagram of a conventional end-to-end neural network architecture (V-Net) in the present invention;
FIG. 4 is a schematic diagram of an improved end-to-end neural network architecture of the present invention;
FIG. 5 is a comparison graph of an improved end-to-end neural network of the present invention and a conventional end-to-end neural network.
Detailed Description
The technical solutions in the embodiments of the invention are described below clearly and completely, so that those skilled in the art can better understand the advantages and features of the invention and the scope of protection is defined more clearly. The embodiments described here are only some embodiments of the invention, not all of them; all other embodiments obtained by a person of ordinary skill in the art on the basis of these embodiments, without inventive effort, fall within the scope of the invention.
Examples
1. Manual labeling
The data used in the experiments of the invention are real data obtained, with hospital authorization, from a hospital MR imaging department during breast cancer diagnosis and treatment. Data were collected from 50 patients, each with images from four different stages: 1, 2, 4 and 6. The labeling software is ITK-SNAP; the lesion region is marked manually slice by slice and stored separately in a label file. In this file, pixels with value 1 represent the lesion region and pixels with value 0 represent the background. The label file corresponds to the original image and is prepared for subsequent processing.
2. Data pre-processing
Because the three-dimensional image data obtained from the hospital are large, and in order to let the algorithm concentrate on the breast cancer lesion region and reduce the influence of other regions, the breast cancer MR images of the four sequences are cropped to a uniform size suitable for neural network input.
To crop the three-dimensional image data in batches, the SimpleITK tool module automatically reads the 384 × 384 × 128 three-dimensional original images and label images, finds the lesion region by scanning the label image corresponding to each original image, computes the center of the lesion region, and, taking the lesion center as the center, cuts out a 3D sub-image of size 128 × 128 × 64 that is guaranteed to contain the complete lesion region and saves it; the same operation is applied to the corresponding label image. As shown in FIG. 2, the red region represents the breast cancer lesion segmentation region, shown in the transverse, sagittal and coronal planes respectively. A sketch of this cropping step follows.
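A hedged sketch of the lesion-centered cropping step. Reading the sizes as 384 × 384 × 128 volumes and 128 × 128 × 64 crops, and clamping the crop window at the volume borders, are assumptions the text does not spell out.

```python
import numpy as np
import SimpleITK as sitk

def crop_around_lesion(image_path, label_path, size=(64, 128, 128)):  # (z, y, x)
    img = sitk.GetArrayFromImage(sitk.ReadImage(image_path))  # array is (z, y, x)
    lab = sitk.GetArrayFromImage(sitk.ReadImage(label_path))
    coords = np.argwhere(lab == 1)                   # lesion voxels have value 1
    center = (coords.min(0) + coords.max(0)) // 2    # center of the lesion region
    starts = [int(np.clip(c - s // 2, 0, dim - s))   # keep the crop inside the volume
              for c, s, dim in zip(center, size, lab.shape)]
    window = tuple(slice(st, st + s) for st, s in zip(starts, size))
    return img[window], lab[window]                  # image crop and matching label crop
```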
3. Breast cancer lesion segmentation
The left side of the improved end-to-end neural network model is the down-sampling path and the right side is the up-sampling path, in a three-level structure. In the down-sampling stage, the encoder_1, encoder_2, encoder_3 and bottom layers all use 5 × 5 convolution kernels, with channel counts increasing to 16, 32, 64 and 128 respectively; each pooling layer is replaced by a 2 × 2 convolution with stride 2, which halves the feature map and reduces memory usage;
In the up-sampling stage, the channel counts of the bottom, decoder_3, decoder_2 and decoder_1 layers decrease to 128, 64, 32 and 16 respectively. The activation function is the PReLU; with this parametric activation function the network learns the activation shape itself, has stronger expressive power, and learns better features during the training stage;
based on the improved end-to-end neural network model, the hole residual error network is adopted to obtain features under the original image resolution, and the features are fused with the features obtained by sampling on the end-to-end network (as shown by a dotted line frame in fig. 4), so that richer features are obtained, and the focus is accurately segmented.
Based on the improved end-to-end neural network model, a weighted Dice function is used as the loss function. A probability weight is assigned to each pixel so that the network learns in the direction of large weights; this reduces the positive/negative sample imbalance caused by the small size of breast cancer lesions, strengthens the learning of the target, and lets the neural network achieve better segmentation performance on breast cancer lesion data. The weighted Dice loss function is formulated as follows:
$$D_{w} = \frac{2\sum_{l} w_{l} \sum_{j=1}^{N} g_{lj}\, s_{lj}}{\sum_{l} w_{l} \sum_{j=1}^{N} \left( g_{lj}^{2} + s_{lj}^{2} \right)}$$

$$L_{\mathrm{Dice}} = 1 - D_{w}$$
where $l$ is the class label (a two-class problem here), $g_{lj}$ is the value of the $j$-th pixel in the manually segmented image, $s_{lj}$ is the value of the $j$-th pixel in the predicted image, $w_l$ is the weight of class $l$, used to balance positive and negative samples, and $N$ is the number of pixels in the image.
4. Experimental results of the improved model
4.1 Experimental results
The invention compares the traditional end-to-end neural network with the improved end-to-end neural network, as shown in FIG. 5, which shows the segmentation results of the two neural network models on the stage 1, 2, 4 and 6 images of a patient; the red regions are the breast cancer tumor lesion areas. FIG. 5 shows the segmentation results of 4 slices in each stage. The experimental results show that the improved end-to-end neural network segments better than the traditional end-to-end neural network (V-Net), especially in the marginal region of the lesion. This is mainly due to the modifications of the network structure and the adjustment of the loss function.
To compare the traditional and improved end-to-end neural networks quantitatively, the invention measures the segmentation results with three evaluation indices: the Dice similarity coefficient (DSC), the average Hausdorff distance (AvgHausdorff) and the volumetric overlap error (VOE), as shown in Table 1.
TABLE 1 Comparison of the segmentation performance of the traditional and improved end-to-end neural networks
As can be seen from Table 1, on all three segmentation evaluation indices the improved end-to-end neural network segments the lesion more accurately than the traditional end-to-end neural network.
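Minimal NumPy versions of two of the three indices (DSC and VOE), following their standard definitions; the average Hausdorff distance is usually computed with a dedicated library and is omitted here.

```python
import numpy as np

def dice_similarity(pred, gt):
    """DSC = 2|A∩B| / (|A| + |B|) for binary volumes pred and gt."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def volumetric_overlap_error(pred, gt):
    """VOE = 1 - |A∩B| / |A∪B| (lower is better)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 1.0 - inter / union
```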
5. Analysis of the results with different loss functions
To demonstrate the effect of the weighted Dice loss function, the invention compares it with the traditional Dice loss function on the same neural network. The Dice loss function measures the overlap between the predicted result and the ground truth. The results are shown in Table 2:
TABLE 2 Comparison of segmentation performance with different loss functions (Dice index)
The experimental data in the table show that the weighted Dice loss function achieves better segmentation results than the ordinary Dice loss function.

Claims (6)

1. A breast cancer lesion image segmentation method based on an end-to-end neural network, characterized in that it comprises the following steps:
step one, manual labeling: collect data, mark the target region on the data according to the doctor's experience, and build the corresponding sample set;
step two, data preprocessing: preprocess the three-dimensional image data and crop it to an image size suitable for neural network input;
step three, model selection: an improved end-to-end neural network model is selected;
the left side of the end-to-end neural network model is the down-sampling path and the right side is the up-sampling path, in a three-level structure; in the down-sampling stage, the encoder_1, encoder_2, encoder_3 and bottom layers all use 5 × 5 convolution kernels, with channel counts increasing to 16, 32, 64 and 128 respectively; each pooling layer is replaced by a 2 × 2 convolution with stride 2, which halves the feature map and reduces memory usage;
in the up-sampling stage, the channel counts of the bottom, decoder_3, decoder_2 and decoder_1 layers decrease to 128, 64, 32 and 16 respectively; the activation function is the PReLU, and with this parametric activation function the network learns the activation shape itself, has stronger expressive power, and learns better features during the training stage;
based on the end-to-end neural network model, a dilated residual network extracts features at the original image resolution, and these are fused with the features obtained by up-sampling in the end-to-end network to obtain richer features and segment the lesion accurately;
based on the end-to-end neural network model, a weighted Dice function is used as the loss function; a probability weight is assigned to each pixel so that the network learns in the direction of large weights, reducing the positive/negative sample imbalance caused by the small size of breast cancer lesions and strengthening the learning of the target, so that the neural network achieves better segmentation performance on breast cancer lesion data; the weighted Dice loss function is formulated as follows:
$$D_{w} = \frac{2\sum_{l} w_{l} \sum_{j=1}^{N} g_{lj}\, s_{lj}}{\sum_{l} w_{l} \sum_{j=1}^{N} \left( g_{lj}^{2} + s_{lj}^{2} \right)}$$

$$L_{\mathrm{Dice}} = 1 - D_{w}$$
where $l$ is the class label (a two-class problem here), $g_{lj}$ is the value of the $j$-th pixel in the manually segmented image, $s_{lj}$ is the value of the $j$-th pixel in the predicted image, $w_l$ is the weight of class $l$, used to balance positive and negative samples, and $N$ is the number of pixels in the image;
the weighted Dice loss function retains the advantages of the ordinary Dice loss function: it handles the two-class problem in the same way, while additionally compensating for the small-target sample imbalance during training;
step four, model training: train the model on the labeled training set with the improved end-to-end neural network model, continuously adjusting the network parameters until the network model performs best;
step five, data prediction: predict the labels of the test samples; in the prediction stage the three-dimensional image is preprocessed in the same way, then fed into the pre-trained model for prediction, and a three-dimensional image of the same size is output, containing the segmented lesion region;
step six, result evaluation: measure the similarity between the three-dimensional segmentation and the manually labeled image with the corresponding evaluation indices, and assess the accuracy of the prediction result.
2. The breast cancer lesion image segmentation method based on the improved end-to-end neural network model according to claim 1, characterized in that: in step one, the data used in the invention are real data obtained from a hospital MR imaging department during breast cancer diagnosis and treatment; the data were collected from 50 patients, each with image sequences from four different stages: 1, 2, 4 and 6; stage 1 is the image before contrast agent injection, and stages 2, 4 and 6 are MR images taken at different times after contrast-enhanced injection.
3. The breast cancer lesion image segmentation method based on the improved end-to-end neural network model according to claim 1, characterized in that: in step two, the software used for the experimental labeling is ITK-SNAP; the corresponding function of the tool is used to delineate the lesion region, and the delineated lesion region serves as label data that corresponds to the original data, in preparation for the subsequent data set.
4. The breast cancer lesion image segmentation method based on the improved end-to-end neural network model according to claim 1, characterized in that: in step three, the invention uses a dilated residual network to obtain rich features at the original image resolution and fuses them with the features obtained by up-sampling in the end-to-end network, to achieve accurate lesion segmentation.
5. The breast cancer lesion image segmentation method based on the improved end-to-end neural network model according to claim 1, characterized in that: in step three, a weighted Dice function is used as the loss function and a probability weight is assigned to each pixel; this reduces the positive/negative sample imbalance caused by the small size of breast cancer lesions, focuses learning on regions with large weights, strengthens the learning of the lesion region, and improves the segmentation performance on breast cancer lesions.
6. The breast cancer lesion image segmentation method based on the improved end-to-end neural network model according to claim 2, characterized in that: in step two, the data obtained from the hospital are in DICOM format and are converted to MHA format.
CN202010781871.7A 2020-08-06 2020-08-06 End-to-end neural network-based breast cancer focus segmentation method Withdrawn CN111915594A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010781871.7A CN111915594A (en) 2020-08-06 2020-08-06 End-to-end neural network-based breast cancer focus segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010781871.7A CN111915594A (en) 2020-08-06 2020-08-06 End-to-end neural network-based breast cancer focus segmentation method

Publications (1)

Publication Number Publication Date
CN111915594A true CN111915594A (en) 2020-11-10

Family

ID=73288124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010781871.7A Withdrawn CN111915594A (en) 2020-08-06 2020-08-06 End-to-end neural network-based breast cancer focus segmentation method

Country Status (1)

Country Link
CN (1) CN111915594A (en)


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160253A (en) * 2020-12-29 2021-07-23 南通大学 Three-dimensional medical image segmentation method based on sparse mark and storage medium
CN113160253B (en) * 2020-12-29 2024-01-30 南通大学 Three-dimensional medical image segmentation method based on sparse markers and storage medium
CN112617850A (en) * 2021-01-04 2021-04-09 苏州大学 Premature beat and heart beat detection method for electrocardiosignals
CN112617850B (en) * 2021-01-04 2022-08-30 苏州大学 Premature beat and heart beat detection system for electrocardiosignals
CN113129327A (en) * 2021-04-13 2021-07-16 中国科学院近代物理研究所 Method and system for generating inner general target area based on neural network model
CN113469229A (en) * 2021-06-18 2021-10-01 中山大学孙逸仙纪念医院 Method and device for automatically labeling breast cancer focus based on deep learning
CN113435491A (en) * 2021-06-20 2021-09-24 上海体素信息科技有限公司 Medical image processing method and device

Similar Documents

Publication Publication Date Title
CN111915594A (en) End-to-end neural network-based breast cancer focus segmentation method
CN110930416B (en) MRI image prostate segmentation method based on U-shaped network
CN109376636B (en) Capsule network-based eye fundus retina image classification method
CN111798416B (en) Intelligent glomerulus detection method and system based on pathological image and deep learning
CN110276745B (en) Pathological image detection algorithm based on generation countermeasure network
CN110930397A (en) Magnetic resonance image segmentation method and device, terminal equipment and storage medium
CN109389584A (en) Multiple dimensioned rhinopharyngeal neoplasm dividing method based on CNN
CN112132817A (en) Retina blood vessel segmentation method for fundus image based on mixed attention mechanism
CN112381178B (en) Medical image classification method based on multi-loss feature learning
CN110751636B (en) Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network
CN116739985A (en) Pulmonary CT image segmentation method based on transducer and convolutional neural network
CN110543912A (en) Method for automatically acquiring cardiac cycle video in fetal key section ultrasonic video
CN113034462A (en) Method and system for processing gastric cancer pathological section image based on graph convolution
CN115546605A (en) Training method and device based on image labeling and segmentation model
CN112348059A (en) Deep learning-based method and system for classifying multiple dyeing pathological images
CN114998265A (en) Liver tumor segmentation method based on improved U-Net
CN114821189A (en) Focus image classification and identification method based on fundus images
CN112381846A (en) Ultrasonic thyroid nodule segmentation method based on asymmetric network
JP2024027078A (en) Multi-scale whole slide pathological feature fusion extraction method, system, electronic equipment and storage medium
CN115035127A (en) Retinal vessel segmentation method based on generative confrontation network
CN117058676B (en) Blood vessel segmentation method, device and system based on fundus examination image
CN114140437A (en) Fundus hard exudate segmentation method based on deep learning
CN113237881A (en) Method and device for detecting specific cells and pathological section detection system
CN116486156A (en) Full-view digital slice image classification method integrating multi-scale feature context
CN114519722A (en) Carotid artery extraction method based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201110