CN111047606B - Pathological full-section image segmentation algorithm based on cascade thought - Google Patents


Publication number
CN111047606B
CN111047606B
Authority
CN
China
Prior art date
Legal status
Active
Application number
CN201911235144.4A
Other languages
Chinese (zh)
Other versions
CN111047606A (en)
Inventor
姜志国
孙树娇
郑钰山
谢凤英
张浩鹏
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN201911235144.4A
Publication of CN111047606A
Application granted
Publication of CN111047606B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion

Abstract

The invention discloses a pathological full-section image segmentation algorithm based on a cascade thought, comprising the following steps: a first network is trained with samples collected at low resolution and filters out easily segmented regions to obtain a coarse segmentation of the cancer region; a second network then refines the cancer-region segmentation produced by the first network. The invention improves the segmentation precision of digital pathological images while shortening the test time.

Description

Pathological full-section image segmentation algorithm based on cascade thought
Technical Field
The invention belongs to the technical field of digital image processing, and particularly relates to a pathological full-section image segmentation algorithm based on a cascade thought.
Background
A pathological full section is converted by a dedicated scanning imaging system into a large-scale, high-magnification digital image for computer display, transmission and processing. In clinical diagnosis, a large number of diagnosed pathology whole sections are saved, forming a valuable case database. Cancer diagnosis based on pathological full-section images places high demands on the diagnostic experience of doctors; moreover, different pathology experts may reach different diagnoses for the same section, which makes accurate cancer diagnosis difficult.
Because pathological digital sections have high resolution and large scale, conventional segmentation methods mainly model full-section images at high magnification in order to obtain a better segmentation effect. This consumes a large amount of computation during testing, so the automatic processing takes a long time.
In the prior art, a Multi-scale-input Attention U-Net is adopted, as shown in FIG. 1: not only are high-resolution pathological sections used as input, but low-resolution images are input as well. This greatly increases the amount of computation in building the whole network model, and the test process is time-consuming.
Therefore, how to provide a pathological full-slice image segmentation algorithm based on the cascade thought is a problem that needs to be solved urgently by those skilled in the art.
Disclosure of Invention
In view of this, the invention provides a pathological full-section image segmentation algorithm based on a cascading thought, which not only improves the segmentation precision, but also shortens the testing time.
In order to achieve the purpose, the invention adopts the following technical scheme:
a pathology full-section image segmentation algorithm based on a cascade thought comprises two U-Net structures, wherein a first network is trained by samples collected under low resolution, and regions easy to segment are filtered out to obtain cancer region segmentation results; the second network optimizes the cancer region segmentation result obtained by the first network; the training sample size of the second network is n × n, which is determined from the segmentation result graph of the first network according to the following formula:
R = { k | p_k > t } (1)
where p_k is the probability value of the k-th pixel of the segmentation result map output by the first network, and t is a threshold.
Preferably, the threshold t is set to 0.05.
Preferably, the first network takes the pathological full-slice image under a 5× objective as its input, and the size of an input image block is 512 × 512 × 3 pixels.
Preferably, the model of the U-Net structure comprises: a contraction path, composed of convolution layers and down-sampling layers, which extracts the context information of the image; and an expansion path, composed of convolution layers and up-sampling layers, which recovers accurate position information and fuses shallow and deep features to avoid loss of shallow structural features, finally producing an accurate segmentation result map.
Preferably, the model of the U-Net structure comprises four 2 × 2 down-sampling layers and four symmetrical 2 × 2 up-sampling layers; the contraction path and the expansion path each comprise two convolution layers with 3 × 3 kernels and a ReLU activation function; the network ends with a convolution layer with a 1 × 1 kernel and a sigmoid activation function.
Preferably, the optimization method adopted by both the first network and the second network during training is stochastic gradient descent with momentum; the initial learning rate is 0.01 for the first network and 0.001 for the second network, the momentum is 0.9, the weight decay is 1e-6, the batch size of the first network is 8, and that of the second network is 2.
Preferably, when the first network segments the cancer region, the cancer-region sensitivity loss function designed on the basis of the Dice coefficient is:

CSL = (1 − DSC)^λ (2)

DSC = (2 Σ_i p_ic g_ic + ε) / (Σ_i p_ic + Σ_i g_ic + ε) (3)

where the sums run over i = 1, …, N; N is the total number of pixels in the segmentation prediction matrix; p_ic ∈ [0, 1] is the probability that the i-th pixel of the segmentation prediction matrix belongs to class c; g_ic is the label of the training sample, g_ic = 1 indicating that the pixel is a positive sample and g_ic = 0 that it is a negative sample; λ ∈ (1, ∞) is used to treat positive and negative samples differently; and ε is a small positive constant.
The invention has the beneficial effects that:
the invention can improve the segmentation precision, reduce the calculation amount of the network and improve the test speed. Compared with the prior art that the extension U-Net directly takes a high-resolution image as input in order to obtain a better segmentation result, the method disclosed by the invention filters a large number of easily-identified negative sample pixels under low resolution, the high-resolution network input is only a difficult sample, and the segmentation capability of the high-resolution network is improved. The segmentation method has the advantages that the segmentation precision is better than that of the high-resolution image which is directly segmented, about half of processing time is reduced, and the efficiency of segmenting the network is greatly improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a structure diagram of a conventional U-Net network.
Fig. 2 is a structural view of the present invention.
FIG. 3 is a graph of the cancer-region sensitivity loss function of the present invention.
FIG. 4 is a diagram of the segmentation results at various stages of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 2, the invention provides a pathological full-slice image segmentation algorithm based on a cascade thought, which includes two U-Net structures. The first network is trained with samples collected at low resolution and filters out easily segmented regions to obtain a cancer-region segmentation result; the second network optimizes the cancer-region segmentation result obtained by the first network. The training samples of the second network are n × n regions, selected from the segmentation result map of the first network according to the following formula:
R = { k | p_k > t } (1)
where p_k is the probability value of the k-th pixel of the segmentation result map output by the first network, and t is a threshold set to 0.05. In other words, regions containing positive samples that are difficult to segment are selected as input to the second network, so that the trained second network has a strong ability to segment difficult samples.
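The selection rule above can be sketched as follows (a minimal Python illustration, assuming the rule keeps every pixel whose predicted probability p_k exceeds the threshold t; the toy probability map and function name are illustrative, not from the patent):

```python
def select_hard_pixels(prob_map, t=0.05):
    """Return coordinates of pixels whose first-network probability
    exceeds the threshold t, i.e. candidate positive-sample pixels
    that would be passed on to the second (high-resolution) network."""
    selected = []
    for y, row in enumerate(prob_map):
        for x, p_k in enumerate(row):
            if p_k > t:  # keep pixel k when p_k > t
                selected.append((y, x))
    return selected

# A toy 4x4 probability map standing in for the first network's output.
prob_map = [
    [0.01, 0.02, 0.00, 0.01],
    [0.01, 0.60, 0.30, 0.02],
    [0.00, 0.40, 0.90, 0.03],
    [0.01, 0.02, 0.04, 0.00],
]
hard = select_hard_pixels(prob_map, t=0.05)
```

With t = 0.05, only the central block of suspicious pixels survives; everything confidently negative is filtered out before the expensive high-resolution pass.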
The first network takes the pathological full-slice image under a 5× objective as its input; the size of an input image block is 512 × 512 × 3 pixels. The model of the U-Net structure comprises four 2 × 2 down-sampling layers and four symmetrical 2 × 2 up-sampling layers; the contraction path and the expansion path each contain two convolution layers with 3 × 3 kernels and a ReLU activation function, and the network ends with a 1 × 1 convolution layer with a sigmoid activation function. The first network is mainly used to filter out easily segmented regions, such as most negative samples, and to obtain an approximate cancer-region segmentation result.
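The spatial dimensions implied by this configuration can be checked with a quick sketch (the function name is illustrative; the point is that the 512 × 512 input is cleanly divisible by 2 four times, so the symmetric up-sampling path can restore the original resolution):

```python
def encoder_sizes(input_size, levels=4, factor=2):
    """Feature-map side lengths after each 2x2 down-sampling step.
    The input must be divisible by factor**levels so that the four
    symmetric 2x2 up-sampling steps restore the resolution exactly."""
    assert input_size % factor ** levels == 0, "input not cleanly divisible"
    sizes = []
    s = input_size
    for _ in range(levels):
        s //= factor
        sizes.append(s)
    return sizes

low_res = encoder_sizes(512)    # first network, 5x-objective patches
high_res = encoder_sizes(1024)  # second network, 10x-objective patches
```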
The model of the U-Net structure comprises: a contraction path, composed of convolution layers and down-sampling layers, which extracts the context information of the image; and an expansion path, composed of convolution layers and up-sampling layers, which recovers accurate position information and fuses shallow and deep features to avoid loss of shallow structural features, finally producing an accurate segmentation result map.
The regions selected according to equation (1) are re-extracted under a 10× objective and cropped into image blocks of 1024 × 1024 × 3 pixels. The block size is increased relative to the input of the first network so that each input image block carries a consistent amount of context information. The U-Net structure is identical to that of the first network. The second network is mainly used to finely segment the cancer regions already segmented by the first network, thereby refining the segmentation result.
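The mapping between the two magnifications can be sketched as follows (an illustration under the assumption that going from a 5× to a 10× objective doubles the pixel pitch, which matches the 512 → 1024 block sizes; the function and coordinates are hypothetical):

```python
def map_region_to_high_res(y, x, n, scale=2):
    """Map the top-left corner of an n x n region found at 5x
    magnification to the corresponding crop at 10x magnification.
    Doubling the magnification doubles both the coordinates and
    the crop side length."""
    return (y * scale, x * scale, n * scale)

# A hypothetical 512x512 region at 5x maps to a 1024x1024 crop at 10x,
# matching the block sizes used by the two networks.
crop = map_region_to_high_res(100, 200, 512)
```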
The optimization method adopted by both the first network and the second network during training is stochastic gradient descent with momentum; the initial learning rate is 0.01 for the first network and 0.001 for the second network, the momentum is 0.9, the weight decay is 1e-6, the batch size of the first network is 8, and that of the second network is 2.
In the testing stage, the pathological full-slice image is first input into the low-resolution first network to obtain a preliminary segmentation result; then, according to equation (1), the corresponding regions are passed to the second network, whose predictions replace the values of the first network's segmentation result at the corresponding positions. The high-resolution network only needs to process the positive-sample regions found by the first network, so the amount of computation is greatly reduced. Compared with an ordinary segmentation network, the cascade thought of the invention improves the accuracy of the segmentation result on the one hand, and greatly reduces the computation consumed in the segmentation process on the other.
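The test-stage replacement step can be sketched as follows (illustrative Python; the second network's refinement is mocked by a callback, since the real refinement is a U-Net forward pass):

```python
def cascade_merge(coarse_map, refine_fn, t=0.05):
    """Replace first-network predictions with second-network
    predictions at every pixel selected by the threshold rule.
    refine_fn stands in for the high-resolution second network."""
    merged = [row[:] for row in coarse_map]  # copy the coarse result
    for y, row in enumerate(coarse_map):
        for x, p in enumerate(row):
            if p > t:                        # only suspected positives
                merged[y][x] = refine_fn(y, x, p)
    return merged

coarse = [[0.01, 0.60],
          [0.40, 0.02]]
# Mock refinement: pretend the second network confirms with p = 0.95.
refined = cascade_merge(coarse, lambda y, x, p: 0.95)
```

Only the two suspicious pixels are re-evaluated; the confidently negative ones keep their cheap low-resolution predictions, which is where the halved test time comes from.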
The data used is the ACDC-LungHP dataset released for the ISBI 2019 challenge, which contains 150 annotated lung-cancer pathology full sections. In the experiments, 100 slides were randomly selected as the training set, of which 80% were used to train the network and 20% served as the validation set; the remaining 50 slides were used as the test set to verify the effectiveness of the network.
The loss function of the invention is designed on the basis of Dice Loss. First, the Dice coefficient (DSC) is one of the most commonly used evaluation indicators in segmentation; it measures the degree of overlap between a prediction map and the ground truth:
DSC = (2 Σ_i p_ic g_ic + ε) / (Σ_i p_ic + Σ_i g_ic + ε) (3)
where the sums run over i = 1, …, N; N is the total number of pixels in the segmentation prediction matrix; p_ic ∈ [0, 1] is the probability that the i-th pixel of the segmentation prediction matrix belongs to class c; g_ic is the label of the training sample, with g_ic = 1 indicating a positive sample and g_ic = 0 a negative sample; ε is a small positive constant, the ε in the numerator of equation (3) serving as a smoothing term and the ε in the denominator preventing division by zero.
Dice Loss (DL) is a Loss function designed with this index:
DL = 1 − DSC
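These two definitions translate directly into code (a minimal sketch; the ε value and function names are chosen for illustration):

```python
def dsc(pred, label, eps=1e-6):
    """Dice coefficient: overlap between the predicted probabilities
    p_ic and the binary labels g_ic, smoothed by eps in both the
    numerator and the denominator."""
    inter = sum(p * g for p, g in zip(pred, label))
    return (2 * inter + eps) / (sum(pred) + sum(label) + eps)

def dice_loss(pred, label, eps=1e-6):
    """Dice Loss: DL = 1 - DSC."""
    return 1.0 - dsc(pred, label, eps)

perfect = dice_loss([1.0, 0.0, 1.0], [1, 0, 1])  # near 0: full overlap
over_seg = dice_loss([1.0, 1.0, 1.0], [1, 0, 1])  # penalizes the extra pixel
```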
the disadvantage of the DL function is that False Positive (FP) and False Negative (FN) are treated equally, which cannot satisfy the requirement of the two networks of the cascade structure in the present invention.
When the first network of the invention segments the cancer region, a power of λ is applied on the basis of DL; the adopted cancer-region sensitivity loss function is:
CSL = (1 − DSC)^λ (2)
where λ ∈ (1, ∞) is used to treat positive and negative samples differently.
The reason why the cancer region sensitivity loss function holds is described in detail below:
as shown in fig. 3, assuming that the cancer region is circular and the area is 1, the change trend of CSL was observed with the area of the divided positive sample as a variable. When λ =1, i.e. the blue curve in the figure, is DL, and when λ > 1, the change in CSL with respect to the original DL is indicated by a black arrow, it can be seen that for the same magnitude of the loss function value (as loss = 2), the CSL is more inclusive for excessively segmenting the positive sample. Therefore, the first network in the cascaded networks proposed in this patent is trained by using the cancer region sensitivity loss function, so that a high recall rate can be obtained, and as many positive samples as possible can be segmented, so that the second network does not lose too many cancer regions in the process of optimizing the segmentation result. Experiments prove that the segmentation effect is best when the lambda =2.5, and the value is selected to carry out a series of experiments. And the second network selects the lambda =0.75 with the best segmentation effect to obtain a fine segmentation result, and the training of the whole network is completed.
Specifically, the training process is as follows. First, the first network is trained: one forward pass is executed to obtain a segmentation probability map, the error between the network prediction and the training ground truth is computed, and the weight parameters of the network are updated by stochastic gradient descent with momentum so as to reduce the error. The next iteration then proceeds: forward propagation is executed with the updated parameters, the error between prediction and target is computed, and the weights are updated continuously until the whole dataset has been cycled through 20 times. The input of the second network is the suspected positive-sample regions obtained from the first network's prediction according to equation (1); its training process is identical to that of the first network.
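A single parameter update of the optimizer described above looks like this (a sketch of standard SGD with momentum and weight decay; the variable names are illustrative and this is not the patent's actual training code):

```python
def sgd_momentum_step(w, grad, v, lr=0.01, momentum=0.9, weight_decay=1e-6):
    """One stochastic-gradient-descent update with momentum: the
    velocity v accumulates past gradients, and weight decay adds
    an L2 penalty term to the gradient before the step."""
    g = grad + weight_decay * w   # gradient with weight decay
    v = momentum * v - lr * g     # update velocity
    w = w + v                     # apply the step
    return w, v

# One step with the first network's settings (lr = 0.01, momentum = 0.9).
w, v = sgd_momentum_step(w=1.0, grad=0.5, v=0.0)
```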
The invention can improve the segmentation precision, reduce the computation of the network and increase the test speed. Compared with the prior-art Attention U-Net, which directly takes a high-resolution image as input to obtain a better segmentation result, the invention filters out a large number of easily identified negative-sample pixels at low resolution, so that the high-resolution network receives only difficult samples, which improves its segmentation capability. In addition, corresponding cancer-region sensitivity loss functions are designed for the different requirements of the two parts of the cascade network, giving the first network higher sensitivity and the second network better specificity. The segmentation precision is better than that of directly segmenting the high-resolution image, about half of the processing time is saved, and the efficiency of the segmentation network is greatly improved.
Table 1 compares the segmentation effect of the invention with the prior-art Attention U-Net and Multi-scale-input Attention U-Net. The network of the invention, CSC-Net, obtains the highest DSC and precision, and its running time is about half that of the previous methods, greatly improving segmentation efficiency. The cascade strategy was also verified with existing loss functions, confirming its effectiveness, and the effect of the cancer-region sensitivity loss function was tested; the quantitative results are given in Table 1. The proposed cancer-region sensitivity loss CSL enables the first, low-resolution network to obtain the highest recall and to segment the cancer region as comprehensively as possible. Intermediate results of the two CSC-Net stages are shown in fig. 4: it can be observed visually that the first network segments noticeably more cancer area, while the result after the second network is more refined and closer to the ground truth. This segmentation effect is consistent with the data in Table 1.
Table 1 gives the quantitative comparison of CSC-Net with the other segmentation methods (DSC: Dice Score Coefficient; Recall; Precision; Time: total test time over the 50 test slides).
Table 1:
(Table 1 is reproduced as an image in the original publication; its numerical values are not available in text form.)
the embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. A pathology full-section image segmentation algorithm based on a cascade thought, characterized by comprising two U-Net structures, wherein a first network is trained with samples collected under low resolution and filters out easily segmented regions to obtain a cancer-region segmentation result; a second network optimizes the cancer-region segmentation result of the first network; the training samples of the second network are n × n regions, determined from the segmentation result map of the first network according to the following formula:

R = { k | p_k > t } (1)

where p_k is the probability value of the k-th pixel of the segmentation result map output by the first network, and t is a threshold;

when the first network segments the cancer region, the cancer-region sensitivity loss function designed on the basis of the Dice coefficient is:

CSL = (1 − DSC)^λ (2)

DSC = (2 Σ_i p_ic g_ic + ε) / (Σ_i p_ic + Σ_i g_ic + ε) (3)

where the sums run over i = 1, …, N; N is the total number of pixels in the segmentation prediction matrix; p_ic ∈ [0, 1] is the probability that the i-th pixel of the segmentation prediction matrix belongs to class c; g_ic is the label of the training sample, g_ic = 1 indicating that the pixel is a positive sample and g_ic = 0 that it is a negative sample; λ ∈ (1, ∞) is used to treat positive and negative samples differently; and ε is a small positive constant.
2. The pathological full-slice image segmentation algorithm based on the cascade thought as claimed in claim 1, wherein the threshold t is set to 0.05.
3. The pathological full-slice image segmentation algorithm based on the cascade thought as claimed in claim 1 or 2, wherein the first network takes the pathological full-slice image under a 5× objective as its input, and the size of an input image block is 512 × 512 × 3 pixels.
4. The pathological full-slice image segmentation algorithm based on the cascade thought as claimed in claim 1, wherein the model of the U-Net structure comprises: a contraction path, composed of convolution layers and down-sampling layers, for extracting the context information of the image; and an expansion path, composed of convolution layers and up-sampling layers, for recovering accurate position information and fusing shallow and deep features to avoid loss of shallow structural features, finally producing an accurate segmentation result map.
5. The pathological full-slice image segmentation algorithm based on the cascade thought as claimed in claim 4, wherein the model of the U-Net structure comprises four 2 × 2 down-sampling layers and four symmetrical 2 × 2 up-sampling layers; the contraction path and the expansion path each comprise two convolution layers with 3 × 3 kernels and a ReLU activation function; the end of the expansion path is a convolution layer with a 1 × 1 kernel and a sigmoid activation function.
6. The pathological full-slice image segmentation algorithm based on the cascade thought as claimed in claim 5, wherein the optimization method adopted by both the first network and the second network during training is stochastic gradient descent with momentum; the initial learning rate is 0.01 for the first network and 0.001 for the second network, the momentum is 0.9, the weight decay is 1e-6, the batch size of the first network is 8, and that of the second network is 2.
CN201911235144.4A 2019-12-05 2019-12-05 Pathological full-section image segmentation algorithm based on cascade thought Active CN111047606B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911235144.4A CN111047606B (en) 2019-12-05 2019-12-05 Pathological full-section image segmentation algorithm based on cascade thought


Publications (2)

Publication Number Publication Date
CN111047606A CN111047606A (en) 2020-04-21
CN111047606B true CN111047606B (en) 2022-10-04

Family

ID=70234754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911235144.4A Active CN111047606B (en) 2019-12-05 2019-12-05 Pathological full-section image segmentation algorithm based on cascade thought

Country Status (1)

Country Link
CN (1) CN111047606B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111724401A (en) * 2020-05-08 2020-09-29 华中科技大学 Image segmentation method and system based on boundary constraint cascade U-Net
CN111915622B (en) * 2020-07-09 2024-01-23 沈阳先进医疗设备技术孵化中心有限公司 Training of image segmentation network model and image segmentation method and device
CN112529908B (en) * 2020-12-03 2022-10-04 北京航空航天大学 Digital pathological image segmentation method based on cascade convolution network and model thereof
CN113610035B (en) * 2021-08-16 2023-10-10 华南农业大学 Rice tillering stage weed segmentation and identification method based on improved coding and decoding network
CN116310391B (en) * 2023-05-18 2023-08-15 安徽大学 Identification method for tea diseases

Citations (4)

Publication number Priority date Publication date Assignee Title
CN109493346A (en) * 2018-10-31 2019-03-19 浙江大学 It is a kind of based on the gastric cancer pathology sectioning image dividing method more lost and device
CN109598728A (en) * 2018-11-30 2019-04-09 腾讯科技(深圳)有限公司 Image partition method, device, diagnostic system and storage medium
CN109993735A (en) * 2019-03-29 2019-07-09 成都信息工程大学 Image partition method based on concatenated convolutional
CN110211140A (en) * 2019-06-14 2019-09-06 重庆大学 Abdominal vascular dividing method based on 3D residual error U-Net and Weighted Loss Function


Non-Patent Citations (5)

Title
A Novel Focal Tversky Loss Function with Improved Attention U-Net for Lesion Segmentation; Nabila Abraham et al.; arXiv; 2018-10-18; Section 2. *
Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully Convolutional Neural Networks and 3D Conditional Random Fields; Patrick Ferdinand Christ; arXiv; 2016-10-07; Abstract, Section 2. *
Cancer Sensitive Cascaded Networks (CSC-Net) for Efficient Histopathology Whole Slide Image Segmentation; Shujiao Sun et al.; 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); 2020-04-07. *
Deep Learning Methods for Lung Cancer Segmentation in Whole-slide; Zhang Li et al.; arXiv; 2020-08-21. *

Also Published As

Publication number Publication date
CN111047606A (en) 2020-04-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant