CN113160142A - Brain tumor segmentation method fusing prior boundary - Google Patents

Brain tumor segmentation method fusing prior boundary

Info

Publication number
CN113160142A
CN113160142A (application CN202110314971.3A)
Authority
CN
China
Prior art keywords
boundary
tumor
optimal
network
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110314971.3A
Other languages
Chinese (zh)
Inventor
赵昶辰
陆星州
曾庆润
赵志明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202110314971.3A priority Critical patent/CN113160142A/en
Publication of CN113160142A publication Critical patent/CN113160142A/en
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/90
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Abstract

A brain tumor segmentation method fusing prior boundaries addresses two shortcomings of existing convolutional networks, which cannot fully exploit global image information: coarse segmentation boundaries and frequent false reconstructions of the tumor. The method obtains the optimal boundary of the tumor true value from tumor prior knowledge, constructs an optimal-boundary generation network, feeds the optimal boundary into a multi-down-sampling-channel 3D U-net to perform weight assignment and boundary enhancement at every layer of the network, and adds the similarity between the generated tumor edge and the true-value tumor edge to the original loss function as a loss term to improve the accuracy of edge segmentation. By exploiting the tumor information of MRI images of different modalities through multiple down-sampling channels and fusing prior knowledge into both the network and the loss function, the invention improves the completeness of tumor-information utilization and the accuracy of tumor-edge segmentation.

Description

Brain tumor segmentation method fusing prior boundary
Technical Field
The invention relates to medical image processing, in particular to a brain tumor MRI image segmentation method.
Background
Brain tumor image segmentation based on medical images is an application of computer vision in medical image processing; tumor regions are separated from other tissue regions mainly by using image features. Reliable brain tumor segmentation is critical for accurate medical diagnosis and subsequent treatment. Since manual segmentation of brain tumors requires expert labeling and tissue screening, a very time-consuming, expensive and subjective task, practical automated methods are highly desirable. However, because brain tumors are highly heterogeneous in location, shape and size, developing automatic segmentation methods has remained a formidable task for decades.
Various segmentation methods based on brain tumor MRI images have been proposed. The most typical conventional methods are threshold-based and region-based segmentation; they cannot extract all of the information in an MRI image, and their results are relatively coarse. Fuzzy-clustering-based methods, which segment according to image gray levels without prior information, are easily affected by noise in the image, limiting the applicability of such algorithms. Since 2014, research on convolutional-network-based MRI brain tumor segmentation has grown rapidly. These methods automatically extract tumor features from the input image through data-driven learning, can realize brain tumor segmentation automatically, are robust to noise, and greatly relieve the burden on medical workers. However, the edge between the tumor and normal tissue is very fuzzy and the gray levels overlap; global image information cannot be fully exploited, so the reconstructed brain tumor is prone to false positives and false negatives, fuzzy boundaries are not segmented accurately enough, and the segmentation effect still needs improvement.
Disclosure of Invention
In order to solve the problems that existing convolutional networks cannot fully utilize global image information, producing coarse brain tumor segmentation boundaries and frequent false reconstructions of the tumor, the invention provides a brain tumor segmentation method fusing prior boundaries.
The technical scheme adopted by the invention to solve the technical problems is as follows:
a brain tumor segmentation method fusing prior boundaries, the method comprising the following steps:
step one: extracting the prior boundary features of the brain tumor, the process being as follows:
obtaining the tumor boundary with an erosion algorithm, randomly sampling boundary points and iterating to find the boundary points whose enclosed space has the highest similarity to the true value, taking these points as the optimal boundary points, and connecting the optimal boundary points in order to form the optimal boundary;
step two: training the optimal-boundary generation network model, the process being as follows:
taking the original brain tumor image as input, calculating the loss between the output and the optimal tumor boundary, and training the optimal-boundary generation network;
step three: building the 3D U-net basic network model with multiple down-sampling channels, the process being as follows:
acquiring MRI images of two modalities, down-sampling both modalities simultaneously on the basis of 3D U-net, fusing bottom-level details during deconvolution up-sampling, and outputting the brain tumor segmentation map;
step four: adding a boundary loss term to the loss function, as follows:
calculating the similarity between the generated tumor edge and the true tumor edge, and adding it to the loss function as a loss term to improve edge-segmentation accuracy;
step five: building the weight assignment module and boundary enhancement module from the optimal boundary and joining them to the basic network, the process being as follows:
down-sampling the corresponding optimal boundary in the training set together with the two modalities, calculating the similarity between the optimal boundary and each modality once per layer, and assigning weights to the two modalities according to the similarity; then superposing the boundary of the same layer on both modalities, enhancing the modal boundaries with the prior boundary knowledge.
Further, in step one, the process of extracting the brain tumor prior boundary features is as follows: erode the existing tumor true value and subtract the eroded true value from the original tumor true value to obtain the tumor boundary; randomly select N points on this boundary, fill the space enclosed by the points, and calculate the similarity between the enclosed space and the true value; repeat the operation a set number of times, finally take the points whose filled space has the highest similarity to the true value as the optimal boundary points, and connect the optimal boundary points in order to form the optimal boundary.
Further, in step two, the process of training the optimal-boundary generation network model comprises:
taking the original brain tumor training set as network input, and calculating the loss between the output features and the optimal tumor boundary:
Ldice(t, F(x)) (1)
wherein t represents the optimal tumor boundary extracted from the true-value training set, x represents the original brain tumor image, and F represents the boundary generation network;
and finally, training the optimal-boundary generation network.
Furthermore, in step three, the process of building the 3D U-net basic network model with multiple down-sampling channels is as follows:
the 3D U-net network replaces the two-dimensional convolutions of the U-net network with three-dimensional convolution operations, making it better suited to training on volumetric brain tumor MRI images; the MRI images of the two modalities are down-sampled synchronously to fully utilize the tumor information in the images, yielding two feature maps of 8×8×8 voxels with 512 channels each, which are then fused; the fused features are up-sampled by deconvolution, keeping the channel count unchanged while the size grows to 16×16×16, the upper path being fused with the bottom-level features of one modality and the lower path with those of the other to obtain 1024-channel features; the deconvolution, fusion and convolution operations are repeated to obtain a tumor segmentation map of size 64×64×64;
inputting the original brain tumor image, and calculating the loss function between the output features and the tumor true value:
Ldice(y, G(x)) = 1 - 2|y ∩ G(x)| / (|y| + |G(x)|) (2)
wherein y represents the tumor true value, G represents the network model, and G(x) represents the output features.
In the fourth step, the process of adding the boundary loss term to the loss function is as follows:
the edge features of the generated tumor and the true tumor edge are both obtained with an erosion algorithm, and the difference between the two gives the similarity measure between the edge of the generated mask and the true edge; this similarity is added to the loss function as a loss term to improve edge-segmentation accuracy:
Lb = e^(-log(B(G(x)), B(y))) (3)
wherein B represents the tumor-edge extraction method;
the loss of the generation network is written as:
L = Ldice(y, G(x)) + λLb (4).
In the fifth step, the process of building the weight assignment module and boundary enhancement module from the optimal boundary and joining them to the basic network is as follows:
obtaining the optimal boundary of the tumor true value in the training set, down-sampling the boundary together with the two modalities, and calculating the similarity between the boundary and each modality at every down-sampling layer using the dice coefficient:
d1n = 2|tn ∩ x1n| / (|tn| + |x1n|) (5)
d2n = 2|tn ∩ x2n| / (|tn| + |x2n|) (6)
wherein n represents the index of the down-sampling layer, tn represents the optimal boundary at layer n, and x1n and x2n represent the original brain tumor images of the two modalities at layer n;
modal weights are then assigned according to the similarity, and after weighting, the boundary of the same layer is superposed on both modalities to enhance the modal boundaries:
W1n = d1n / (d1n + d2n) (7)
x'1n = W1n · x1n + tn (8)
wherein, taking x1 as an example, x'1n represents the output of modality x1 after weight assignment and boundary enhancement at layer n and continues the weight-assignment and boundary-enhancement iteration at the next layer, and W1n represents the weight assigned to modality x1 at layer n.
The invention has the following beneficial effects: tumor information from MRI images of different modalities is utilized through multiple down-sampling channels, and prior knowledge is fused into both the network and the loss function, improving the completeness of tumor-information utilization and the accuracy of tumor-edge segmentation.
Drawings
Fig. 1 is a schematic diagram of a boundary-generating network architecture.
FIG. 2 is a weight assignment and boundary enhancement module.
FIG. 3 is a schematic diagram of the multi-down-sampling-channel 3D U-net network architecture incorporating the boundary prior designed by the method.
FIG. 4 is an overall framework of the present method in network training and prediction.
Detailed description of the preferred embodiments
The present invention is further described below.
Referring to fig. 1 to 4, a brain tumor segmentation method fusing a prior boundary includes the following steps:
Step one: extract the prior boundary features of the brain tumor (Extract the Boundary, EB). The process is as follows:
erode the existing tumor true value and subtract the eroded true value from the original tumor true value to obtain the tumor boundary; randomly select N points on this boundary, fill the space enclosed by the points, and calculate the similarity between the enclosed space and the true value; repeat the operation many times, finally take the points whose filled space has the highest similarity to the true value as the optimal boundary points, and connect the optimal boundary points in order to form the optimal boundary;
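A minimal NumPy sketch may help make this step concrete. The erosion, the Dice similarity, and the random search over boundary points follow the description above; the fill of the space enclosed by the sampled points is simplified here to an axis-aligned bounding box (the text does not specify the fill method), and all function names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def erode(mask):
    """One step of binary erosion with a 3x3 square structuring element."""
    p = np.pad(mask.astype(bool), 1, constant_values=False)
    h, w = mask.shape
    out = np.ones((h, w), dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out &= p[1 + di:1 + di + h, 1 + dj:1 + dj + w]
    return out

def dice(a, b):
    """Dice similarity between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    s = a.sum() + b.sum()
    return 1.0 if s == 0 else 2.0 * (a & b).sum() / s

def optimal_boundary(truth, n_points=8, n_iter=50):
    """Randomly sample boundary points, score the region they enclose against
    the true value, and keep the best-scoring point set (the optimal boundary)."""
    boundary = truth.astype(bool) & ~erode(truth)   # truth minus eroded truth
    ys, xs = np.nonzero(boundary)
    best_pts, best_score = None, -1.0
    for _ in range(n_iter):
        idx = rng.choice(len(ys), size=min(n_points, len(ys)), replace=False)
        py, px = ys[idx], xs[idx]
        fill = np.zeros_like(truth, dtype=bool)     # simplified bounding-box fill
        fill[py.min():py.max() + 1, px.min():px.max() + 1] = True
        score = dice(fill, truth)
        if score > best_score:
            best_score, best_pts = score, (py, px)
    return best_pts, best_score
```

On a simple rectangular mask the search quickly recovers a point set whose enclosed region nearly coincides with the truth; a convex-hull or polygon fill would be the natural refinement of the bounding-box stand-in.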
Step two: train the optimal-boundary generation network model. The process is as follows:
take the original brain tumor training set as network input, and calculate the loss between the output features and the optimal tumor boundary:
Ldice(t, F(x)) (1)
wherein t represents the optimal tumor boundary extracted from the true-value training set, x represents the original brain tumor image, and F represents the boundary generation network;
finally, train the optimal-boundary generation network. The network architecture is shown in fig. 1.
Step three: building a 3D U-net basic network model of a multi-down-sampling channel, wherein the process is as follows:
The 3D U-net network replaces the two-dimensional convolutions of the U-net network with three-dimensional convolution operations, making it better suited to training on volumetric brain tumor MRI images. To fully utilize the tumor information in the images, the MRI images of the two modalities are down-sampled synchronously, yielding two feature maps of 8×8×8 voxels with 512 channels each, which are then fused. The fused features are up-sampled by deconvolution, keeping the channel count unchanged while the size grows to 16×16×16; the upper path is fused with the bottom-level features of one modality and the lower path with those of the other, giving 1024-channel features. The deconvolution, fusion and convolution operations are repeated to finally obtain a tumor segmentation map of size 64×64×64;
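The stated sizes can be cross-checked with a small bookkeeping helper. This sketch only tracks spatial side lengths and the fused channel count; the 64-voxel input size and the halving-per-layer scheme are assumptions consistent with the stated 8-voxel bottleneck (512 channels per modality), the 16-voxel first up-sampling step, the 1024 fused channels and the 64-voxel output:

```python
def dual_channel_plan(in_size=64, bottleneck=8, bottleneck_ch=512):
    """Side lengths along one modality's down-sampling path, the mirrored
    up-sampling path, and the channel count after fusing both 512-channel
    bottlenecks. in_size=64 is an assumption; 512 and 1024 are stated."""
    down = [in_size]
    while down[-1] > bottleneck:
        down.append(down[-1] // 2)   # halve the side length per layer
    up = down[-2::-1]                # decoder mirrors the encoder
    fused_ch = 2 * bottleneck_ch     # two modalities fused at the bottleneck
    return down, up, fused_ch
```

Under these assumptions the encoder visits 64, 32, 16, 8 and the decoder returns through 16, 32, 64, matching the sizes quoted in the text.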
input the original brain tumor image, and calculate the loss function between the output features and the tumor true value:
Ldice(y, G(x)) = 1 - 2|y ∩ G(x)| / (|y| + |G(x)|) (2)
wherein y represents the tumor true value, G represents the network model, and G(x) represents the output features;
step four: adding a boundary loss term to the loss function as follows:
The edge features of the generated tumor and the true tumor edge are both obtained with an erosion algorithm, and the difference between the two gives the similarity measure between the edge of the generated mask and the true edge. This similarity is added to the loss function as a loss term to improve edge-segmentation accuracy:
Lb = e^(-log(B(G(x)), B(y))) (3)
wherein B represents the tumor-edge extraction method;
the loss of the generation network is written as:
L=Ldice(y,G(x))+λLb (4)
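A minimal NumPy sketch of this combined loss, eqs. (3) and (4), is given below. The soft Dice loss is the standard form; reading eq. (3) as e raised to minus the log of the Dice similarity between the two edge maps (i.e. the reciprocal of that similarity) is an assumption, as is the 4-neighbour erosion used for the edge extractor B:

```python
import numpy as np

def edge(mask):
    """B: edge map as erosion residue (mask minus its 4-neighbour erosion)."""
    m = mask.astype(bool)
    p = np.pad(m, 1, constant_values=False)
    er = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
          & p[1:-1, :-2] & p[1:-1, 2:])
    return m & ~er

def dice_loss(pred, truth, eps=1e-6):
    """Ldice: soft Dice loss between a probability map and a binary truth."""
    inter = (pred * truth).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def boundary_loss(pred_mask, truth, eps=1e-6):
    """Lb = exp(-log(Dice(B(G(x)), B(y)))), read here as 1/Dice of the edges."""
    a, b = edge(pred_mask), edge(truth)
    d = (2.0 * (a & b).sum() + eps) / (a.sum() + b.sum() + eps)
    return float(np.exp(-np.log(d)))

def total_loss(pred, truth, lam=0.1, thresh=0.5):
    """L = Ldice(y, G(x)) + lambda * Lb (eq. 4); lam and thresh are free choices."""
    return dice_loss(pred, truth) + lam * boundary_loss(pred > thresh, truth)
```

With a perfect prediction the Dice term vanishes and the boundary term reaches its minimum of 1, so λ also sets the floor of the total loss under this reading of eq. (3).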
step five: joining the basic network by using an optimal Boundary construction Weight distribution module And a Boundary enhancement module (WAB), wherein the process is as follows:
obtain the optimal boundary of the tumor true value in the training set and down-sample it together with the two modalities. At each down-sampling layer, the similarity between the boundary and each modality is calculated using the dice coefficient:
d1n = 2|tn ∩ x1n| / (|tn| + |x1n|) (5)
d2n = 2|tn ∩ x2n| / (|tn| + |x2n|) (6)
where n denotes the index of the down-sampling layer, tn denotes the optimal boundary at layer n, and x1n and x2n denote the original brain tumor images of the two modalities at layer n.
Modal weights are then assigned according to the similarity, and after weighting, the boundary of the same layer is superposed on both modalities to enhance the modal boundaries:
W1n = d1n / (d1n + d2n) (7)
x'1n = W1n · x1n + tn (8)
wherein, taking x1 as an example, x'1n denotes the output of modality x1 after weight assignment and boundary enhancement at layer n and continues the weight-assignment and boundary-enhancement iteration at the next layer, and W1n denotes the weight assigned to modality x1 at layer n. The weight assignment and boundary enhancement module framework is shown in fig. 2.
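One layer of the weight assignment and boundary enhancement module can be sketched as follows. The Dice-based weighting and the additive superposition of the layer's boundary follow the description above, but the normalisation of the two weights to sum to one, the 0.5 binarisation threshold, and the function names are assumptions, since the source gives the formulas only as figures:

```python
import numpy as np

def dice(a, b):
    """Dice similarity between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    s = a.sum() + b.sum()
    return 1.0 if s == 0 else 2.0 * (a & b).sum() / s

def wab_layer(x1, x2, t, thresh=0.5):
    """Weight assignment and boundary enhancement at one down-sampling layer:
    weight each modality by its Dice similarity to the layer's optimal
    boundary t, then superpose t on both weighted modalities."""
    d1 = dice(x1 > thresh, t)
    d2 = dice(x2 > thresh, t)
    total = d1 + d2
    w1 = d1 / total if total > 0 else 0.5
    w2 = 1.0 - w1
    x1_out = w1 * x1 + t.astype(float)   # boundary-enhanced modality 1
    x2_out = w2 * x2 + t.astype(float)   # boundary-enhanced modality 2
    return x1_out, x2_out, w1, w2
```

In the full network this would run once per down-sampling layer, with t down-sampled alongside the two modalities and the enhanced outputs feeding the next layer's iteration.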
Finally, the multi-down-sampling-channel 3D U-net network framework fusing the boundary prior is shown in FIG. 3, and the overall framework for network training and prediction is shown in fig. 4.
The embodiments described in this specification merely illustrate the inventive concept and are given for purposes of illustration only. The scope of the present invention should not be construed as limited to the particular forms set forth in the embodiments, but also covers equivalent technical means that those skilled in the art can conceive on the basis of the inventive concept.

Claims (6)

1. A brain tumor segmentation method fusing prior boundaries, characterized in that the method comprises the following steps:
step one: extracting the prior boundary features of the brain tumor, the process being as follows:
obtaining the tumor boundary with an erosion algorithm, randomly sampling boundary points and iterating to find the boundary points whose enclosed space has the highest similarity to the true value, taking these points as the optimal boundary points, and connecting the optimal boundary points in order to form the optimal boundary;
step two: training the optimal-boundary generation network model, the process being as follows:
taking the original brain tumor image as input, calculating the loss between the output and the optimal tumor boundary, and training the optimal-boundary generation network;
step three: building the 3D U-net basic network model with multiple down-sampling channels, the process being as follows:
acquiring MRI images of two modalities, down-sampling both modalities simultaneously on the basis of 3D U-net, fusing bottom-level details during deconvolution up-sampling, and outputting the brain tumor segmentation map;
step four: adding a boundary loss term to the loss function, as follows:
calculating the similarity between the generated tumor edge and the true tumor edge, and adding it to the loss function as a loss term to improve edge-segmentation accuracy;
step five: building the weight assignment module and boundary enhancement module from the optimal boundary and joining them to the basic network, the process being as follows:
down-sampling the corresponding optimal boundary in the training set together with the two modalities, calculating the similarity between the optimal boundary and each modality once per layer, and assigning weights to the two modalities according to the similarity; then superposing the boundary of the same layer on both modalities, enhancing the modal boundaries with the prior boundary knowledge.
2. The brain tumor segmentation method fusing prior boundaries according to claim 1, characterized in that in step one, the process of extracting the prior boundary features of the brain tumor comprises: eroding the existing tumor true value and subtracting the eroded true value from the original tumor true value to obtain the tumor boundary; randomly selecting N points on this boundary, filling the space enclosed by the points, and calculating the similarity between the enclosed space and the true value; repeating the operation a set number of times, finally taking the points whose filled space has the highest similarity to the true value as the optimal boundary points, and connecting the optimal boundary points in order to form the optimal boundary.
3. The brain tumor segmentation method fusing prior boundaries according to claim 1 or 2, characterized in that in step two, the process of training the optimal-boundary generation network model comprises:
taking the original brain tumor training set as network input, and calculating the loss between the output features and the optimal tumor boundary:
Ldice(t, F(x)) (1)
wherein t represents the optimal tumor boundary extracted from the true-value training set, x represents the original brain tumor image, and F represents the boundary generation network;
and finally, training the optimal-boundary generation network.
4. The brain tumor segmentation method fusing prior boundaries according to claim 1 or 2, characterized in that in step three, the process of building the 3D U-net basic network model with multiple down-sampling channels is as follows:
the 3D U-net network replaces the two-dimensional convolutions of the U-net network with three-dimensional convolution operations, making it better suited to training on volumetric brain tumor MRI images; the MRI images of the two modalities are down-sampled synchronously to fully utilize the tumor information in the images, yielding two feature maps of 8×8×8 voxels with 512 channels each, which are then fused; the fused features are up-sampled by deconvolution, keeping the channel count unchanged while the size grows to 16×16×16, the upper path being fused with the bottom-level features of one modality and the lower path with those of the other to obtain 1024-channel features; the deconvolution, fusion and convolution operations are repeated to obtain a tumor segmentation map of size 64×64×64;
inputting the original brain tumor image, and calculating the loss function between the output features and the tumor true value:
Ldice(y, G(x)) = 1 - 2|y ∩ G(x)| / (|y| + |G(x)|) (2)
wherein y represents the tumor true value, G represents the network model, and G(x) represents the output features.
5. The brain tumor segmentation method fusing prior boundaries according to claim 1 or 2, characterized in that in step four, the process of adding the boundary loss term to the loss function is as follows:
the edge features of the generated tumor and the true tumor edge are both obtained with an erosion algorithm, and the difference between the two gives the similarity measure between the edge of the generated mask and the true edge; this similarity is added to the loss function as a loss term to improve edge-segmentation accuracy:
Lb = e^(-log(B(G(x)), B(y))) (3)
wherein B represents the tumor-edge extraction method;
the loss of the generation network is written as:
L = Ldice(y, G(x)) + λLb (4).
6. The brain tumor segmentation method fusing prior boundaries according to claim 1 or 2, characterized in that in step five, the process of building the weight assignment module and boundary enhancement module from the optimal boundary and joining them to the basic network is as follows:
obtaining the optimal boundary of the tumor true value in the training set, down-sampling the boundary together with the two modalities, and calculating the similarity between the boundary and each modality at every down-sampling layer using the dice coefficient:
d1n = 2|tn ∩ x1n| / (|tn| + |x1n|) (5)
d2n = 2|tn ∩ x2n| / (|tn| + |x2n|) (6)
wherein n represents the index of the down-sampling layer, tn represents the optimal boundary at layer n, and x1n and x2n represent the original brain tumor images of the two modalities at layer n;
modal weights are then assigned according to the similarity, and after weighting, the boundary of the same layer is superposed on both modalities to enhance the modal boundaries:
W1n = d1n / (d1n + d2n) (7)
x'1n = W1n · x1n + tn (8)
wherein, taking x1 as an example, x'1n represents the output of modality x1 after weight assignment and boundary enhancement at layer n and continues the weight-assignment and boundary-enhancement iteration at the next layer, and W1n represents the weight assigned to modality x1 at layer n.
CN202110314971.3A 2021-03-24 2021-03-24 Brain tumor segmentation method fusing prior boundary Pending CN113160142A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110314971.3A CN113160142A (en) 2021-03-24 2021-03-24 Brain tumor segmentation method fusing prior boundary


Publications (1)

Publication Number Publication Date
CN113160142A true CN113160142A (en) 2021-07-23

Family

ID=76884584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110314971.3A Pending CN113160142A (en) 2021-03-24 2021-03-24 Brain tumor segmentation method fusing prior boundary

Country Status (1)

Country Link
CN (1) CN113160142A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113554642A (en) * 2021-08-12 2021-10-26 北京安德医智科技有限公司 Focus robust brain region positioning method and device, electronic equipment and storage medium
CN113554642B (en) * 2021-08-12 2022-03-11 北京安德医智科技有限公司 Focus robust brain region positioning method and device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination