CN116452618A - Three-input spine CT image segmentation method - Google Patents


Info

Publication number
CN116452618A
Authority
CN
China
Prior art keywords
spine
slices
input
slice
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310338594.6A
Other languages
Chinese (zh)
Inventor
刘立佳
顾施吉
王义文
张馨泽
周长利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology
Priority to CN202310338594.6A
Publication of CN116452618A
Legal status: Pending

Classifications

    • G06T 7/12 Edge-based segmentation (G06T 7/10 Segmentation; edge detection; G06T 7/00 Image analysis)
    • G06N 3/0455 Auto-encoder networks; encoder-decoder networks
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/048 Activation functions (G06N 3/04 Architecture; G06N 3/02 Neural networks)
    • G06T 5/20 Image enhancement or restoration by the use of local operators
    • G06T 7/0012 Biomedical image inspection (G06T 7/0002 Inspection of images)
    • G06T 2207/10081 Computed x-ray tomography [CT] (G06T 2207/10072 Tomographic images)
    • G06T 2207/20028 Bilateral filtering (G06T 2207/20024 Filtering details)
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30012 Spine; Backbone (G06T 2207/30008 Bone; G06T 2207/30004 Biomedical image processing)
    • Y02T 10/40 Engine management systems

Abstract

The invention relates to the field of medical image segmentation, and in particular to a three-input spine CT image segmentation method. The method comprises the following steps. First, the three-dimensional spine CT image is converted into two-dimensional spine slices, which are renamed and adjusted in size and channel number. Second, bilateral filtering and normalization are applied to the two-dimensional slices as data preprocessing. Third, the U-Net network model is improved by changing the single input into three inputs, namely three consecutive spine slices, with the encoder weights shared across the slices. A multi-scale feature extraction module and an attention module are then added on the U-Net base: the multi-scale feature extraction module increases the global feature extraction capability, and the attention module exploits the correlations of in-slice information. Finally, the optimal network model is obtained through ablation experiments. By taking three consecutive spine slices as input, the method addresses the poor segmentation quality and low accuracy caused by the weak spine-edge extraction capability of two-dimensional segmentation models.

Description

Three-input spine CT image segmentation method
Technical field:
the invention relates to the field of medical image segmentation, in particular to a three-input spine CT image segmentation method.
Background:
Early medical image analysis consisted of segmentation techniques for specific medical tasks built from manually specified rules, in which image foreground organs were modeled mathematically with hand-crafted operators. By the end of the 1990s, machine learning for computer-aided detection and diagnosis had attracted considerable attention in the field of medical image analysis. In spine CT images, the vertebral structures often appear at low gray levels, the boundary contours vary in highly complex ways, the spine shape itself is complex, adjacent structures look similar, and the spatial interrelationship between vertebrae and surrounding tissue makes automatic vertebra segmentation extremely difficult.
Traditional spine CT image segmentation algorithms replaced manual segmentation and achieved automatic segmentation of spine CT images. However, they depend heavily on the doctor defining and selecting image features through manual labeling, so the doctor's subjective factors strongly influence the diagnostic result; wrongly selected or missed features make the segmentation result unreliable, failing to assist the doctor's diagnosis and misguiding the design of the subsequent treatment plan. Achieving fast and accurate automatic segmentation of spine images, while improving the edge extraction capability, therefore has important clinical significance.
Summary of the invention:
Aiming at the poor segmentation quality and low accuracy caused by the weak spine-edge extraction capability of existing two-dimensional segmentation models, the invention provides a three-input spine CT image segmentation method. It simulates how a doctor judges symptoms by observing consecutive slices, improving both the utilization of in-slice information and the edge extraction capability.
In order to achieve the above purpose, the present invention provides a three-input spine CT image segmentation method, comprising the steps of:
step 1: converting the three-dimensional spine CT image into two-dimensional spine slices, and renaming and adjusting the sizes;
step 2: performing data preprocessing on the two-dimensional spine slice;
step 3: the U-Net network model is used as the baseline network, and the single input is changed into three consecutive spine slices, i.e., a three-input network framework; this is the DD-Net network framework proposed by the invention;
step 4: a multi-scale feature extraction module is incorporated at the encoding end of DD-Net to increase the global feature extraction capability;
step 5: an attention module, the AT module of the invention, is integrated at the decoding end of DD-Net;
step 6: obtaining a network optimal model through an ablation experiment;
preferably, in step 1, the slices are renamed in the format verse_img_ followed by the patient number; the two-dimensional slices are then screened, and slices at the beginning and end of the sequence that contain no vertebral pixels are removed, so that 500 slices of each patient's three-dimensional spine image are stored; finally, the two-dimensional spine slices are uniformly resized to 256×256 and the number of channels is changed to 1;
preferably, in step 2, the data preprocessing comprises dividing the slices of the 80 processed patients into a training set, a validation set and a test set at an 8:1:1 ratio, enhancing the slice contrast with a bilateral filtering method, and carrying out a normalization operation to improve the comparability among data indexes;
preferably, in step 3, the DD-Net network framework changes the single input into three consecutive spine slices; three branches are arranged in the DD-Net network, the encoding side of each branch consists of a multi-scale feature extraction module and a pooling layer, and the decoding side consists of an upsampling module and an attention module; since the feature content to be extracted is the same for each slice, the encoder weights of the three input slices are shared;
preferably, in step 3, the three consecutive slices are named as follows: the middle slice, which is the slice currently being processed, is named archslice and enters the second branch of the DD-Net network for image segmentation processing; the front and back adjacent slices are named preslice and subslice and enter the first and third branches, performing feature fusion with archslice at the encoding side and generating an attention weight map for archslice at the decoding side, namely the AT module of the invention;
preferably, in step 3, feature fusion is performed on archslice at the encoding side: to avoid losing detail during downsampling, the high-level feature maps of the adjacent slices preslice and subslice are concatenated and fused with that of the middle slice archslice.
Preferably, in step 4, the added multi-scale feature extraction module is an Inception V2 module;
preferably, in step 5, an attention module, the AT module modified in the invention, is incorporated at the decoding side; it uses the adjacent slices to generate an attention feature map carrying feature weights, highlighting boundary features;
preferably, in step 6, an Inception V2 module and the AT module of the invention are added to the DD-Net network through ablation experiments to obtain the optimal network model.
Compared with the prior art, the invention has the following beneficial effects:
Compared with traditional spine segmentation methods, which segment a single picture, the method adopted by the invention increases the single input to three inputs and uses the information of the front and back adjacent slices to improve the edge extraction capability.
Drawings
FIG. 1 is a flow chart of a three-input spine CT image segmentation method in an embodiment of the invention;
FIG. 2 is a diagram of a DD-Net network architecture in accordance with an embodiment of the present invention;
FIG. 3 is an acceptance V2 module in an embodiment of the invention;
fig. 4 is an AT module in an embodiment of the invention.
Detailed Description
The present invention will be described in further detail below.
For the purpose of making the objects, technical solutions and advantages of the present invention clearer, the present invention is described below by way of specific examples in the accompanying drawings. It should be understood that the description is only illustrative and is not intended to limit the scope of the invention. In this case, in order to avoid obscuring the present invention due to unnecessary details, only the structures and processing steps closely related to the aspects of the present invention are shown in the drawings, and other details not greatly related to the present invention are omitted.
As shown in fig. 1, a specific embodiment of the present invention includes the steps of:
step 1: and converting the three-dimensional spine CT image into two-dimensional spine slices, and renaming and adjusting the sizes. The dataset of the present invention is the MICCAI VerSe2020 dataset, which is a disclosure of the medical image calculation and computer-aided intervention international conference sponsored cone labeling and segmentation challenges in 2020, verSe2020 includes annotated spinal Computed Tomography (CT) images from 300 subjects, 4142 of which were fully visualized and annotated vertebrae, collected from multiple centers of four different scanner manufacturers, and which were rich in cases exhibiting anatomical variations. The metadata includes vertebrae labeling information, voxel level segmentation masks obtained by a man-machine mixing algorithm, and anatomical ratings. The method comprises the steps of selecting 80 cases of data as experimental data, converting three-dimensional data into two-dimensional slice data, naming the two-dimensional slice data again in a verse img format, representing the number of the patient, eliminating some non-pixel-point slices in the beginning and ending slices of the two-dimensional spine slice in order to reduce the influence of non-pixel-point vertebral slices in the image on image segmentation, enabling the number of the saved slices of the three-dimensional spine CT image of each patient to be 500, uniformly adjusting the size of the two-dimensional spine slice to 256×256, and changing the channel number to 1.
Step 2: and (3) carrying out data preprocessing on the two-dimensional spine slice, and completing the division of the data set. The data preprocessing comprises the steps of dividing 40000 slices of 80 treated patients into 8:1:1, a training set, a verification set and a test set are divided proportionally, a bilateral filtering method is used for enhancing slice contrast, enhancing edge characteristics of a spine image, carrying out normalization operation, and improving comparability among data indexes.
Bilateral filtering: an edge-preserving filtering algorithm that adds a pixel-value weight term to Gaussian filtering, taking into account both the spatial distance and the difference of pixel values; the closer two pixel values are, the larger the weight.

The filtering result of the filter BF is:

BF[I]_p = (1 / W_p) * Σ_{q ∈ S} G_s(‖p - q‖) G_r(|I_p - I_q|) I_q

wherein W_p is the sum of the weights of each pixel value within the filter window, normalizing the weights:

W_p = Σ_{q ∈ S} G_s(‖p - q‖) G_r(|I_p - I_q|)

G_s is the spatial distance weight:

G_s(x) = exp(-x² / (2σ_s²))

G_r is the pixel-value weight:

G_r(x) = exp(-x² / (2σ_r²))

Normalization operation:

output = (input - min(input)) / (max(input) - min(input))

wherein input represents an input image pixel value; max(·) and min(·) represent the maximum and the minimum of the input pixels, respectively; output is the output image pixel value. Normalization adjusts the image pixels into the [0, 1] interval.

Standardization:

output = (input - mean(input)) / std(input)

wherein mean(input) represents the pixel mean of the input image and std(input) represents the standard deviation of its pixels. Standardization maps the image pixels into the [-1, 1] interval.
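A direct (unoptimised) implementation of the bilateral filter and the min-max normalization above might look like the following sketch. The window radius and the values of σ_s and σ_r are illustrative assumptions; the patent does not give parameter values.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Brute-force bilateral filter matching the formulas above: each
    output pixel is a weighted mean whose weights combine the spatial
    Gaussian G_s and the pixel-value (range) Gaussian G_r."""
    img = img.astype(np.float64)
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    # the spatial weights depend only on the window offsets: precompute G_s
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g_s = np.exp(-(yy**2 + xx**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            g_r = np.exp(-(window - img[i, j])**2 / (2 * sigma_r**2))
            weights = g_s * g_r
            out[i, j] = (weights * window).sum() / weights.sum()  # divide by W_p
    return out

def min_max_normalize(img):
    """Map pixel values into [0, 1], as in the normalization formula."""
    img = img.astype(np.float64)
    return (img - img.min()) / (img.max() - img.min())
```

Production code would use a vectorised or library implementation; the loop form is kept here so the correspondence with the formulas is visible.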
As shown in fig. 2, step 3: the U-Net network model is used as the baseline network and the single input is changed into three consecutive spine slices, forming the three-input network framework DD-Net. DD-Net consists of three branches; the encoding side of each branch is composed of a multi-scale feature extraction module and a pooling layer, and the decoding side of an upsampling module and an attention module. Since the feature content to be extracted is the same for each slice, the encoder weights of the three input slices are shared. During segmentation, DD-Net processes the slices sequentially. The slice currently being processed is called archslice; if it is neither the first nor the last slice, its two adjacent slices are selected as preslice and subslice. If it is the first slice, the next slice is used twice, as both preslice and subslice; if it is the last slice, the previous slice is used twice, as both preslice and subslice.
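The boundary handling just described can be expressed as a small helper; the function name make_triplet is hypothetical, introduced only for illustration.

```python
def make_triplet(slices, idx):
    """Pick (preslice, archslice, subslice) for the slice at position idx,
    duplicating the neighbour at the sequence boundaries as described in
    step 3 of the method."""
    arch = slices[idx]
    if idx == 0:                   # first slice: repeat the next one twice
        pre = sub = slices[1]
    elif idx == len(slices) - 1:   # last slice: repeat the previous one twice
        pre = sub = slices[-2]
    else:
        pre, sub = slices[idx - 1], slices[idx + 1]
    return pre, arch, sub
```

Iterating make_triplet over a patient's slice sequence yields one three-input sample per slice, so the number of samples equals the number of slices.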
The archslice enters the second branch of the DD-Net network for image segmentation processing, while preslice and subslice enter the first and third branches respectively; they are fused with the features of archslice at the encoding side, and at the decoding side they generate an attention weight map for archslice, namely the AT module of the invention.
Feature fusion is performed on archslice at the encoding side: the high-level feature maps of preslice and subslice after each downsampling layer are concatenated and fused with the corresponding high-level feature map of archslice.
As shown in fig. 3, step 4: the added multi-scale feature extraction module is an Inception V2 module with four branches. The first branch applies a 1×1 convolution to the input; the second branch applies 3×3 max pooling followed by a 1×1 convolution; the third branch applies a 1×1 convolution followed by a 3×3 convolution; and the fourth branch applies a 1×1 convolution followed by two 3×3 convolutions. The two 3×3 convolutions replace a single 5×5 convolution, increasing the depth of the network while reducing the number of parameters. All convolution layers of this structure use the ReLU activation function.
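The parameter saving from replacing one 5×5 convolution with two stacked 3×3 convolutions (which cover the same 5×5 receptive field) can be checked with simple arithmetic. The channel width C = 64 is an illustrative assumption and biases are ignored.

```python
def conv_params(k, c_in, c_out):
    """Weight count of a single k x k convolution layer (bias ignored)."""
    return k * k * c_in * c_out

# Two stacked 3x3 convolutions vs. one 5x5 convolution, with the channel
# width C kept constant throughout (assumed here; the patent does not state
# the channel counts of the module):
C = 64
two_3x3 = 2 * conv_params(3, C, C)   # 2 * 9 * C * C
one_5x5 = conv_params(5, C, C)       # 25 * C * C
```

The ratio 18/25 holds for any equal channel width, which is the standard argument for the factorisation used in Inception V2.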
As shown in fig. 4, step 5: an attention module, the AT module of the invention, is integrated at the decoding end of DD-Net. It exploits the spatial correlation of features to generate a spatial attention map that focuses on information-rich regions. First, average pooling and max pooling are applied to preslice and subslice respectively and the results are concatenated, producing efficient feature maps T1 and T2. The two maps are added element-wise to produce a feature descriptor, and a convolution layer applied to this descriptor generates a spatial attention map M(F) ∈ R^(H×W), which encodes the features to be emphasized or suppressed in the regions of the adjacent slices. A sigmoid activation function then yields the spatial attention weight map M_s. Multiplying M_s by x gives the attention feature map α carrying the feature weights, which highlights the boundary features. Finally, α, x and a_1 are fused by feature addition.
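A minimal numpy sketch of the AT module's data flow is given below. The 1×1 convolution is replaced by fixed stand-in weights, since the patent gives neither kernel sizes nor learned parameters; feature maps are (C, H, W) arrays, and all numeric choices are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def at_module(pre_feat, sub_feat, x):
    """Sketch of the AT module data flow: pool the adjacent-slice features,
    add the pooled descriptors, map them to a spatial attention weight map
    and reweight the archslice features x with it."""
    def pooled(f):
        avg = f.mean(axis=0)        # channel-wise average pooling -> (H, W)
        mx = f.max(axis=0)          # channel-wise max pooling     -> (H, W)
        return np.stack([avg, mx])  # concatenated efficient feature map, (2, H, W)

    t1, t2 = pooled(pre_feat), pooled(sub_feat)   # T1 and T2
    desc = t1 + t2                                # element-wise addition
    w = np.array([0.5, 0.5])                      # stand-in 1x1 conv weights (assumed)
    m_f = np.tensordot(w, desc, axes=1)           # spatial attention map M(F) in R^(HxW)
    m_s = sigmoid(m_f)                            # spatial attention weight map M_s
    alpha = m_s[np.newaxis] * x                   # attention feature map with weights
    return alpha
```

In the actual network, α would then be added to x and a_1 as described above; that final fusion is omitted here because a_1 comes from another part of the decoder.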
Step 5: through an ablation experiment, an acceptance V2 module and an AT module of the invention are added into the DD-Net network to obtain a network optimal model. Because different evaluation indexes have different estimation meanings on experimental results, the invention uses a plurality of segmentation Precision indexes for analysis, wherein the evaluation indexes comprise dice similarity coefficients (Dice Similarity Coefficient, DSC), cross-over ratios (Intersection over Union, ioU), accuracy and Recall rate (Recall). Wherein the formula is as follows:
wherein TP represents a positive sample of true positive, FP represents a negative sample of false positive, FN represents a positive sample of false negative, and TN represents a negative sample of true negative.
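The four evaluation indexes can be computed from binary masks as a straightforward implementation of the TP/FP/FN counts; the function name is hypothetical.

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Compute DSC, IoU, Precision and Recall for binary segmentation
    masks from the true/false positive/negative counts."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)    # predicted foreground, actually foreground
    fp = np.sum(pred & ~target)   # predicted foreground, actually background
    fn = np.sum(~pred & target)   # missed foreground
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn),
        "IoU": tp / (tp + fp + fn),
        "Precision": tp / (tp + fp),
        "Recall": tp / (tp + fn),
    }
```

Note that DSC and IoU are monotonically related (DSC = 2·IoU / (1 + IoU)), which is why ablation tables usually rank models identically under both.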
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted for clarity only. The specification should be taken as a whole, and the technical solutions of the various embodiments may be combined as appropriate to form other embodiments that will be apparent to those skilled in the art.

Claims (9)

1. A three-input spine CT image segmentation method, characterized in that it comprises the following steps:
step 1: converting the three-dimensional spine CT image into two-dimensional spine slices, and renaming and adjusting the sizes;
step 2: performing data preprocessing on the two-dimensional spine slice;
step 3: the U-Net network model is used as the baseline network, and the single input is changed into three consecutive spine slices, i.e., a three-input network framework; this is the DD-Net network framework proposed by the invention;
step 4: a multi-scale feature extraction module is incorporated at the encoding end of DD-Net to increase the global feature extraction capability;
step 5: an attention module, the AT module of the invention, is integrated at the decoding end of DD-Net;
step 6: and obtaining a network optimal model through an ablation experiment.
2. The method for segmenting a three-input spine CT image according to claim 1, wherein: in step 1, the slices are renamed in the format verse_img_ followed by the patient number; the two-dimensional slices are then screened, and slices at the beginning and end of the sequence that contain no vertebral pixels are removed, so that 500 slices of each patient's three-dimensional spine image are stored; finally, the two-dimensional spine slices are uniformly resized to 256×256 and the number of channels is changed to 1.
3. The method for segmenting a three-input spine CT image according to claim 1, wherein: in step 2, the data preprocessing comprises dividing the slices of the 80 processed patients into a training set, a validation set and a test set at an 8:1:1 ratio, enhancing the slice contrast with a bilateral filtering method, and carrying out a normalization operation to improve the comparability among data indexes.
4. The method for segmenting a three-input spine CT image according to claim 1, wherein: in step 3, the DD-Net network framework changes the single input into three consecutive spine slices; three branches are arranged in the DD-Net network, the encoding side of each branch consists of a multi-scale feature extraction module and a pooling layer, and the decoding side consists of an upsampling module and an attention module; since the feature content to be extracted is the same for each slice, the encoder weights of the three input slices are shared.
5. The method for three-input spine CT image segmentation of claim 4, wherein: in step 3, the three consecutive slices are named as follows: the middle slice, which is the slice currently being processed, is named archslice and enters the second branch of the DD-Net network for image segmentation processing; the front and back adjacent slices are named preslice and subslice and enter the first and third branches, performing feature fusion with archslice at the encoding side and generating an attention weight map for archslice at the decoding side, namely the AT module of the invention.
6. The method for three-input spine CT image segmentation of claim 4, wherein: feature fusion is performed on archslice at the encoding side, i.e., the high-level feature maps of the front and back adjacent slices preslice and subslice are concatenated and fused with that of the middle slice archslice to avoid detail loss during downsampling.
7. The method for segmenting a three-input spine CT image according to claim 1, wherein: in step 4, the added multi-scale feature extraction module is an Inception V2 module.
8. The three-input spine CT image segmentation method according to claim 1, wherein: in step 5, an attention AT module is integrated on the decoding side; it uses the adjacent slices to generate an attention feature map with feature weights that highlights boundary features.
9. The method for segmenting a three-input spine CT image according to claim 1, wherein: in step 6, an Inception V2 module and the AT module of the invention are added to the DD-Net network through ablation experiments to obtain the optimal network model.
CN202310338594.6A 2023-03-31 2023-03-31 Three-input spine CT image segmentation method Pending CN116452618A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310338594.6A CN116452618A (en) 2023-03-31 2023-03-31 Three-input spine CT image segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310338594.6A CN116452618A (en) 2023-03-31 2023-03-31 Three-input spine CT image segmentation method

Publications (1)

Publication Number Publication Date
CN116452618A (en) 2023-07-18

Family

ID=87129559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310338594.6A Pending CN116452618A (en) 2023-03-31 2023-03-31 Three-input spine CT image segmentation method

Country Status (1)

Country Link
CN (1) CN116452618A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116630466A (en) * 2023-07-26 2023-08-22 济南大学 Spine CT-MR conversion method and system based on a generative adversarial network
CN116630466B (en) * 2023-07-26 2023-10-24 济南大学 Spine CT-MR conversion method and system based on a generative adversarial network
CN116958556A (en) * 2023-08-01 2023-10-27 东莞理工学院 Dual-channel complementary spine image segmentation method for vertebral body and intervertebral disc segmentation
CN116958556B (en) * 2023-08-01 2024-03-19 东莞理工学院 Dual-channel complementary spine image segmentation method for vertebral body and intervertebral disc segmentation

Similar Documents

Publication Publication Date Title
US11887311B2 (en) Method and apparatus for segmenting a medical image, and storage medium
CN110889853B (en) Tumor segmentation method based on residual error-attention deep neural network
TWI715117B (en) Method, device and electronic apparatus for medical image processing and storage mdeium thereof
CN112927240B (en) CT image segmentation method based on improved AU-Net network
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
CN105957063B (en) CT image liver segmentation method and system based on multiple dimensioned weighting similarity measure
CN109063710A (en) Based on the pyramidal 3D CNN nasopharyngeal carcinoma dividing method of Analysis On Multi-scale Features
CN109389584A (en) Multiple dimensioned rhinopharyngeal neoplasm dividing method based on CNN
Wang et al. Laplacian pyramid adversarial network for face completion
CN116452618A (en) Three-input spine CT image segmentation method
CN110889852A (en) Liver segmentation method based on residual error-attention deep neural network
CN109447963A (en) A kind of method and device of brain phantom identification
CN110009656B (en) Target object determination method and device, storage medium and electronic device
CN114066913B (en) Heart image segmentation method and system
CN111080670A (en) Image extraction method, device, equipment and storage medium
CN113393469A (en) Medical image segmentation method and device based on cyclic residual convolutional neural network
CN110570394A (en) medical image segmentation method, device, equipment and storage medium
Shan et al. SCA-Net: A spatial and channel attention network for medical image segmentation
CN115830163A (en) Progressive medical image cross-mode generation method and device based on deterministic guidance of deep learning
CN113450359A (en) Medical image segmentation, display, model training methods, systems, devices, and media
CN112991365B (en) Coronary artery segmentation method, system and storage medium
CN114494289A (en) Pancreatic tumor image segmentation processing method based on local linear embedded interpolation neural network
CN117152173A (en) Coronary artery segmentation method and system based on DUNetR model
CN111967462A (en) Method and device for acquiring region of interest
CN113177938B (en) Method and device for segmenting brain glioma based on circular convolution kernel and related components

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination