WO2020119679A1 - Three-dimensional left atrium segmentation method, device, terminal device and storage medium - Google Patents

Three-dimensional left atrium segmentation method, device, terminal device and storage medium

Info

Publication number
WO2020119679A1
WO2020119679A1 PCT/CN2019/124311 CN2019124311W WO2020119679A1 WO 2020119679 A1 WO2020119679 A1 WO 2020119679A1 CN 2019124311 W CN2019124311 W CN 2019124311W WO 2020119679 A1 WO2020119679 A1 WO 2020119679A1
Authority
WO
WIPO (PCT)
Prior art keywords
roi region
left atrium
segmented
segmentation
magnetic resonance
Prior art date
Application number
PCT/CN2019/124311
Other languages
English (en)
French (fr)
Inventor
廖祥云
司伟鑫
孙寅紫
王琼
王平安
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院
Publication of WO2020119679A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image

Definitions

  • the present application belongs to the technical field of medical image processing, and particularly relates to a three-dimensional left atrium segmentation method, device, terminal device, and computer-readable storage medium.
  • A medical image refers to image data acquired by medical imaging equipment such as computed tomography (CT), magnetic resonance imaging (MRI), B-mode ultrasound or positron emission tomography (PET); it is generally three-dimensional image data composed of two-dimensional slices. Medical image segmentation is a key means of processing medical images. It refers to distinguishing different regions with special meaning in a medical image, where these regions do not intersect each other and each region satisfies the consistency of a specific region.
  • Atrial fibrillation (AF) is a common type of arrhythmia. Due to the limited understanding of the structure of the human atrium, current treatment outcomes for atrial fibrillation are poor. Gadolinium contrast agents are used in MRI scans to improve the clarity of images of patients' internal structures, and gadolinium-enhanced magnetic resonance imaging (GE-MRI) is an important tool for evaluating atrial fibrosis.
  • To better understand the atrial structure, atrial segmentation of MRI images is often required.
  • Segmenting the left atrium (LA) from three-dimensional GE-MRI images is very challenging.
  • The poor contrast between the left atrium and the background reduces the visibility of the LA border; during scanning, the patient's irregular respiratory rhythm and heart rate variability may degrade image quality.
  • With the rapid development of deep learning, several fully automatic LA segmentation methods have been proposed.
  • For example, in a multi-view convolutional neural network with an adaptive fusion strategy, the three-dimensional data can be parsed into two-dimensional components from the axial, sagittal and coronal planes, and each component can then be analyzed by a multi-view convolutional neural network.
  • An extended residual network and a sequential learning network built from ConvLSTM can also be used to extend the multi-view learning strategy.
  • However, existing methods for segmenting the left atrium from three-dimensional GE-MRI images perform poorly.
  • Embodiments of the present application provide a three-dimensional left atrium segmentation method, device, terminal device, and computer-readable storage medium to solve the problem of the poor performance of existing cardiac magnetic resonance image segmentation methods.
  • a first aspect of an embodiment of the present application provides a three-dimensional left atrium segmentation method, including:
  • An ROI region is segmented from the cardiac magnetic resonance image to be segmented, the ROI region is a region containing a three-dimensional left atrium;
  • the hierarchical aggregation network model is a U-Net convolutional neural network model including an encoder path and a decoder path.
  • the encoder path includes at least one hierarchical aggregation module, and the hierarchical aggregation module includes a hierarchical aggregation unit as the backbone branch and an attention unit as the mask branch.
  • the segmenting the ROI region from the cardiac magnetic resonance image to be segmented includes:
  • the pre-trained U-Net convolutional neural network is used to detect the cardiac magnetic resonance image to be segmented to obtain the ROI region detection result; wherein each stage of the pre-trained U-Net convolutional neural network has two convolutional layers;
  • the hierarchical aggregation unit includes a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, and a fifth convolutional layer, and each convolutional layer is followed by batch normalization and a rectified linear unit;
  • the first convolutional layer and the second convolutional layer are cascaded (concatenated), and the cascade result is input to the third convolutional layer;
  • a convolution operation is applied to the third convolutional layer to obtain the fourth convolutional layer, and the third convolutional layer and the fourth convolutional layer are connected to generate the fifth convolutional layer;
  • the attention unit includes a sixth convolutional layer, a seventh convolutional layer, and a sigmoid layer, and the sixth convolutional layer is connected in sequence through batch normalization and the rectified linear unit to the seventh convolutional layer;
  • the sigmoid layer is $M_{i,c}(\chi)=\frac{1}{1+\exp(-\chi_{i,c})}$, where exp denotes the exponential function and $\chi_{i,c}$ denotes the $i$-th value of the feature map on the $c$-th channel;
  • the output of the hierarchical aggregation module is $H_{i,c}(\chi)=M_{i,c}(\chi)\odot F_{i,c}(\chi)\oplus F_{i,c}(\chi)$, where $M_{i,c}(\chi)$ denotes the output of the mask branch with a range of [0, 1], $F_{i,c}(\chi)$ denotes the output of the trunk branch, $\odot$ denotes the dot product, and $\oplus$ denotes element-wise summation.
  • before acquiring the cardiac magnetic resonance image to be segmented, the method further includes:
  • the segmenting the corresponding target ROI region from the training sample according to the label information includes:
  • the corresponding target ROI region is segmented from the training sample.
  • the segmenting the corresponding target ROI region from the training sample according to the detection result of the target ROI region and the label information includes:
  • the target ROI area is expanded to a preset target area
  • the target ROI region is cropped from the training sample.
  • a second aspect of an embodiment of the present application provides a three-dimensional left atrium segmentation device, including:
  • the acquisition module is used to acquire the magnetic resonance image of the heart to be segmented
  • An ROI region segmentation module configured to segment an ROI region from the cardiac magnetic resonance image to be segmented, the ROI region being a region containing a three-dimensional left atrium;
  • a segmentation module for inputting the ROI region into a pre-trained hierarchical aggregation network model to obtain the segmentation result of the cardiac magnetic resonance image to be segmented;
  • the hierarchical aggregation network model is a U-Net convolutional neural network model including an encoder path and a decoder path.
  • the encoder path includes at least one hierarchical aggregation module, and the hierarchical aggregation module includes a hierarchical aggregation unit as the backbone branch and an attention unit as the mask branch.
  • the ROI region segmentation module includes:
  • the ROI region detection unit is used to detect the cardiac magnetic resonance image to be segmented through the pre-trained U-Net convolutional neural network to obtain the ROI region detection result; wherein each stage of the pre-trained U-Net convolutional neural network has two convolutional layers;
  • the ROI region cropping unit is configured to crop the cardiac magnetic resonance image to be segmented according to the ROI region detection result to obtain the ROI region.
  • the hierarchical aggregation unit includes a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, and a fifth convolutional layer, and each convolutional layer is followed by batch normalization and a rectified linear unit;
  • the first convolutional layer and the second convolutional layer are cascaded (concatenated), and the cascade result is input to the third convolutional layer;
  • a convolution operation is applied to the third convolutional layer to obtain the fourth convolutional layer, and the third convolutional layer and the fourth convolutional layer are connected to generate the fifth convolutional layer;
  • the attention unit includes a sixth convolutional layer, a seventh convolutional layer, and a sigmoid layer, and the sixth convolutional layer is connected in sequence through batch normalization and the rectified linear unit to the seventh convolutional layer;
  • the sigmoid layer is $M_{i,c}(\chi)=\frac{1}{1+\exp(-\chi_{i,c})}$, where exp denotes the exponential function and $\chi_{i,c}$ denotes the $i$-th value of the feature map on the $c$-th channel;
  • the output of the hierarchical aggregation module is $H_{i,c}(\chi)=M_{i,c}(\chi)\odot F_{i,c}(\chi)\oplus F_{i,c}(\chi)$, where $M_{i,c}(\chi)$ denotes the output of the mask branch with a range of [0, 1], $F_{i,c}(\chi)$ denotes the output of the trunk branch, $\odot$ denotes the dot product, and $\oplus$ denotes element-wise summation.
  • the method further includes:
  • a training sample acquisition module for acquiring training samples and label information corresponding to the training samples
  • a target ROI region segmentation module used to segment the corresponding target ROI region from the training sample according to the label information
  • the training module is configured to perform model training on the pre-established hierarchical aggregation network model according to the target ROI region.
  • the target ROI region segmentation module includes:
  • a first adjusting unit configured to uniformly adjust the training samples to a first preset shape
  • a detection unit configured to input the training samples of the first preset shape into a pre-trained U-Net convolutional neural network to obtain the detection result of the target ROI region;
  • a second adjustment unit configured to adjust each training sample from the first preset shape to the original shape of each training sample
  • a segmentation unit is used to segment the corresponding target ROI region from the training sample according to the detection result of the ROI region and the label information.
  • the segmentation unit includes:
  • a judging subunit, used for judging whether the detection result of the target ROI region contains the label information;
  • an expansion subunit, configured to expand the target ROI region to a preset target area when the detection result of the target ROI region does not contain the label information;
  • a first cropping subunit, configured to crop the preset target area from the training sample and use the preset target area as the target ROI region;
  • a second cropping subunit, configured to crop the target ROI region from the training sample when the detection result of the target ROI region contains the label information.
  • a third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the three-dimensional left atrium segmentation method according to any one of the above first aspects.
  • a fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the three-dimensional left atrium segmentation method according to any one of the above first aspects.
  • By segmenting the ROI region containing the three-dimensional left atrium from the cardiac magnetic resonance image to be segmented and using the ROI region as the input of the network model, the amount of calculation can be greatly reduced and the interference of the image background can also be greatly reduced, thereby improving the efficiency and accuracy of three-dimensional left atrium segmentation; the hierarchical aggregation network model is then used to segment the ROI region.
  • The hierarchical aggregation network model iteratively merges consecutive layers of different depths at the same stage through the hierarchical aggregation module, which improves the shallow-and-deep feature fusion ability of the network model and yields better fused information; the attention unit is also attached as a mask branch to each stage of the encoder, so that shallow spatial information can be used to gradually enhance deep contour features that carry rich semantic information.
  • By combining hierarchical aggregation with the attention mechanism, the efficiency and accuracy of 3D left atrium segmentation are greatly improved.
  • FIG. 1 is a schematic flowchart of a three-dimensional left atrium segmentation method according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of an attention-based hierarchical aggregation network structure provided by an embodiment of the present application
  • FIG. 3 is a schematic block diagram of another process of a three-dimensional left atrium segmentation method according to an embodiment of the present application.
  • FIG. 4 is a schematic block diagram of the specific process of step S302 provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of a comparison result of the dice values of UNet-2 and HAANet-3 provided by an embodiment of the present application;
  • FIG. 6 is a schematic diagram of a comparison between a two-dimensional segmentation result and a three-dimensional segmentation result provided by an embodiment of this application;
  • FIG. 7 is a schematic structural block diagram of a three-dimensional left atrium segmentation device according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a terminal device provided by an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a three-dimensional left atrium segmentation method according to an embodiment of the present application.
  • the method may include the following steps:
  • Step S101 Acquire a cardiac magnetic resonance image to be segmented.
  • Step S102 an ROI region is segmented from the cardiac magnetic resonance image to be segmented, and the ROI region is a region including a three-dimensional left atrium.
  • The ROI region (region of interest) contains the entire three-dimensional volume of the left atrium; within the ROI region, the left atrium accounts for a large percentage and the image background accounts for a very small percentage.
  • In general, however, in each cardiac magnetic resonance image to be segmented, the left atrium accounts for only a small percentage and the background region accounts for a large percentage.
  • In other words, most of the volume data in the cardiac magnetic resonance image to be segmented is useless for the left atrium segmentation task. If the entire image were used as the input of the network model, a large amount of useless data would participate in the calculation; the heavy computation would reduce efficiency and would also be a huge waste of computing resources. In addition, because the contrast between the left atrium region and the background is low, the large background region and surrounding tissues would strongly interfere with the segmentation and reduce its accuracy.
  • After the ROI region is segmented from the image, it is used as the basis for subsequent calculation and segmentation, which greatly reduces the amount of computation, helps reduce the influence of surrounding tissues and background on the segmentation, and thus greatly improves segmentation efficiency and accuracy.
  • In some embodiments, a U-Net convolutional neural network may be used to detect the cardiac magnetic resonance image to be segmented to obtain an estimate of the left atrium, and the corresponding region is then cropped out as the ROI region. Therefore, the specific process of segmenting the ROI region from the cardiac magnetic resonance image to be segmented may include:
  • detecting the cardiac magnetic resonance image to be segmented with the pre-trained U-Net convolutional neural network to obtain an ROI region detection result, wherein each stage of the pre-trained U-Net convolutional neural network has two convolutional layers; and cropping the cardiac magnetic resonance image to be segmented according to the ROI region detection result to obtain the ROI region.
  • The U-Net convolutional neural network refers to a convolutional neural network whose overall structure resembles the letter "U"; it can be regarded as a variant of the convolutional neural network and specifically includes a contracting path and an expanding path.
  • Each stage of the U-Net network here has two convolutional layers.
  • The specific network structure of U-Net is well known to those skilled in the art and will not be repeated here.
  • the U-Net convolutional neural network is trained in advance using training samples. After the training is completed, the U-Net convolutional neural network is used as an ROI detection network to segment the corresponding ROI area.
  • In a specific application, the cardiac magnetic resonance image to be segmented can be resized to a fixed shape and then input to the trained U-Net convolutional neural network to obtain the network output.
  • This output is a rough prediction.
  • From this prediction, the position of the left atrium region within the whole cardiac magnetic resonance image to be segmented can be located.
  • After the left atrium region is located, the cardiac magnetic resonance image to be segmented can be resized back to its original shape, and the corresponding area is then cropped; the cropped area is the ROI region.
  • It can be seen that the U-Net convolutional neural network is used here as the ROI detection network.
  • By exploiting the characteristics of U-Net in medical image processing, the efficiency and accuracy of left atrial segmentation can be further improved.
  • Of course, other ROI region segmentation methods can also be used, which is not limited here.
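For illustration only, the following is a minimal sketch of this detect-then-crop scheme, not the patented implementation itself: a pre-trained ROI-detection U-Net is run on a resized volume, the rough prediction is resized back, and a bounding box around the predicted left atrium is cropped. The function names (`extract_roi`, `roi_unet`), the fixed network input shape, the margin, and the use of NumPy/SciPy are assumptions made only for this sketch.

```python
import numpy as np
from scipy.ndimage import zoom

def extract_roi(volume, roi_unet, net_shape=(64, 128, 128), margin=8):
    """Detect a rough left-atrium region with a pre-trained U-Net and crop the ROI.

    volume    : 3D numpy array (z, height, width), the cardiac MR image to segment
    roi_unet  : a pre-trained Keras model whose predict() returns a probability volume
    net_shape : fixed input shape expected by the detection network (assumed value)
    margin    : voxels of context kept around the detected bounding box (assumed value)
    """
    original_shape = volume.shape

    # 1) Resize the image to the fixed shape expected by the ROI-detection U-Net.
    scale = [n / o for n, o in zip(net_shape, original_shape)]
    resized = zoom(volume, scale, order=1)

    # 2) Run the detection network to obtain a rough prediction of the left atrium.
    prob = roi_unet.predict(resized[np.newaxis, ..., np.newaxis])[0, ..., 0]

    # 3) Resize the rough prediction back toward the original image shape.
    prob_full = zoom(prob, [1.0 / s for s in scale], order=1)
    mask = prob_full > 0.5

    if not mask.any():
        # Fall back to the whole image if nothing was detected.
        return volume, tuple(slice(0, s) for s in original_shape)

    # 4) Crop a bounding box (plus a safety margin) around the detected region.
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, original_shape)
    roi_slices = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
    return volume[roi_slices], roi_slices
```

The cropped volume returned by such a routine would then be the input to the hierarchical aggregation network model described next.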
  • Step S103 Input the ROI region into the pre-trained hierarchical aggregation network model to obtain the segmentation result of the cardiac magnetic resonance image to be segmented.
  • the hierarchical aggregation network model is a U-Net convolutional neural network model including an encoder path and a decoder path.
  • the encoder path includes at least one hierarchical aggregation module.
  • the hierarchical aggregation module includes a hierarchical aggregation unit as the backbone branch and an attention unit as the mask branch.
  • The hierarchical aggregation network model refers to a three-dimensional convolutional neural network that combines hierarchical aggregation and an attention mechanism.
  • The network can be named the attention-based hierarchical aggregation network (HAANet).
  • The hierarchical aggregation network model is based on the U-Net convolutional neural network and includes an encoder path and a decoder path.
  • The encoder path includes at least one attention-based hierarchical aggregation module (HAAM).
  • The hierarchical aggregation module includes a hierarchical aggregation unit (HAU) as the backbone branch and an attention unit (AU) as the mask branch.
  • The decoder path is the same as in U-Net and is composed of multiple repeated convolutional layers, each followed by batch normalization (BN) and a rectified linear unit (ReLU).
  • The hierarchical aggregation unit (HAU) may include a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, and a fifth convolutional layer, each followed by batch normalization (BN) and a rectified linear unit (ReLU). The first convolutional layer is concatenated with the second convolutional layer, and the concatenation result is input to the third convolutional layer; a convolution operation is applied to the third convolutional layer to obtain the fourth convolutional layer, and the third convolutional layer and the fourth convolutional layer are connected to generate the fifth convolutional layer.
  • In the hierarchical aggregation unit, the kernel size of all convolution operations can be set to 3x3x3, and the stride can be (2, 2, 2).
  • Deeper layers in a neural network contain more semantic information, while shallower layers contain more spatial information.
  • Using hierarchical fusion can improve the ability of hierarchical feature representation.
  • Because the three-dimensional image segmentation task is computationally demanding, a different number of layers can be aggregated at each stage; here, three layers of different depths are aggregated at each stage to form an HAU.
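To make the structure of the hierarchical aggregation unit concrete, here is a minimal tf.keras sketch of a three-layer HAU in the spirit of the description above: two convolutions are cascaded (concatenated) and fed to a third, whose output is aggregated with a further convolution. The exact wiring, the channel count, and the use of stride-1 convolutions inside the unit are assumptions; the text only fixes the 3x3x3 kernels and the BN+ReLU after every convolution.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_bn_relu(x, filters, kernel_size=3, name=None):
    """3x3x3 convolution followed by batch normalization and ReLU,
    as used after every convolutional layer in the HAU."""
    x = layers.Conv3D(filters, kernel_size, padding="same", name=name)(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

def hierarchical_aggregation_unit(x, filters):
    """Backbone (trunk) branch of the HAAM: aggregates three layers of
    different depths within one stage, roughly following FIG. 2(b).

    l1 and l2 are cascaded (concatenated) and fed to l3; a further
    convolution on l3 gives l4, and l3 and l4 are connected to form l5,
    the output of the unit."""
    l1 = conv_bn_relu(x, filters)               # first convolutional layer
    l2 = conv_bn_relu(l1, filters)              # second convolutional layer
    cascade = layers.Concatenate()([l1, l2])    # cascade of l1 and l2
    l3 = conv_bn_relu(cascade, filters)         # third convolutional layer
    l4 = conv_bn_relu(l3, filters)              # fourth convolutional layer
    merged = layers.Concatenate()([l3, l4])     # connect l3 and l4
    l5 = conv_bn_relu(merged, filters)          # fifth convolutional layer
    return l5
```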
  • The attention unit (AU) includes a sixth convolutional layer, a seventh convolutional layer, and a sigmoid layer.
  • The sixth convolutional layer is connected in sequence through batch normalization (BN) and the rectified linear unit (ReLU) to the seventh convolutional layer.
  • The sigmoid layer is $M_{i,c}(\chi)=\frac{1}{1+\exp(-\chi_{i,c})}$, where exp denotes the exponential function and $\chi_{i,c}$ denotes the $i$-th value of the feature map on the $c$-th channel.
  • The shape of the left atrium can differ greatly between patients, and the down-sampling operations of a convolutional neural network tend to lose spatial information as the depth increases; both factors can affect segmentation accuracy. To alleviate this, the attention mechanism is integrated into the encoder network as a mask branch at each stage. Through the attention unit, the values of the feature map are normalized to obtain an attention mask.
  • The output of the hierarchical aggregation module (HAAM) is $H_{i,c}(\chi)=M_{i,c}(\chi)\odot F_{i,c}(\chi)\oplus F_{i,c}(\chi)$, where $M_{i,c}(\chi)$ denotes the output of the mask branch with a range of [0, 1], $F_{i,c}(\chi)$ denotes the output of the trunk branch, $\odot$ denotes the dot product, and $\oplus$ denotes element-wise summation.
  • After the attention mask is obtained, a dot-product operation can be performed between the mask branch and the trunk branch. Because the values of the attention mask range from 0 to 1, repeatedly applying the mask branch by dot product would keep reducing the values of the feature map, and the attention mask might destroy the good properties of the trunk branch. To solve this problem, residual learning is used to apply an identity mapping between the input and output of the trunk branch.
  • As shown in FIG. 2, (a) is the overall network structure of HAANet, (b) is the structure of the HAU, (c) is the structure of the AU, and (d) is the structure of the HAAM.
  • The HAAM includes an HAU and an AU.
  • The AU includes a 3x3x3 convolutional layer, batch normalization, ReLU, a 1x1x1 convolutional layer, and a sigmoid. The sigmoid, also called the logistic function, is used for hidden-layer neuron outputs; its value range is (0, 1), so it can map a real number to the interval (0, 1). It can be used for binary classification and works well when the feature differences are complicated or not particularly large.
  • The sigmoid structure layer is not shown in the figure.
  • The HAU includes convolutional layers l1, l2, l3, l4, and l5, and each convolutional layer (Conv) is followed by BN and ReLU. Concat (concatenation) is one of the commonly used operations in convolutional neural networks.
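Continuing the sketch above (and reusing the `hierarchical_aggregation_unit` function defined there), the attention unit and the residual combination of mask and trunk branches could look roughly as follows. Reading the output as M⊙F plus an identity (element-wise) addition of F follows the formula symbols given in this description; the channel counts remain illustrative assumptions.

```python
from tensorflow.keras import layers

def attention_unit(x, filters):
    """Mask branch of the HAAM (cf. FIG. 2(c)): 3x3x3 conv + BN + ReLU,
    then a 1x1x1 conv, then a sigmoid that normalizes the feature map
    into an attention mask with values in (0, 1)."""
    m = layers.Conv3D(filters, 3, padding="same")(x)
    m = layers.BatchNormalization()(m)
    m = layers.Activation("relu")(m)
    m = layers.Conv3D(filters, 1, padding="same")(m)
    return layers.Activation("sigmoid")(m)

def haam_block(x, filters):
    """Attention-based hierarchical aggregation module:
    output = M(x) * F(x) + F(x), i.e. residual attention, so that the
    mask cannot destroy the good features of the trunk branch."""
    trunk = hierarchical_aggregation_unit(x, filters)   # F(x), trunk branch
    mask = attention_unit(x, filters)                   # M(x), mask branch
    attended = layers.Multiply()([mask, trunk])         # element-wise (dot) product
    return layers.Add()([attended, trunk])              # element-wise summation (identity mapping)
```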
  • In this embodiment, the hierarchical aggregation network model uses the hierarchical aggregation module to iteratively merge consecutive layers of different depths at the same stage, which improves the shallow-and-deep feature fusion ability of the network model and yields better fused information.
  • The attention unit is also attached as a mask branch to each stage of the encoder, so that shallow spatial information can be used to gradually enhance deep contour features that carry rich semantic information.
  • By combining hierarchical aggregation with the attention mechanism, the efficiency and accuracy of 3D left atrium segmentation are greatly improved.
  • FIG. 3 is another schematic block diagram of a three-dimensional left atrium segmentation method according to an embodiment of the present application.
  • the method may include the following steps:
  • Step S301 Obtain training samples and label information corresponding to the training samples.
  • the above training sample refers to a cardiac magnetic resonance image including a three-dimensional left atrium region
  • the label information refers to correct segmentation result information corresponding to the training sample.
  • Step S302 according to the label information, segment the corresponding target ROI region from the training sample.
  • Specifically, based on the correct segmentation result information corresponding to the training sample, it is judged whether the segmented target ROI region meets the training requirements. If it does, the segmented target ROI region can be used for subsequent training; if it does not, the target area can be repositioned and the repositioned area cropped as the target ROI region.
  • In some embodiments, referring to the specific flowchart of step S302 shown in FIG. 4, the process of segmenting the corresponding target ROI region from the training sample according to the label information specifically includes:
  • Step S401 The training samples are uniformly adjusted to the first preset shape.
  • the above first preset shape can be set according to actual needs, and its shape is determined by (z-axis, height, width). For example, it can be set to (64, 128, 128).
  • Step S402 Input the training sample of the first preset shape into the pre-trained U-Net convolutional neural network to obtain the detection result of the target ROI region.
  • Step S403 Adjust each training sample from the first preset shape to the original shape of each training sample.
  • Step S404 According to the detection result of the ROI region and the label information, the corresponding target ROI region is segmented from the training sample.
  • That is, after the estimated target ROI region is obtained, the shape of each training sample can be adjusted back to its original shape, i.e., the z-axis, height, width and other parameters of each sample are restored to their original values. The target ROI region is then segmented according to the ROI region detection result and the label information.
  • The output of the pre-trained U-Net convolutional neural network may contain errors, i.e., the located area may not be the actual area of the left atrium. If this area were still cropped as the target ROI region for subsequent training, the trained network model would be inaccurate. In this case, the target area can be relocated, i.e., the area where the left atrium is located is re-determined, and the corresponding area is then cropped as the target ROI region to ensure the training accuracy of the subsequent model.
  • The specific process of segmenting the corresponding target ROI region from the training sample based on the target ROI region detection result and the label information includes: judging whether the target ROI region detection result contains the label information; when the target ROI region detection result does not contain the label information, expanding the target ROI region to a preset target area, cropping the preset target area from the training sample, and using the preset target area as the target ROI region; and when the target ROI region detection result contains the label information, cropping the target ROI region from the training sample.
  • It can be understood that, because the label is the correct segmentation result corresponding to the training sample,
  • if the determined target ROI region does not contain the label, the localization can be considered wrong and the located area is not the actual left atrium region.
  • When the estimate is wrong, the target area can be expanded to ensure that the entire original target, i.e., the three-dimensional left atrium region, is definitely cropped out.
  • If the label is contained in the target ROI region, the localization can be considered correct, i.e., the estimated left atrium region is correct; in this case the estimated area can be cropped directly as the target ROI region.
  • Expanding the target ROI region to the preset target area means expanding on the basis of the estimated target ROI region; after expansion to a certain area, the preset target area can be cropped out and used as the target ROI region, i.e., as the input of the training model.
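A minimal sketch of this label-checked cropping logic is given below. Here "contains the label" is interpreted as "the detected bounding box covers all labelled voxels", and the fixed preset shape centred on the label is an assumption made only for illustration; none of the names are taken from the patent.

```python
import numpy as np

def crop_target_roi(sample, label, detected_box, preset_shape=(96, 160, 160)):
    """Crop the target ROI used for training the hierarchical aggregation network.

    sample       : 3D training volume
    label        : binary ground-truth segmentation of the left atrium
    detected_box : (lo, hi) corner arrays predicted by the ROI-detection U-Net
    preset_shape : size of the expanded preset target area (illustrative value)
    """
    lo, hi = np.asarray(detected_box[0]), np.asarray(detected_box[1])
    label_coords = np.argwhere(label > 0)
    label_lo, label_hi = label_coords.min(axis=0), label_coords.max(axis=0) + 1

    # Does the detected target ROI contain the label (the correct segmentation)?
    contains_label = np.all(lo <= label_lo) and np.all(hi >= label_hi)

    if not contains_label:
        # Localization is considered wrong: expand to a preset target area
        # centred on the label so the whole 3D left atrium is cropped out.
        centre = (label_lo + label_hi) // 2
        half = np.asarray(preset_shape) // 2
        lo = np.clip(centre - half, 0, np.asarray(sample.shape))
        hi = np.clip(centre + half, 0, np.asarray(sample.shape))

    roi = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
    return sample[roi], label[roi]
```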
  • Step S303 Perform model training on the pre-established hierarchical aggregation network model according to the target ROI area.
  • the model is used to perform three-dimensional left atrium segmentation of the cardiac magnetic resonance image to be segmented.
  • Step S304 Acquire a cardiac magnetic resonance image to be segmented.
  • Step S305 Segment the ROI region from the cardiac magnetic resonance image to be segmented.
  • the ROI region is a region including a three-dimensional left atrium.
  • Step S306 Input the ROI area into the pre-trained hierarchical aggregation network model to obtain the segmentation result of the cardiac magnetic resonance image to be segmented.
  • steps S304 to S306 are the same as steps S101 to S103 in the first embodiment described above.
  • The data set used in this experiment consists of the 100 training samples provided for left atrial segmentation in the 2018 Atrial Segmentation Challenge.
  • The original resolution of the data is 0.625 × 0.625 × 0.625 mm³, and each sample has 88 slices along the Z axis.
  • The data of 10 patients were randomly separated from the training set as validation data to test the proposed model, leaving 90 patient datasets for training and 10 for validation.
  • For data augmentation, the data are rotated by 0 to 2π about the Z axis, scaled within the range 0.8 to 1.2, and mirrored and translated along the Z axis.
  • A gamma transform is also used, with a range of 0.8 to 1.3.
  • For intensity normalization, the Z-score criterion is adopted. It is worth noting that not all of these transformations are applied with a 50% probability.
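For reference, a simple NumPy/SciPy sketch of such an augmentation pipeline is shown below; the per-transform application probability, the translation range, and the interpolation orders are assumptions, and only the transforms named in the text (rotation about the Z axis, scaling, Z-axis mirroring and translation, a gamma transform, and Z-score normalization) are included.

```python
import numpy as np
from scipy.ndimage import rotate, zoom, shift

def augment(volume, label, rng, p=0.5):
    """Randomly augment one training volume and its label."""
    if rng.random() < p:                                  # rotation about the Z axis
        angle = rng.uniform(0.0, 360.0)
        volume = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
        label = rotate(label, angle, axes=(1, 2), reshape=False, order=0)
    if rng.random() < p:                                  # isotropic scaling in [0.8, 1.2]
        s = rng.uniform(0.8, 1.2)
        volume, label = zoom(volume, s, order=1), zoom(label, s, order=0)
    if rng.random() < p:                                  # mirror along the Z axis
        volume, label = volume[::-1], label[::-1]
    if rng.random() < p:                                  # translation along the Z axis
        dz = int(rng.integers(-5, 6))
        volume = shift(volume, (dz, 0, 0), order=1)
        label = shift(label, (dz, 0, 0), order=0)
    if rng.random() < p:                                  # gamma transform in [0.8, 1.3]
        gamma = rng.uniform(0.8, 1.3)
        lo, hi = volume.min(), volume.max()
        volume = ((volume - lo) / (hi - lo + 1e-8)) ** gamma * (hi - lo) + lo
    volume = (volume - volume.mean()) / (volume.std() + 1e-8)  # Z-score normalization
    return volume, label

# Example: vol_aug, lab_aug = augment(vol, lab, np.random.default_rng(0))
```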
  • In the implementation, the number of channels of the convolutional layers is set to 8 in the first stage of the network, and the number of channels is doubled after each down-sampling operation up to the fifth stage.
  • The training batch size is set to 8, and the HAANet network is optimized with the Adam algorithm; each epoch has 100 iterations.
  • The learning rate is initialized to 1e-3 and is decayed by a factor of 0.1 when there is no improvement within 5 epochs.
  • The dice loss is used as the loss function; in its standard form it can be expressed as $L_{dice}=1-\frac{2\sum_i y_{true,i}\,y_{pred,i}}{\sum_i y_{true,i}+\sum_i y_{pred,i}}$, where $y_{true}$ represents the training label and $y_{pred}$ represents the output of our network.
  • the HAANet network is implemented by Keras 2.1.5, which uses TensorFlow 1.4 as the backend.
  • The model was trained and tested on an NVIDIA GeForce GTX 1080Ti GPU; development was done on a 64-bit Ubuntu 16.04 platform with an Intel® Core™ i5-7640X CPU @ 4.00 GHz × 4 and 32 GB of memory (RAM).
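The training setup described above (dice loss, Adam, batch size 8, initial learning rate 1e-3, decay by 0.1 after 5 epochs without improvement, 100 iterations per epoch) could be wired up in Keras roughly as follows. The smoothing constant in the dice loss, the choice of `val_loss` as the monitored quantity, and the `build_haanet` constructor are assumptions for illustration only.

```python
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.callbacks import ReduceLROnPlateau
from tensorflow.keras.optimizers import Adam

def dice_loss(y_true, y_pred, smooth=1e-5):
    """Dice loss: 1 - 2*|intersection| / (|y_true| + |y_pred|),
    with a small smoothing term to avoid division by zero."""
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    dice = (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
    return 1.0 - dice

# model = build_haanet(...)  # hypothetical constructor assembling the HAAM blocks sketched above
# model.compile(optimizer=Adam(learning_rate=1e-3), loss=dice_loss)
# reduce_lr = ReduceLROnPlateau(monitor="val_loss", factor=0.1, patience=5)
# model.fit(train_generator, epochs=..., steps_per_epoch=100,
#           validation_data=val_generator, callbacks=[reduce_lr])
```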
  • the purpose of the experiment is to evaluate the effectiveness of HAU and AU.
  • the evaluation standard is the dice coefficient, which shows the degree of conformity between the predicted value and the true value.
  • Six LA segmentation experiments were performed, namely UNet-2, HANet-2, HAANet-2, UNet-3, HANet-3 and HAANet-3, where the suffix number represents the number of convolutional layers in each stage of the network; note that UNet-2 is also the network used for ROI detection, although with a different input shape.
  • HANet denotes the architecture without the attention unit, and HAANet denotes our attention-based hierarchical aggregation network. To compare these models on an equal footing, all six networks use the same hyperparameters and the same training protocol described above.
  • HANet-2 and HANet-3 represent the proposed network shown in FIG. 2(a) but without the attention module.
  • Their HAUs have 2 and 3 convolutional layers, respectively; the HAU with two convolutional layers is the sub-module consisting of l1, l2 and l3 shown in FIG. 2(b).
  • HAANet-2 and HAANet-3 are obtained by integrating the AU into HANet-2 and HANet-3, respectively, which further optimizes them.
  • Table 1 shows the comparison results of the six network methods on the verification data.
  • the results of six experiments show that, combined with the classic medical image segmentation structure U-net, the combination of HAU and AU can obtain a better segmentation effect.
  • the results show that the attention-based aggregation model is a promising LA segmentation strategy.
  • the dice value of HANets is higher than that of UNets.
  • The hierarchical aggregation module fuses the traditional stacked convolutional layers into a tree structure and improves the performance of the network by learning richer features. Combining feature maps at different levels preserves shallow features and integrates convolutional layers with different receptive field sizes, which is very important for semantic segmentation.
  • the attention mechanism is an effective method to force the network to focus on the goal of the left atrium.
  • The normalized intermediate mask is generated by the sigmoid function.
  • The proposed HAANets use a residual attention learning strategy to attach the attention map of the shallow convolutional layers to the output of the whole block at each stage, which not only avoids breaking the good performance of the backbone branch but also further improves the performance of the HAANet network.
  • HAANets performed better than HANets, showing the effectiveness of residual attention units.
  • HAANet-3 obtained the highest dice of 93.00.
  • The dice values of UNet-2 and HAANet-3 on the ten validation patients are shown in FIG. 5.
  • The vertical axis represents the dice value;
  • the horizontal axis represents the 10 patient samples (A to J). It can be observed that on almost all validation data the dice of HAANet-3 is slightly higher than that of UNet-2, except for sample I, where the dice value of HAANet-3 is slightly lower than that of UNet-2.
  • This comparison result further illustrates the application prospect of the proposed HAANet in the three-dimensional segmentation of the left atrium.
  • Figure 6(a) compares the automatic 2D LA segmentation results of UNet-2 and HAANet-3. It can be seen that HAANet-3 has a clear advantage over UNet-2, especially in the difficult regions of the MRI.
  • Figure 6(b) is a schematic diagram of the three-dimensional segmentation results; it shows the three-dimensional LA segmentation results reconstructed with ITK-SNAP for 6 different patients. The upper row shows the ground truth, and the lower row shows the three-dimensional segmentation results of HAANet-3.
  • In this embodiment, the U-Net network model and the label information are used to crop the ROI region, which makes the model training more accurate. By segmenting the ROI region containing the three-dimensional left atrium from the cardiac magnetic resonance image to be segmented and using the ROI region as the input of the network model, the amount of calculation can be greatly reduced, and the interference of the image background can also be greatly reduced, thereby improving the efficiency and accuracy of three-dimensional left atrium segmentation; the hierarchical aggregation network model is then used to segment the ROI region.
  • The hierarchical aggregation network model iteratively merges consecutive layers of different depths at the same stage through the hierarchical aggregation module, which improves the shallow-and-deep feature fusion ability of the network model and yields better fused information; the attention unit is also attached as a mask branch to each stage of the encoder, so that shallow spatial information can be used to gradually enhance deep contour features that carry rich semantic information.
  • By combining hierarchical aggregation with the attention mechanism, the efficiency and accuracy of 3D left atrium segmentation are greatly improved.
  • FIG. 7 is a schematic structural block diagram of a three-dimensional left atrium segmentation device according to an embodiment of the present application.
  • the device may include:
  • the obtaining module 71 is used to obtain a magnetic resonance image of the heart to be segmented
  • the ROI region segmentation module 72 is used to segment the ROI region from the cardiac magnetic resonance image to be segmented, and the ROI region is a region containing a three-dimensional left atrium;
  • the segmentation module 73 is used to input the ROI region into the pre-trained hierarchical aggregation network model to obtain the segmentation result of the cardiac magnetic resonance image to be segmented;
  • the hierarchical aggregation network model is a U-Net convolutional neural network model including an encoder path and a decoder path.
  • the encoder path includes at least one hierarchical aggregation module.
  • the hierarchical aggregation module includes a hierarchical aggregation unit as the backbone branch and an attention unit as the mask branch.
  • the above ROI region segmentation module includes:
  • the ROI region detection unit is used to detect the cardiac magnetic resonance image to be segmented through the pre-trained U-Net convolutional neural network to obtain the ROI region detection result; wherein each stage of the pre-trained U-Net convolutional neural network has two convolutional layers;
  • the ROI region cropping unit is used to crop the cardiac magnetic resonance image to be segmented according to the ROI region detection result to obtain the ROI region.
  • the hierarchical aggregation unit includes a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, and a fifth convolutional layer, and each convolutional layer is followed by batch normalization and a rectified linear unit;
  • the first convolutional layer and the second convolutional layer are concatenated, and the concatenation result is input to the third convolutional layer;
  • a convolution operation is applied to the third convolutional layer to obtain the fourth convolutional layer, and the third convolutional layer and the fourth convolutional layer are connected to generate the fifth convolutional layer;
  • the attention unit includes a sixth convolutional layer, a seventh convolutional layer, and a sigmoid layer; the sixth convolutional layer is connected in sequence through batch normalization and the rectified linear unit to the seventh convolutional layer; the sigmoid layer is $M_{i,c}(\chi)=\frac{1}{1+\exp(-\chi_{i,c})}$, where exp denotes the exponential function and $\chi_{i,c}$ denotes the $i$-th value of the feature map on the $c$-th channel;
  • the output of the hierarchical aggregation module is $H_{i,c}(\chi)=M_{i,c}(\chi)\odot F_{i,c}(\chi)\oplus F_{i,c}(\chi)$, where $M_{i,c}(\chi)$ denotes the output of the mask branch with a range of [0, 1], $F_{i,c}(\chi)$ denotes the output of the trunk branch, $\odot$ denotes the dot product, and $\oplus$ denotes element-wise summation.
  • the foregoing device further includes:
  • Training sample acquisition module used to obtain training samples and label information corresponding to the training samples
  • Target ROI region segmentation module used to segment the corresponding target ROI region from the training sample according to the label information
  • the training module is used to perform model training on the pre-established hierarchical aggregation network model according to the target ROI region.
  • the above target ROI region segmentation module includes:
  • the first adjustment unit is used to uniformly adjust the training samples to the first preset shape
  • the detection unit is used to input the training samples of the first preset shape into the pre-trained U-Net convolutional neural network to obtain the detection result of the target ROI region;
  • a second adjustment unit configured to adjust each training sample from the first preset shape to the original shape of each training sample
  • the segmentation unit is used to segment the corresponding target ROI region from the training sample according to the ROI region detection result and label information.
  • the above segmentation unit includes:
  • the judging subunit is used to judge whether the detection result of the target ROI area contains label information
  • the expansion subunit is used to expand the target ROI area to a preset target area when the detection result of the target ROI area does not include label information
  • the first cropping subunit is used to crop out the preset target area from the training sample, and use the preset target area as the target ROI area;
  • the second cropping subunit is used to crop the target ROI area from the training sample when the detection result of the target ROI area contains label information.
  • This embodiment corresponds to the above embodiments of the three-dimensional left atrium segmentation method.
  • In this embodiment, the hierarchical aggregation network model uses the hierarchical aggregation module to iteratively merge consecutive layers of different depths at the same stage, which improves the shallow-and-deep feature fusion ability of the network model and yields better fused information; the attention unit is also attached as a mask branch to each stage of the encoder, so that shallow spatial information can be used to gradually enhance deep contour features that carry rich semantic information.
  • By combining hierarchical aggregation with the attention mechanism, the efficiency and accuracy of 3D left atrium segmentation are greatly improved.
  • the terminal device 8 of this embodiment includes: a processor 80, a memory 81, and a computer program 82 stored in the memory 81 and executable on the processor 80.
  • the processor 80 executes the computer program 82, the steps in the above embodiments of the three-dimensional left atrium segmentation method are implemented, for example, steps S101 to S103 shown in FIG. 1.
  • the processor 80 executes the computer program 82, the functions of each module or unit in the foregoing device embodiments are realized, for example, the functions of the modules 71 to 73 shown in FIG. 7.
  • the computer program 82 may be divided into one or more modules or units, and the one or more modules or units are stored in the memory 81 and executed by the processor 80 to complete the present application.
  • the one or more modules or units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 82 in the terminal device 8.
  • the computer program 82 may be divided into an acquisition module, an ROI region segmentation module, and a segmentation module.
  • the specific functions of each module are as follows:
  • the acquisition module is used to acquire the cardiac magnetic resonance image to be segmented;
  • the ROI region segmentation module is used to segment the ROI region from the cardiac magnetic resonance image to be segmented, and the ROI region is a region containing a three-dimensional left atrium;
  • a segmentation module, which is used to input the ROI region into the pre-trained hierarchical aggregation network model to obtain the segmentation result of the cardiac magnetic resonance image to be segmented; wherein the hierarchical aggregation network model is a U-Net convolutional neural network model including an encoder path and a decoder path;
  • the encoder path includes at least one hierarchical aggregation module;
  • the hierarchical aggregation module includes a hierarchical aggregation unit as the backbone branch and an attention unit as the mask branch.
  • the terminal device 8 may be a computing device such as a desktop computer, a notebook, a palmtop computer and a cloud server.
  • the terminal device may include, but is not limited to, a processor 80 and a memory 81.
  • FIG. 8 is only an example of the terminal device 8 and does not constitute a limitation on the terminal device 8, which may include more or fewer components than illustrated, a combination of certain components, or different components.
  • the terminal device may further include an input and output device, a network access device, a bus, and the like.
  • the so-called processor 80 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 81 may be an internal storage unit of the terminal device 8, such as a hard disk or a memory of the terminal device 8.
  • the memory 81 may also be an external storage device of the terminal device 8, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the terminal device 8.
  • the memory 81 may also include both an internal storage unit of the terminal device 8 and an external storage device.
  • the memory 81 is used to store the computer program and other programs and data required by the terminal device.
  • the memory 81 can also be used to temporarily store data that has been or will be output.
  • In the above embodiments, the division into functional units and modules is described only as an example.
  • In practical applications, the above-mentioned functions may be allocated to different functional units or modules as required; that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above.
  • The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • the specific names of each functional unit and module are only for the purpose of distinguishing each other, and are not used to limit the protection scope of the present application.
  • the disclosed device, terminal device, and method may be implemented in other ways.
  • the device and terminal device embodiments described above are only schematic.
  • the division into modules or units is only a logical function division; in actual implementation there may be other division modes, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or software functional unit.
  • If the integrated module or unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the present application can implement all or part of the processes in the methods of the above embodiments, and can also be completed by a computer program instructing relevant hardware.
  • the computer program can be stored in a computer-readable storage medium. When the program is executed by the processor, the steps of the foregoing method embodiments may be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate form.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), electrical carrier signals, telecommunication signals, software distribution media, and the like.
  • It should be noted that the content contained in the computer-readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

A three-dimensional left atrium segmentation method, device, terminal device and computer-readable storage medium. The method includes: acquiring a cardiac magnetic resonance image to be segmented (S101); segmenting an ROI region from the cardiac magnetic resonance image to be segmented, the ROI region being a region containing the three-dimensional left atrium (S102); and inputting the ROI region into a pre-trained hierarchical aggregation network model to obtain the segmentation result of the cardiac magnetic resonance image to be segmented (S103). The hierarchical aggregation network model is a U-Net convolutional neural network model including an encoder path and a decoder path; the encoder path includes at least one hierarchical aggregation module, and the hierarchical aggregation module includes a hierarchical aggregation unit as the backbone branch and an attention unit as the mask branch. In this way, the efficiency and accuracy of three-dimensional left atrium segmentation can be improved.

Description

Three-dimensional left atrium segmentation method, device, terminal device and storage medium
Technical Field
The present application belongs to the technical field of medical image processing, and in particular relates to a three-dimensional left atrium segmentation method, device, terminal device and computer-readable storage medium.
Background
A medical image refers to image data acquired by medical imaging equipment such as computed tomography (CT), magnetic resonance imaging (MRI), B-mode ultrasound or positron emission tomography (PET); it is generally three-dimensional image data composed of two-dimensional slices. Medical image segmentation is a key means of processing medical images; it refers to distinguishing different regions with special meaning in a medical image, where these regions do not intersect each other and each region satisfies the consistency of a specific region.
Atrial fibrillation (AF) is a common type of arrhythmia. Due to the limited understanding of the structure of the human atrium, current treatment outcomes for atrial fibrillation are poor. Gadolinium contrast agents are used in MRI scans to improve the clarity of images of patients' internal structures, and gadolinium-enhanced magnetic resonance imaging (GE-MRI) is an important tool for evaluating atrial fibrosis.
To better understand the atrial structure, atrial segmentation of MRI images is often required. At present, segmenting the left atrium (LA) from three-dimensional GE-MRI images is very challenging: the poor contrast between the left atrium and the background reduces the visibility of the LA border, and during scanning the patient's irregular respiratory rhythm and heart rate variability may degrade image quality. In recent years, with the rapid development of deep learning, several fully automatic LA segmentation methods have been proposed. For example, in a multi-view convolutional neural network with an adaptive fusion strategy, the three-dimensional data can be parsed into two-dimensional components from the axial, sagittal and coronal planes, and each component can then be analyzed by a multi-view convolutional neural network; an extended residual network and a sequential learning network built from ConvLSTM can also be used to extend the multi-view learning strategy. However, existing methods for segmenting the left atrium from three-dimensional GE-MRI images perform poorly.
Technical Problem
In view of this, the embodiments of the present application provide a three-dimensional left atrium segmentation method, device, terminal device and computer-readable storage medium, to solve the problem of the poor performance of existing cardiac magnetic resonance image segmentation methods.
Technical Solution
A first aspect of the embodiments of the present application provides a three-dimensional left atrium segmentation method, including:
acquiring a cardiac magnetic resonance image to be segmented;
segmenting an ROI region from the cardiac magnetic resonance image to be segmented, the ROI region being a region containing the three-dimensional left atrium;
inputting the ROI region into a pre-trained hierarchical aggregation network model to obtain a segmentation result of the cardiac magnetic resonance image to be segmented;
wherein the hierarchical aggregation network model is a U-Net convolutional neural network model including an encoder path and a decoder path, the encoder path includes at least one hierarchical aggregation module, and the hierarchical aggregation module includes a hierarchical aggregation unit as the backbone branch and an attention unit as the mask branch.
With reference to the first aspect, in a possible implementation, segmenting the ROI region from the cardiac magnetic resonance image to be segmented includes:
detecting the cardiac magnetic resonance image to be segmented with a pre-trained U-Net convolutional neural network to obtain an ROI region detection result, wherein each stage of the pre-trained U-Net convolutional neural network has two convolutional layers;
cropping the cardiac magnetic resonance image to be segmented according to the ROI region detection result to obtain the ROI region.
With reference to the first aspect, in a possible implementation, the hierarchical aggregation unit includes a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer and a fifth convolutional layer, and each convolutional layer is followed by batch normalization and a rectified linear unit;
wherein the first convolutional layer is concatenated with the second convolutional layer and the concatenation result is input to the third convolutional layer; a convolution operation is applied to the third convolutional layer to obtain the fourth convolutional layer, and the third convolutional layer and the fourth convolutional layer are connected to generate the fifth convolutional layer;
the attention unit includes a sixth convolutional layer, a seventh convolutional layer and a sigmoid layer, and the sixth convolutional layer is connected in sequence through batch normalization and the rectified linear unit to the seventh convolutional layer; the sigmoid layer is $M_{i,c}(\chi)=\frac{1}{1+\exp(-\chi_{i,c})}$, where exp denotes the exponential function and $\chi_{i,c}$ denotes the $i$-th value of the feature map on the $c$-th channel;
the output of the hierarchical aggregation module is $H_{i,c}(\chi)=M_{i,c}(\chi)\odot F_{i,c}(\chi)\oplus F_{i,c}(\chi)$, where $M_{i,c}(\chi)$ denotes the output of the mask branch with a range of [0, 1], $F_{i,c}(\chi)$ denotes the output of the trunk branch, $\odot$ denotes the dot product, and $\oplus$ denotes element-wise summation.
With reference to the first aspect, in a possible implementation, before acquiring the cardiac magnetic resonance image to be segmented, the method further includes:
acquiring training samples and label information corresponding to the training samples;
segmenting a corresponding target ROI region from the training samples according to the label information;
performing model training on the pre-established hierarchical aggregation network model according to the target ROI region.
With reference to the first aspect, in a possible implementation, segmenting the corresponding target ROI region from the training samples according to the label information includes:
uniformly resizing the training samples to a first preset shape;
inputting the training samples of the first preset shape into a pre-trained U-Net convolutional neural network to obtain a target ROI region detection result;
adjusting each training sample from the first preset shape back to its original shape;
segmenting the corresponding target ROI region from the training samples according to the ROI region detection result and the label information.
With reference to the first aspect, in a possible implementation, segmenting the corresponding target ROI region from the training samples according to the target ROI region detection result and the label information includes:
judging whether the target ROI region detection result contains the label information;
when the target ROI region detection result does not contain the label information, expanding the target ROI region to a preset target area;
cropping the preset target area from the training sample and using the preset target area as the target ROI region;
when the target ROI region detection result contains the label information, cropping the target ROI region from the training sample.
A second aspect of the embodiments of the present application provides a three-dimensional left atrium segmentation device, including:
an acquisition module, configured to acquire a cardiac magnetic resonance image to be segmented;
an ROI region segmentation module, configured to segment an ROI region from the cardiac magnetic resonance image to be segmented, the ROI region being a region containing the three-dimensional left atrium;
a segmentation module, configured to input the ROI region into a pre-trained hierarchical aggregation network model to obtain a segmentation result of the cardiac magnetic resonance image to be segmented;
wherein the hierarchical aggregation network model is a U-Net convolutional neural network model including an encoder path and a decoder path, the encoder path includes at least one hierarchical aggregation module, and the hierarchical aggregation module includes a hierarchical aggregation unit as the backbone branch and an attention unit as the mask branch.
With reference to the second aspect, in a possible implementation, the ROI region segmentation module includes:
an ROI region detection unit, configured to detect the cardiac magnetic resonance image to be segmented with a pre-trained U-Net convolutional neural network to obtain an ROI region detection result, wherein each stage of the pre-trained U-Net convolutional neural network has two convolutional layers;
an ROI region cropping unit, configured to crop the cardiac magnetic resonance image to be segmented according to the ROI region detection result to obtain the ROI region.
With reference to the second aspect, in a possible implementation, the hierarchical aggregation unit includes a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer and a fifth convolutional layer, and each convolutional layer is followed by batch normalization and a rectified linear unit;
wherein the first convolutional layer is concatenated with the second convolutional layer and the concatenation result is input to the third convolutional layer; a convolution operation is applied to the third convolutional layer to obtain the fourth convolutional layer, and the third convolutional layer and the fourth convolutional layer are connected to generate the fifth convolutional layer;
the attention unit includes a sixth convolutional layer, a seventh convolutional layer and a sigmoid layer, and the sixth convolutional layer is connected in sequence through batch normalization and the rectified linear unit to the seventh convolutional layer; the sigmoid layer is $M_{i,c}(\chi)=\frac{1}{1+\exp(-\chi_{i,c})}$, where exp denotes the exponential function and $\chi_{i,c}$ denotes the $i$-th value of the feature map on the $c$-th channel;
the output of the hierarchical aggregation module is $H_{i,c}(\chi)=M_{i,c}(\chi)\odot F_{i,c}(\chi)\oplus F_{i,c}(\chi)$, where $M_{i,c}(\chi)$ denotes the output of the mask branch with a range of [0, 1], $F_{i,c}(\chi)$ denotes the output of the trunk branch, $\odot$ denotes the dot product, and $\oplus$ denotes element-wise summation.
With reference to the second aspect, in a possible implementation, the device further includes:
a training sample acquisition module, configured to acquire training samples and label information corresponding to the training samples;
a target ROI region segmentation module, configured to segment a corresponding target ROI region from the training samples according to the label information;
a training module, configured to perform model training on the pre-established hierarchical aggregation network model according to the target ROI region.
With reference to the second aspect, in a possible implementation, the target ROI region segmentation module includes:
a first adjustment unit, configured to uniformly resize the training samples to a first preset shape;
a detection unit, configured to input the training samples of the first preset shape into a pre-trained U-Net convolutional neural network to obtain a target ROI region detection result;
a second adjustment unit, configured to adjust each training sample from the first preset shape back to its original shape;
a segmentation unit, configured to segment the corresponding target ROI region from the training samples according to the ROI region detection result and the label information.
With reference to the second aspect, in a possible implementation, the segmentation unit includes:
a judging subunit, configured to judge whether the target ROI region detection result contains the label information;
an expansion subunit, configured to expand the target ROI region to a preset target area when the target ROI region detection result does not contain the label information;
a first cropping subunit, configured to crop the preset target area from the training sample and use the preset target area as the target ROI region;
a second cropping subunit, configured to crop the target ROI region from the training sample when the target ROI region detection result contains the label information.
A third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the three-dimensional left atrium segmentation method according to any one of the above first aspects.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the three-dimensional left atrium segmentation method according to any one of the above first aspects.
Beneficial Effects
Compared with the prior art, the embodiments of the present application have the following beneficial effects:
In the embodiments of the present application, the ROI region containing the three-dimensional left atrium is segmented from the cardiac magnetic resonance image to be segmented and used as the input of the network model, which greatly reduces the amount of calculation and also greatly reduces the interference of the image background, thereby improving the efficiency and accuracy of three-dimensional left atrium segmentation. The hierarchical aggregation network model is used to segment the ROI region; this model iteratively merges consecutive layers of different depths at the same stage through the hierarchical aggregation module, which improves the shallow-and-deep feature fusion ability of the network model and yields better fused information, and the attention unit is attached as a mask branch to each stage of the encoder, so that shallow spatial information can be used to gradually enhance deep contour features that carry rich semantic information. By combining hierarchical aggregation with the attention mechanism, the efficiency and accuracy of three-dimensional left atrium segmentation are greatly improved.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a three-dimensional left atrium segmentation method provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of the attention-based hierarchical aggregation network structure provided by an embodiment of the present application;
FIG. 3 is another schematic flowchart of a three-dimensional left atrium segmentation method provided by an embodiment of the present application;
FIG. 4 is a schematic block diagram of the specific process of step S302 provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of the comparison of the dice values of UNet-2 and HAANet-3 provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of the comparison between two-dimensional segmentation results and three-dimensional segmentation results provided by an embodiment of the present application;
FIG. 7 is a schematic structural block diagram of a three-dimensional left atrium segmentation device provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a terminal device provided by an embodiment of the present application.
Embodiments of the Present Invention
In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures and technologies are set forth in order to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits and methods are omitted so that unnecessary details do not obscure the description of the present application.
To illustrate the technical solutions described in the present application, specific embodiments are described below.
Embodiment 1
Referring to FIG. 1, which is a schematic flowchart of a three-dimensional left atrium segmentation method provided by an embodiment of the present application, the method may include the following steps:
Step S101: Acquire a cardiac magnetic resonance image to be segmented.
Step S102: Segment an ROI region from the cardiac magnetic resonance image to be segmented, the ROI region being a region containing the three-dimensional left atrium.
It should be noted that the ROI region (region of interest) contains the entire three-dimensional volume of the left atrium; within the ROI region, the left atrium accounts for a large percentage and the image background accounts for a very small percentage. In general, however, in each cardiac magnetic resonance image to be segmented, the left atrium accounts for a small percentage and the background accounts for a large percentage. In other words, most of the volume data in the cardiac magnetic resonance image to be segmented is useless for the left atrium segmentation task; if the whole image were used as the input of the network model, a large amount of useless data would participate in the calculation, the heavy computation would reduce efficiency, and computing resources would be wasted. In addition, because the contrast between the left atrium region and the background in the image is low, if the whole cardiac magnetic resonance image to be segmented were used as the input of the network model, the large background region and surrounding tissues would greatly interfere with the left atrium segmentation and reduce the segmentation accuracy.
而从图像中分割出ROI区域后,再以ROI区域作为后续计算和分割的基础,这样能极大地减少计算量,有助于减轻周围组织、背景对分割的影响,从而极大地提高分割效率和准确率。
在一些实施例中,可以利用U-Net卷积神经网络对待分割心脏磁共振图像进行检测,以得到左心房的预估结果,然后再裁剪出相应的区域作为ROI区域。故上述从待分割心脏磁共振图像中分割出ROI区域的具体过程可以包括:
通过预训练U-Net卷积神经网络对待分割心脏磁共振图像进行检测,得到ROI区域检 测结果;其中,预训练U-Net卷积神经网络的每一级具有两个卷积层;根据ROI区域检测结果对待分割心脏磁共振图像进行裁剪,得到ROI区域。
可以理解的是,U-Net卷积神经网络是指整体网络结构与字母“U”类似的卷积神经网络,其可以看作卷积神经网络的变形,具体包括收缩路径和扩展路径。此处的U-Net网络的每一级均具有两个卷积层。关于U-Net的具体网络构造已被本领域技术人员所熟知,在此不再赘述。
预先利用训练样本对该U-Net卷积神经网络进行训练,训练完成后,将该U-Net卷积神经网络作为ROI检测网络,以分割出相应的ROI区域。
具体应用中,可以将待分割心脏磁共振图像调整至一个固定的形状,然后再输入至训练好的U-Net卷积神经网络,得到神经网络的输出结果,该输出结果是一个粗略的预测结果,根据该预测结果可以定位出左心房区域在整幅待分割心脏磁共振图像的区域位置。定位出左心房所在区域之后,可以将待分割心脏磁共振图像调整回其原始形状,然后再将相应的区域进行裁剪,裁剪得到的区域即为ROI区域。
可以看出,此处首次利用U-Net卷积神经网络作为ROI检测网络,通过U-Net在医学图像处理上的特点,可以进一步提高左心房分割的效率和准确率。
当然,也可以使用其他ROI区域分割方法,在此不再限定。
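By way of illustration only, the following is a minimal sketch of the U-Net-based ROI extraction described above. It assumes a 3D U-Net has already been trained as the ROI detector; the model name `roi_unet`, the fixed detection shape, the probability threshold and the safety margin are illustrative assumptions rather than values fixed by the embodiment.

```python
# Minimal sketch of U-Net-based ROI extraction (illustrative only).
import numpy as np
from scipy.ndimage import zoom

DETECT_SHAPE = (64, 128, 128)   # fixed shape fed to the ROI detection U-Net

def extract_roi(volume, roi_unet, margin=8, threshold=0.5):
    """Locate the left atrium coarsely and crop the ROI from `volume`."""
    # 1. Resize the volume to the fixed detection shape.
    factors = [d / s for d, s in zip(DETECT_SHAPE, volume.shape)]
    resized = zoom(volume, factors, order=1)

    # 2. Coarse prediction with the pre-trained ROI detection U-Net.
    prob = roi_unet.predict(resized[np.newaxis, ..., np.newaxis])[0, ..., 0]
    coords = np.argwhere(prob > threshold)
    if coords.size == 0:
        # Fallback: nothing detected, keep the whole volume.
        return volume, (np.zeros(3, dtype=int), np.array(volume.shape))

    # 3. Map the predicted bounding box back to the original resolution.
    lo = coords.min(axis=0) / factors
    hi = (coords.max(axis=0) + 1) / factors

    # 4. Crop the ROI with a safety margin.
    lo = np.maximum(np.floor(lo).astype(int) - margin, 0)
    hi = np.minimum(np.ceil(hi).astype(int) + margin, volume.shape)
    roi = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    return roi, (lo, hi)
```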
Step S103: inputting the ROI region into the pre-trained hierarchical aggregation network model to obtain a segmentation result of the cardiac magnetic resonance image to be segmented. The hierarchical aggregation network model is a U-Net convolutional neural network model including an encoder path and a decoder path; the encoder path includes at least one hierarchical aggregation module, and the hierarchical aggregation module includes a hierarchical aggregation unit serving as the trunk branch and an attention unit serving as the mask branch.
It should be noted that the hierarchical aggregation network model refers to a three-dimensional convolutional neural network that combines hierarchical aggregation with an attention mechanism; the network may be named the attention-based hierarchical aggregation network (HAANet).
The hierarchical aggregation network model is based on the U-Net convolutional neural network and includes an encoder path and a decoder path. The encoder path includes at least one attention-based hierarchical aggregation module (HAAM), which consists of a hierarchical aggregation unit (HAU) serving as the trunk branch and an attention unit (AU) serving as the mask branch. Its decoder path is the same as that of U-Net and consists of repeated convolutional layers, each followed by batch normalization (BN) and a rectified linear unit (ReLU).
The hierarchical aggregation unit HAU may include a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer and a fifth convolutional layer, each followed by batch normalization BN and a rectified linear unit ReLU; the first convolutional layer and the second convolutional layer are concatenated and the concatenation result is fed into the third convolutional layer, yielding the third convolutional layer; a convolution operation is performed on the third convolutional layer to obtain the fourth convolutional layer, and the third and fourth convolutional layers are concatenated to generate the fifth convolutional layer.
In the hierarchical aggregation unit, the kernel size of all convolution operations may be set to 3×3×3 and the stride may be (2, 2, 2); BN and ReLU are applied when sampling at each stage.
It should be noted that deeper layers of a neural network contain more semantic information while shallower layers contain more spatial information, and hierarchical fusion can improve the hierarchical feature representation ability. Because a three-dimensional image segmentation task is computationally expensive, a different number of layers can be aggregated at each stage; here, three layers of different depths are aggregated at each stage to form the HAU.
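As a concrete reading of the hierarchical aggregation unit described above, the sketch below builds an HAU with the Keras functional API: five 3×3×3 convolutions, each followed by BN and ReLU, with the first two layers concatenated into the third and the third and fourth concatenated into the fifth. The helper name `conv_bn_relu`, the channel count, the use of 'same' padding and the choice of stride-1 convolutions inside the unit (with the stride-(2, 2, 2) downsampling applied between stages) are illustrative assumptions.

```python
# Illustrative sketch of a hierarchical aggregation unit (HAU) in Keras.
from keras.layers import Conv3D, BatchNormalization, Activation, concatenate

def conv_bn_relu(x, filters, kernel_size=3):
    """3x3x3 convolution followed by batch normalization and ReLU."""
    x = Conv3D(filters, kernel_size, padding='same')(x)
    x = BatchNormalization()(x)
    return Activation('relu')(x)

def hau(x, filters):
    """Hierarchical aggregation unit: fuses layers of different depths."""
    l1 = conv_bn_relu(x, filters)                      # first convolutional layer
    l2 = conv_bn_relu(l1, filters)                     # second convolutional layer
    l3 = conv_bn_relu(concatenate([l1, l2]), filters)  # aggregate l1 and l2
    l4 = conv_bn_relu(l3, filters)                     # fourth convolutional layer
    l5 = conv_bn_relu(concatenate([l3, l4]), filters)  # aggregate l3 and l4
    return l5
```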
The attention unit AU includes a sixth convolutional layer, a seventh convolutional layer and a sigmoid layer; the sixth convolutional layer is connected to the seventh convolutional layer through batch normalization BN and the rectified linear unit ReLU in sequence. The sigmoid layer is
M_{i,c}(χ) = 1 / (1 + exp(−χ_{i,c})),
where exp denotes the exponential function and χ_{i,c} denotes the i-th value of the feature map on the c-th channel.
It should be noted that the shape of the left atrium may differ greatly between patients, and the downsampling operations of a convolutional neural network tend to lose spatial information as the depth increases; both factors can affect segmentation accuracy. To mitigate their influence, the attention mechanism is integrated into the encoder network as a mask branch at every stage. Through the attention unit, the values of the feature map can be normalized to obtain an attention mask.
The output of the hierarchical aggregation module HAAM is
F_{i,c}(χ) ⊕ (M_{i,c}(χ) ⊗ F_{i,c}(χ)),
where M_{i,c}(χ) denotes the output of the mask branch, whose range is [0, 1], F_{i,c}(χ) denotes the output of the trunk branch, ⊗ denotes the dot (element-wise) product, and ⊕ denotes element-wise summation.
It should be noted that after the attention mask is obtained, a dot-product operation can be performed between the mask branch and the trunk branch. Because the values of the attention mask range from 0 to 1, repeatedly multiplying by the mask branch would degrade the values of the feature maps, and the attention mask might destroy the good properties of the trunk branch. To solve this problem, residual learning is adopted here, with an identity mapping between the input and the output of the trunk branch.
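The mask branch and the residual combination described above can be sketched as follows, again as a non-authoritative illustration in the same Keras style. It assumes the `hau` helper from the previous sketch is available; `multiply` and `add` stand in for the ⊗ and ⊕ operations, so the module output is F ⊕ (M ⊗ F), i.e. (1 + M)·F, and an all-zero mask leaves the trunk features unchanged.

```python
# Illustrative sketch of the attention unit (AU) and the residual
# combination of mask and trunk branches, F ⊕ (M ⊗ F).
from keras.layers import (Conv3D, BatchNormalization, Activation,
                          multiply, add)

def attention_unit(x, filters):
    """Mask branch: 3x3x3 conv -> BN -> ReLU -> 1x1x1 conv -> sigmoid."""
    m = Conv3D(filters, 3, padding='same')(x)   # sixth convolutional layer
    m = BatchNormalization()(m)
    m = Activation('relu')(m)
    m = Conv3D(filters, 1, padding='same')(m)   # seventh convolutional layer
    return Activation('sigmoid')(m)             # attention mask in (0, 1)

def haam(x, filters):
    """Attention-based hierarchical aggregation module (HAAM)."""
    trunk = hau(x, filters)               # trunk branch (see the HAU sketch)
    mask = attention_unit(x, filters)     # mask branch
    # Residual attention: output = trunk + mask * trunk = (1 + mask) * trunk.
    return add([trunk, multiply([mask, trunk])])
```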
To better introduce the specific network structure of HAANet, a description is given below with reference to the attention-based hierarchical aggregation network structure shown in FIG. 2.
As shown in FIG. 2, (a) is the overall network structure of HAANet, (b) is the network structure of the HAU, (c) is the network structure of the AU, and (d) is the network structure of the HAAM. The HAAM includes the HAU and the AU; the AU includes a 3×3×3 convolutional layer, batch normalization, ReLU, a 1×1×1 convolutional layer and a sigmoid. The sigmoid, also called the logistic function, is used for the output of hidden-layer neurons; it maps a real number into the interval (0, 1), so it can be used for binary classification and works well when the features differ in a complex way or do not differ very much. The sigmoid layer is not shown in the figure. The HAU includes convolutional layers l1, l2, l3, l4 and l5, and each convolutional layer Conv is followed by BN and ReLU. Concat is one of the functions commonly used in convolutional neural networks.
In this embodiment, an ROI region containing the three-dimensional left atrium is segmented from the cardiac magnetic resonance image to be segmented and used as the input of the network model, which greatly reduces the amount of computation as well as the interference from the image background, thereby improving the efficiency and accuracy of three-dimensional left atrium segmentation. The ROI region is segmented with the hierarchical aggregation network model, in which hierarchical aggregation modules iteratively fuse successive layers of different depths within the same stage, strengthening the fusion of shallow and deep features and yielding better fused information; in addition, an attention unit is attached as a mask branch to each stage of the encoder path, so that shallow spatial information progressively enhances deep contour features rich in semantic information. By combining hierarchical aggregation with the attention mechanism, the efficiency and accuracy of three-dimensional left atrium segmentation are greatly improved.
Embodiment 2
Referring to FIG. 3, which is another schematic flow diagram of a three-dimensional left atrium segmentation method provided by an embodiment of the present application, the method may include the following steps:
Step S301: acquiring training samples and label information corresponding to the training samples.
It should be noted that a training sample refers to a cardiac magnetic resonance image containing the three-dimensional left atrium region, and the label information refers to the correct segmentation result corresponding to the training sample.
Step S302: segmenting corresponding target ROI regions from the training samples according to the label information.
Specifically, based on the correct segmentation result corresponding to the training sample, it is judged whether the segmented target ROI region meets the training requirement; if it does, the segmented target ROI region can be used for subsequent training; if it does not, the target region can be re-located and the re-located region is cropped as the target ROI region.
In some embodiments, referring to the flow diagram of step S302 shown in FIG. 4, the process of segmenting corresponding target ROI regions from the training samples according to the label information specifically includes:
Step S401: uniformly resizing the training samples to a first preset shape.
It can be understood that the first preset shape can be set according to actual needs and is determined by (z-axis, height, width); for example, it can be set to (64, 128, 128).
Step S402: inputting the training samples of the first preset shape into the pre-trained U-Net convolutional neural network to obtain target ROI region detection results.
Step S403: resizing each training sample from the first preset shape back to its original shape.
Step S404: segmenting the corresponding target ROI region from the training sample according to the ROI region detection result and the label information.
In other words, after the estimated target ROI region is obtained, each training sample can be resized back to its own original shape, i.e., the z-axis, height, width and other parameters of each sample are restored to their original values. The target ROI region is then segmented according to the ROI region detection result and the label information.
The output of the pre-trained U-Net convolutional neural network may contain errors, i.e., the located region may not be the actual region where the left atrium lies. If that region were still cropped as the target ROI region for subsequent training, the trained network model would be inaccurate. In this case, the target region can be re-located, i.e., the region where the left atrium lies is re-determined, and the corresponding region is then cropped as the target ROI region, so as to ensure the training accuracy of the subsequent model.
In some embodiments, the specific process of segmenting the corresponding target ROI region from the training sample according to the target ROI region detection result and the label information includes: judging whether the target ROI region detection result contains the label information; when the target ROI region detection result does not contain the label information, expanding the target ROI region into a preset target region, cropping the preset target region out of the training sample, and using the preset target region as the target ROI region; when the target ROI region detection result contains the label information, cropping the target ROI region out of the training sample.
It can be understood that, since the label is the correct segmentation result corresponding to the training sample, if the determined target ROI region does not contain the label, the localization can be considered wrong, i.e., the located region is not the actual left atrium region. When the estimate is wrong, the target region can be expanded to ensure that the entire original target, i.e., the three-dimensional left atrium region, is definitely cropped out. If the target ROI region contains the label, the localization can be considered correct, i.e., the estimated left atrium region is correct, and the estimated region can be cropped directly as the target ROI region.
Expanding the target ROI region into the preset target region means expanding on the basis of the estimated target ROI region; after it is expanded to a certain extent, the preset target region can be cropped and used as the target ROI region, i.e., the preset target region is used as the input of the model to be trained.
It can be seen that training with the label information ensures the accuracy of model training.
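A minimal sketch of this training-time cropping logic, under stated assumptions, is given below: the coarse detection box is kept only if it fully contains the labelled left atrium; otherwise it is expanded into a preset target region that is guaranteed to cover the label before cropping. The helper name `crop_target_roi`, the margin value and the convention that `detect_box` holds the (lo, hi) corner voxels of the coarse prediction in the sample's original shape are assumptions for illustration.

```python
# Illustrative sketch of label-checked target ROI cropping for training samples.
import numpy as np

def crop_target_roi(sample, label, detect_box, expand=16):
    """Crop the target ROI, expanding it when it misses part of the label."""
    lo, hi = np.asarray(detect_box[0]), np.asarray(detect_box[1])
    label_coords = np.argwhere(label > 0)
    lab_lo, lab_hi = label_coords.min(axis=0), label_coords.max(axis=0) + 1

    # The detection result "contains the label" if the labelled left atrium
    # lies entirely inside the detected box.
    contains_label = np.all(lo <= lab_lo) and np.all(hi >= lab_hi)

    if not contains_label:
        # Expand into a preset target region that is sure to cover the label.
        lo = np.maximum(np.minimum(lo, lab_lo) - expand, 0)
        hi = np.minimum(np.maximum(hi, lab_hi) + expand, sample.shape)

    roi_image = sample[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    roi_label = label[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    return roi_image, roi_label
```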
Step S303: performing model training on the pre-established hierarchical aggregation network model according to the target ROI regions.
Of course, after training, the model can be tested with test samples, and it is only used for three-dimensional left atrium segmentation of cardiac magnetic resonance images to be segmented after its relevant parameters meet certain requirements.
Step S304: acquiring a cardiac magnetic resonance image to be segmented.
Step S305: segmenting an ROI region from the cardiac magnetic resonance image to be segmented, the ROI region being a region containing the three-dimensional left atrium.
Step S306: inputting the ROI region into the pre-trained hierarchical aggregation network model to obtain a segmentation result of the cardiac magnetic resonance image to be segmented.
It should be noted that steps S304 to S306 are the same as steps S101 to S103 of Embodiment 1; for details, please refer to the corresponding content above, which is not repeated here.
To compare the accuracy of the three-dimensional left atrium segmentation method of the embodiments of the present application with other segmentation methods, a specific experiment is described below.
The dataset used in this experiment consists of the 100 training volumes provided for left atrium segmentation in the 2018 Atrial Segmentation Challenge. The original resolution of the data is 0.625×0.625×0.625 mm3, and each sample has 88 slices along the Z-axis. Because no validation data were released, 10 patient volumes were randomly separated from the training dataset as validation data to test the proposed model, leaving 90 patient volumes for training and 10 for validation. During training, the ROI region is resized to a fixed shape of (88, 80, 128) as the input of the HAANet model. Data augmentation can be used to expand the training set and alleviate overfitting during training. On a random basis, the data are rotated by 0–2π about the Z-axis, scaled within the range 0.8–1.2, and mirrored and translated along the Z-axis; in addition, a gamma transform with a range of 0.8–1.3 is applied. Finally, after these transformations, Z-score normalization is applied. It is worth noting that the transformations are not all executed with a 50% probability. By applying these random data augmentation methods, which are performed at every iteration of the training phase, a theoretically unlimited amount of training data can be obtained.
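A sketch of such on-the-fly augmentation for one volume/label pair is shown below. The parameter ranges follow the text; the scipy-based resampling, the flat trigger probability used here, and the translation range are implementation assumptions rather than details fixed by the experiment.

```python
# Illustrative on-the-fly augmentation for one training volume/label pair.
import numpy as np
from scipy.ndimage import rotate, zoom, shift

def augment(image, label, p=0.5):
    if np.random.rand() < p:                         # random rotation about Z
        angle = np.random.uniform(0, 360)
        image = rotate(image, angle, axes=(1, 2), reshape=False, order=1)
        label = rotate(label, angle, axes=(1, 2), reshape=False, order=0)
    if np.random.rand() < p:                         # random scaling (0.8-1.2)
        s = np.random.uniform(0.8, 1.2)
        image = zoom(image, s, order=1)
        label = zoom(label, s, order=0)
    if np.random.rand() < p:                         # mirror along Z
        image, label = image[::-1], label[::-1]
    if np.random.rand() < p:                         # translation along Z
        dz = np.random.randint(-8, 9)
        image = shift(image, (dz, 0, 0), order=1)
        label = shift(label, (dz, 0, 0), order=0)
    if np.random.rand() < p:                         # gamma transform (0.8-1.3)
        g = np.random.uniform(0.8, 1.3)
        lo, hi = image.min(), image.max()
        image = np.power((image - lo) / (hi - lo + 1e-8), g) * (hi - lo) + lo
    image = (image - image.mean()) / (image.std() + 1e-8)   # z-score
    return image, label
```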
Training protocol of the experiment: in this experiment, for both the three-dimensional U-Net and the HAANet networks, the number of channels of the convolutional layers is set to 8 at the first stage of the network and doubled after each downsampling operation, which continues until the fifth stage. The training batch size is set to 8, and the HAANet network is optimized with the Adam algorithm. Each epoch consists of 100 iterations. The learning rate is initialized to 1e-3 and decayed by a factor of 0.1 when no improvement occurs within 5 epochs. The Dice loss is used as the loss function and is expressed as
L_Dice = 1 − (2 · Σ_i y_true,i · y_pred,i) / (Σ_i y_true,i + Σ_i y_pred,i),
where y_true denotes the training label and y_pred denotes the output of our network.
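Under the Keras/TensorFlow setup used in this experiment, the Dice loss and the training protocol above might be expressed roughly as follows. The smoothing constant, the use of the ReduceLROnPlateau callback to realise the "decay by 0.1 after 5 idle epochs" rule, and the names `haanet` and `train_generator` are illustrative assumptions, not part of the published protocol.

```python
# Illustrative Keras implementation of the Dice loss and training protocol.
from keras import backend as K
from keras.optimizers import Adam
from keras.callbacks import ReduceLROnPlateau

def dice_loss(y_true, y_pred, smooth=1e-5):
    """Soft Dice loss: 1 - 2*|X intersect Y| / (|X| + |Y|)."""
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return 1.0 - (2.0 * intersection + smooth) / (
        K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

# Assumed usage with a model `haanet` and generator `train_generator`:
# haanet.compile(optimizer=Adam(lr=1e-3), loss=dice_loss)
# lr_schedule = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5)
# haanet.fit_generator(train_generator, steps_per_epoch=100,
#                      callbacks=[lr_schedule])
```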
The HAANet network is implemented with Keras 2.1.5 using TensorFlow 1.4 as the backend. The model was trained and tested on an NVIDIA GeForce GTX 1080Ti GPU, on a 64-bit Ubuntu 16.04 platform with an Intel(R) Core(TM) i5-7640X CPU @ 4.00 GHz ×4 and 32 GB of memory (RAM).
Experimental settings: the purpose of the experiments is to evaluate the effectiveness of the HAU and the AU. The evaluation criterion is the Dice coefficient, which indicates how well the prediction agrees with the ground truth. Six LA segmentation experiments were performed, namely UNet-2, HANet-2, HAANet-2, UNet-3, HANet-3 and HAANet-3, where the suffix number indicates the number of convolutional layers at each stage of the network; incidentally, UNet-2 is also used for ROI detection, but with a different input shape. HANet denotes the architecture without the attention unit, and HAANet denotes our attention-based hierarchical aggregation network. To compare the differences between these models fairly, all six networks use the same hyperparameters and the same training protocol mentioned above.
Quantitative results: the six network methods with the same experimental settings described above were applied to LA segmentation. Among the six networks, UNet-2 represents the baseline with two convolutional layers at each stage, and UNet-3 has three convolutional layers at each stage relative to UNet-2. HANet-2 and HANet-3 denote the proposed network shown in FIG. 2(a) but without the attention module; in HANet-2 and HANet-3 the HAU has two and three convolutional layers respectively, and the HAU with two convolutional layers is the sub-module consisting of l1, l2 and l3 shown in FIG. 2(b). HAANet-2 and HAANet-3 are obtained by integrating the AU into HANet-2 and HANet-3, thereby further optimizing them. The six networks are tested step by step to demonstrate the effectiveness of the HAU and the AU.
Table 1 below shows the comparison results of the six network methods on the validation data. The six experimental results show that combining the HAU and the AU with the classical medical image segmentation structure U-Net yields better segmentation. The results indicate that the attention-based aggregation model is a promising strategy for LA segmentation.
Table 1 Comparison results of the six network methods on the validation data
Figure PCTCN2019124311-appb-000014
Performance of the hierarchical aggregation unit: as shown in Table 1, whether each stage has 2 or 3 layers, the Dice values of the HANets are higher than those of the UNets. The hierarchical aggregation module fuses the traditional stacked convolutional layers into a tree structure and improves network performance by learning richer features. Contrasting feature maps of different levels preserves shallow features and integrates convolutional layers with different receptive field sizes, which is very important for semantic segmentation.
Performance of the attention unit: the attention mechanism is an effective way to force the network to focus on the target, the left atrium. A normalized intermediate mask is generated by the sigmoid function, and the proposed HAANets adopt a residual attention learning strategy that attaches the attention map of the shallow convolutional layers to the output of the whole block at each stage. This not only avoids the risk of breaking the good properties of the trunk branch, but also further improves the performance of the HAANet network. In Table 1, the HAANets perform better than the HANets, showing the effectiveness of the residual attention unit.
Segmentation results: by combining the HAU and the AU, HAANet-3 achieves the highest Dice of 93.00. The Dice values of UNet-2 and HAANet-3 on the ten validation patients are shown in FIG. 5, where the vertical axis represents the Dice value and the horizontal axis represents the 10 patient samples (A–J). It can be observed that on almost all validation data the Dice of our HAANet-3 is somewhat higher than that of UNet-2, except that on sample I the Dice value of HAANet-3 is slightly lower than that of UNet-2. This comparison further illustrates the promise of the proposed HAANet for three-dimensional left atrium segmentation.
FIG. 6(a) compares the segmentation results of UNet-2 and HAANet-3, illustrating the automatic 2D LA segmentation results of the proposed HAANet-3 and UNet-2. It can be seen that HAANet-3 has a clear advantage over UNet-2, especially on the hard parts of the MRI. FIG. 6(b) shows the three-dimensional segmentation results, depicting the 3D LA segmentation results reconstructed with ITK-SNAP for 6 different patients; the upper row shows the ground truth and the lower row shows the 3D segmentation results of HAANet-3.
In this embodiment, cropping the ROI region with the U-Net network model and the label information makes model training more accurate. In addition, an ROI region containing the three-dimensional left atrium is segmented from the cardiac magnetic resonance image to be segmented and used as the input of the network model, which greatly reduces the amount of computation as well as the interference from the image background, thereby improving the efficiency and accuracy of three-dimensional left atrium segmentation. The ROI region is segmented with the hierarchical aggregation network model, in which hierarchical aggregation modules iteratively fuse successive layers of different depths within the same stage, strengthening the fusion of shallow and deep features and yielding better fused information; in addition, an attention unit is attached as a mask branch to each stage of the encoder path, so that shallow spatial information progressively enhances deep contour features rich in semantic information. By combining hierarchical aggregation with the attention mechanism, the efficiency and accuracy of three-dimensional left atrium segmentation are greatly improved.
It should be understood that the numbering of the steps in the above embodiments does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic and should not constitute any limitation on the implementation of the embodiments of the present application.
Embodiment 3
Referring to FIG. 7, which is a schematic structural block diagram of a three-dimensional left atrium segmentation apparatus provided by an embodiment of the present application, the apparatus may include:
an acquisition module 71, configured to acquire a cardiac magnetic resonance image to be segmented;
an ROI region segmentation module 72, configured to segment an ROI region from the cardiac magnetic resonance image to be segmented, the ROI region being a region containing the three-dimensional left atrium;
a segmentation module 73, configured to input the ROI region into the pre-trained hierarchical aggregation network model to obtain a segmentation result of the cardiac magnetic resonance image to be segmented;
wherein the hierarchical aggregation network model is a U-Net convolutional neural network model including an encoder path and a decoder path, the encoder path includes at least one hierarchical aggregation module, and the hierarchical aggregation module includes a hierarchical aggregation unit serving as the trunk branch and an attention unit serving as the mask branch.
In a possible implementation, the ROI region segmentation module includes:
an ROI region detection unit, configured to detect the cardiac magnetic resonance image to be segmented through a pre-trained U-Net convolutional neural network to obtain an ROI region detection result, wherein each stage of the pre-trained U-Net convolutional neural network has two convolutional layers;
an ROI region cropping unit, configured to crop the cardiac magnetic resonance image to be segmented according to the ROI region detection result to obtain the ROI region.
In a possible implementation, the hierarchical aggregation unit includes a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer and a fifth convolutional layer, each followed by batch normalization and a rectified linear unit;
wherein the first convolutional layer and the second convolutional layer are concatenated and the concatenation result is fed into the third convolutional layer; a convolution operation is performed on the third convolutional layer to obtain the fourth convolutional layer, and the third and fourth convolutional layers are concatenated to generate the fifth convolutional layer;
the attention unit includes a sixth convolutional layer, a seventh convolutional layer and a sigmoid layer, the sixth convolutional layer being connected to the seventh convolutional layer through batch normalization and the rectified linear unit in sequence; the sigmoid layer is
M_{i,c}(χ) = 1 / (1 + exp(−χ_{i,c})),
where exp denotes the exponential function and χ_{i,c} denotes the i-th value of the feature map on the c-th channel;
the output of the hierarchical aggregation module is
F_{i,c}(χ) ⊕ (M_{i,c}(χ) ⊗ F_{i,c}(χ)),
where M_{i,c}(χ) denotes the output of the mask branch, whose range is [0, 1], F_{i,c}(χ) denotes the output of the trunk branch, ⊗ denotes the dot (element-wise) product, and ⊕ denotes element-wise summation.
In a possible implementation, the apparatus further includes:
a training sample acquisition module, configured to acquire training samples and label information corresponding to the training samples;
a target ROI region segmentation module, configured to segment corresponding target ROI regions from the training samples according to the label information;
a training module, configured to perform model training on the pre-established hierarchical aggregation network model according to the target ROI regions.
In a possible implementation, the target ROI region segmentation module includes:
a first adjustment unit, configured to uniformly resize the training samples to a first preset shape;
a detection unit, configured to input the training samples of the first preset shape into the pre-trained U-Net convolutional neural network to obtain target ROI region detection results;
a second adjustment unit, configured to resize each training sample from the first preset shape back to its original shape;
a segmentation unit, configured to segment the corresponding target ROI region from the training sample according to the ROI region detection result and the label information.
In a possible implementation, the segmentation unit includes:
a judgment subunit, configured to judge whether the target ROI region detection result contains the label information;
an expansion subunit, configured to expand the target ROI region into a preset target region when the target ROI region detection result does not contain the label information;
a first cropping subunit, configured to crop the preset target region out of the training sample and use the preset target region as the target ROI region;
a second cropping subunit, configured to crop the target ROI region out of the training sample when the target ROI region detection result contains the label information.
It should be noted that this embodiment corresponds to the above embodiments of the three-dimensional left atrium segmentation method; for related descriptions, please refer to the corresponding content above, which is not repeated here.
In this embodiment, an ROI region containing the three-dimensional left atrium is segmented from the cardiac magnetic resonance image to be segmented and used as the input of the network model, which greatly reduces the amount of computation as well as the interference from the image background, thereby improving the efficiency and accuracy of three-dimensional left atrium segmentation. The ROI region is segmented with the hierarchical aggregation network model, in which hierarchical aggregation modules iteratively fuse successive layers of different depths within the same stage, strengthening the fusion of shallow and deep features and yielding better fused information; in addition, an attention unit is attached as a mask branch to each stage of the encoder path, so that shallow spatial information progressively enhances deep contour features rich in semantic information. By combining hierarchical aggregation with the attention mechanism, the efficiency and accuracy of three-dimensional left atrium segmentation are greatly improved.
Embodiment 4
FIG. 8 is a schematic diagram of a terminal device provided by an embodiment of the present application. As shown in FIG. 8, the terminal device 8 of this embodiment includes: a processor 80, a memory 81, and a computer program 82 stored in the memory 81 and executable on the processor 80. When executing the computer program 82, the processor 80 implements the steps in each of the above embodiments of the three-dimensional left atrium segmentation method, for example, steps S101 to S103 shown in FIG. 1; alternatively, when executing the computer program 82, the processor 80 implements the functions of the modules or units in each of the above apparatus embodiments, for example, the functions of modules 71 to 73 shown in FIG. 7.
Exemplarily, the computer program 82 may be divided into one or more modules or units, which are stored in the memory 81 and executed by the processor 80 to implement the present application. The one or more modules or units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 82 in the terminal device 8. For example, the computer program 82 may be divided into an acquisition module, an ROI region segmentation module and a segmentation module, whose specific functions are as follows:
the acquisition module is configured to acquire a cardiac magnetic resonance image to be segmented; the ROI region segmentation module is configured to segment an ROI region from the cardiac magnetic resonance image to be segmented, the ROI region being a region containing the three-dimensional left atrium;
the segmentation module is configured to input the ROI region into the pre-trained hierarchical aggregation network model to obtain a segmentation result of the cardiac magnetic resonance image to be segmented, wherein the hierarchical aggregation network model is a U-Net convolutional neural network model including an encoder path and a decoder path, the encoder path includes at least one hierarchical aggregation module, and the hierarchical aggregation module includes a hierarchical aggregation unit serving as the trunk branch and an attention unit serving as the mask branch.
The terminal device 8 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The terminal device may include, but is not limited to, the processor 80 and the memory 81. Those skilled in the art can understand that FIG. 8 is only an example of the terminal device 8 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, combine certain components, or use different components; for example, it may also include input and output devices, network access devices, buses, and the like.
The so-called processor 80 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 81 may be an internal storage unit of the terminal device 8, such as a hard disk or internal memory of the terminal device 8. The memory 81 may also be an external storage device of the terminal device 8, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the terminal device 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the terminal device 8. The memory 81 is used to store the computer program and other programs and data required by the terminal device. The memory 81 may also be used to temporarily store data that has been output or is to be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the above functional units and modules is only used as an example; in practical applications, the above functions can be assigned to different functional units and modules as needed, i.e., the internal structure of the apparatus is divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis. For parts not detailed or recorded in a certain embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be regarded as beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus, terminal device and method may be implemented in other ways. For example, the apparatus and terminal device embodiments described above are merely illustrative; for example, the division of the modules or units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module or unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application may also be implemented by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, it can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present application rather than to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or make equivalent substitutions for some of the technical features; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and should all be included within the protection scope of the present application.

Claims (10)

  1. A three-dimensional left atrium segmentation method, characterized by comprising:
    acquiring a cardiac magnetic resonance image to be segmented;
    segmenting an ROI region from the cardiac magnetic resonance image to be segmented, the ROI region being a region containing the three-dimensional left atrium;
    inputting the ROI region into a pre-trained hierarchical aggregation network model to obtain a segmentation result of the cardiac magnetic resonance image to be segmented;
    wherein the hierarchical aggregation network model is a U-Net convolutional neural network model comprising an encoder path and a decoder path, the encoder path comprises at least one hierarchical aggregation module, and the hierarchical aggregation module comprises a hierarchical aggregation unit serving as a trunk branch and an attention unit serving as a mask branch.
  2. The three-dimensional left atrium segmentation method according to claim 1, characterized in that the segmenting an ROI region from the cardiac magnetic resonance image to be segmented comprises:
    detecting the cardiac magnetic resonance image to be segmented through a pre-trained U-Net convolutional neural network to obtain an ROI region detection result, wherein each stage of the pre-trained U-Net convolutional neural network has two convolutional layers;
    cropping the cardiac magnetic resonance image to be segmented according to the ROI region detection result to obtain the ROI region.
  3. The three-dimensional left atrium segmentation method according to claim 1, characterized in that the hierarchical aggregation unit comprises a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer and a fifth convolutional layer, each convolutional layer being followed by batch normalization and a rectified linear unit;
    wherein the first convolutional layer and the second convolutional layer are concatenated and the concatenation result is fed into the third convolutional layer; a convolution operation is performed on the third convolutional layer to obtain the fourth convolutional layer, and the third convolutional layer and the fourth convolutional layer are concatenated to generate the fifth convolutional layer;
    the attention unit comprises a sixth convolutional layer, a seventh convolutional layer and a sigmoid layer, the sixth convolutional layer being connected to the seventh convolutional layer through batch normalization and the rectified linear unit in sequence; the sigmoid layer is
    M_{i,c}(χ) = 1 / (1 + exp(−χ_{i,c})),
    where exp denotes the exponential function and χ_{i,c} denotes the i-th value of the feature map on the c-th channel;
    the output of the hierarchical aggregation module is
    F_{i,c}(χ) ⊕ (M_{i,c}(χ) ⊗ F_{i,c}(χ)),
    where M_{i,c}(χ) denotes the output of the mask branch, whose range is [0, 1], F_{i,c}(χ) denotes the output of the trunk branch, ⊗ denotes the dot (element-wise) product, and ⊕ denotes element-wise summation.
  4. The three-dimensional left atrium segmentation method according to any one of claims 1 to 3, characterized by further comprising, before the acquiring a cardiac magnetic resonance image to be segmented:
    acquiring training samples and label information corresponding to the training samples;
    segmenting corresponding target ROI regions from the training samples according to the label information;
    performing model training on the pre-established hierarchical aggregation network model according to the target ROI regions.
  5. The three-dimensional left atrium segmentation method according to claim 4, characterized in that the segmenting corresponding target ROI regions from the training samples according to the label information comprises:
    uniformly resizing the training samples to a first preset shape;
    inputting the training samples of the first preset shape into a pre-trained U-Net convolutional neural network to obtain target ROI region detection results;
    resizing each training sample from the first preset shape back to its original shape;
    segmenting the corresponding target ROI region from the training sample according to the ROI region detection result and the label information.
  6. The three-dimensional left atrium segmentation method according to claim 5, characterized in that the segmenting the corresponding target ROI region from the training sample according to the target ROI region detection result and the label information comprises:
    judging whether the target ROI region detection result contains the label information;
    when the target ROI region detection result does not contain the label information, expanding the target ROI region into a preset target region;
    cropping the preset target region out of the training sample and using the preset target region as the target ROI region;
    when the target ROI region detection result contains the label information, cropping the target ROI region out of the training sample.
  7. A three-dimensional left atrium segmentation apparatus, characterized by comprising:
    an acquisition module, configured to acquire a cardiac magnetic resonance image to be segmented;
    an ROI region segmentation module, configured to segment an ROI region from the cardiac magnetic resonance image to be segmented, the ROI region being a region containing the three-dimensional left atrium;
    a segmentation module, configured to input the ROI region into a pre-trained hierarchical aggregation network model to obtain a segmentation result of the cardiac magnetic resonance image to be segmented;
    wherein the hierarchical aggregation network model is a U-Net convolutional neural network model comprising an encoder path and a decoder path, the encoder path comprises at least one hierarchical aggregation module, and the hierarchical aggregation module comprises a hierarchical aggregation unit serving as a trunk branch and an attention unit serving as a mask branch.
  8. The three-dimensional left atrium segmentation apparatus according to claim 7, characterized in that the ROI region segmentation module comprises:
    an ROI region detection unit, configured to detect the cardiac magnetic resonance image to be segmented through a pre-trained U-Net convolutional neural network to obtain an ROI region detection result, wherein each stage of the pre-trained U-Net convolutional neural network has two convolutional layers;
    an ROI region cropping unit, configured to crop the cardiac magnetic resonance image to be segmented according to the ROI region detection result to obtain the ROI region.
  9. A terminal device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the steps of the three-dimensional left atrium segmentation method according to any one of claims 1 to 6 are implemented.
  10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the three-dimensional left atrium segmentation method according to any one of claims 1 to 6 are implemented.
PCT/CN2019/124311 2018-12-14 2019-12-10 三维左心房分割方法、装置、终端设备及存储介质 WO2020119679A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811535118.9 2018-12-14
CN201811535118.9A CN109801294A (zh) 2018-12-14 2018-12-14 三维左心房分割方法、装置、终端设备及存储介质

Publications (1)

Publication Number Publication Date
WO2020119679A1 true WO2020119679A1 (zh) 2020-06-18

Family

ID=66556774

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/124311 WO2020119679A1 (zh) 2018-12-14 2019-12-10 三维左心房分割方法、装置、终端设备及存储介质

Country Status (2)

Country Link
CN (1) CN109801294A (zh)
WO (1) WO2020119679A1 (zh)


Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109801294A (zh) * 2018-12-14 2019-05-24 深圳先进技术研究院 三维左心房分割方法、装置、终端设备及存储介质
CN110210487A (zh) * 2019-05-30 2019-09-06 上海商汤智能科技有限公司 一种图像分割方法及装置、电子设备和存储介质
CN110288609B (zh) * 2019-05-30 2021-06-08 南京师范大学 一种注意力机制引导的多模态全心脏图像分割方法
CN110310280B (zh) * 2019-07-10 2021-05-11 广东工业大学 肝胆管及结石的图像识别方法、系统、设备及存储介质
CN110428431B (zh) * 2019-07-12 2022-12-16 广东省人民医院(广东省医学科学院) 一种心脏医学图像的分割方法、装置、设备及存储介质
CN110599502B (zh) * 2019-09-06 2023-07-11 江南大学 一种基于深度学习的皮肤病变分割方法
CN110570416B (zh) * 2019-09-12 2020-06-30 杭州海睿博研科技有限公司 多模态心脏图像的可视化和3d打印的方法
CN110853045B (zh) * 2019-09-24 2022-02-11 西安交通大学 基于核磁共振图像的血管壁分割方法、设备及存储介质
CN110910364B (zh) * 2019-11-16 2023-04-28 应急管理部沈阳消防研究所 基于深度神经网络的三切面火场易引发起火电器设备检测方法
CN111281387B (zh) * 2020-03-09 2024-03-26 中山大学 基于人工神经网络的左心房与心房瘢痕的分割方法及装置
CN111553895B (zh) * 2020-04-24 2022-08-02 中国人民解放军陆军军医大学第二附属医院 基于多尺度细粒度的磁共振左心房分割方法
CN112435247B (zh) * 2020-11-30 2022-03-25 中国科学院深圳先进技术研究院 一种卵圆孔未闭检测方法、系统、终端以及存储介质
CN112508949B (zh) * 2021-02-01 2021-05-11 之江实验室 一种spect三维重建图像左心室自动分割的方法
CN114155208B (zh) * 2021-11-15 2022-07-08 中国科学院深圳先进技术研究院 一种基于深度学习的心房颤动评估方法和装置
CN114066913B (zh) * 2022-01-12 2022-04-22 广东工业大学 一种心脏图像分割方法及系统
CN116385468B (zh) * 2023-06-06 2023-09-01 浙江大学 一种基于斑马鱼心脏参数图像分析软件生成的系统
CN117456191B (zh) * 2023-12-15 2024-03-08 武汉纺织大学 一种基于三分支网络结构的复杂环境下语义分割方法


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108537784A (zh) * 2018-03-30 2018-09-14 四川元匠科技有限公司 一种基于深度学习的ct图肺结节检测方法
CN108537793A (zh) * 2018-04-17 2018-09-14 电子科技大学 一种基于改进的u-net网络的肺结节检测方法
CN108765369A (zh) * 2018-04-20 2018-11-06 平安科技(深圳)有限公司 肺结节的检测方法、装置、计算机设备和存储介质
CN108615236A (zh) * 2018-05-08 2018-10-02 上海商汤智能科技有限公司 一种图像处理方法及电子设备
CN109801294A (zh) * 2018-12-14 2019-05-24 深圳先进技术研究院 三维左心房分割方法、装置、终端设备及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OKTAY, OZAN ET AL.: "Attention U-Net: Learning Where to Look for the Pancreas", 1ST CONFERENCE ON MEDICAL IMAGING WITH DEEP LEARNING (MIDL 2018), 20 May 2018 (2018-05-20), XP081233130 *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111754534B (zh) * 2020-07-01 2024-05-31 杭州脉流科技有限公司 基于深度神经网络的ct左心室短轴图像分割方法、装置、计算机设备和存储介质
CN111754534A (zh) * 2020-07-01 2020-10-09 杭州脉流科技有限公司 基于深度神经网络的ct左心室短轴图像分割方法、装置、计算机设备和存储介质
CN111986204A (zh) * 2020-07-23 2020-11-24 中山大学 一种息肉分割方法、装置及存储介质
CN111986204B (zh) * 2020-07-23 2023-06-16 中山大学 一种息肉分割方法、装置及存储介质
CN111968122A (zh) * 2020-08-27 2020-11-20 广东工业大学 一种基于卷积神经网络的纺织材料ct图像分割方法和装置
CN111968122B (zh) * 2020-08-27 2023-07-28 广东工业大学 一种基于卷积神经网络的纺织材料ct图像分割方法和装置
CN112348780A (zh) * 2020-10-26 2021-02-09 首都医科大学附属北京安贞医院 一种胎儿心脏的测量方法及装置
CN112766313B (zh) * 2020-12-29 2023-11-14 厦门贝启科技有限公司 基于U-net结构的水晶体分割及定位方法、装置、设备和介质
CN112766313A (zh) * 2020-12-29 2021-05-07 厦门贝启科技有限公司 基于U-net结构的水晶体分割及定位方法、装置、设备和介质
CN112966687B (zh) * 2021-02-01 2024-01-19 深圳市优必选科技股份有限公司 图像分割模型训练方法、装置及通信设备
CN112966687A (zh) * 2021-02-01 2021-06-15 深圳市优必选科技股份有限公司 图像分割模型训练方法、装置及通信设备
CN112802034B (zh) * 2021-02-04 2024-04-12 精英数智科技股份有限公司 图像分割、识别方法、模型构建方法、装置及电子设备
CN112802034A (zh) * 2021-02-04 2021-05-14 精英数智科技股份有限公司 图像分割、识别方法、模型构建方法、装置及电子设备
CN112967294A (zh) * 2021-03-11 2021-06-15 西安智诊智能科技有限公司 一种肝脏ct图像分割方法及系统
CN113112475A (zh) * 2021-04-13 2021-07-13 五邑大学 一种基于机器学习的中医耳部五脏区域分割方法和装置
CN113112475B (zh) * 2021-04-13 2023-04-18 五邑大学 一种基于机器学习的中医耳部五脏区域分割方法和装置
CN113223014B (zh) * 2021-05-08 2023-04-28 中国科学院自动化研究所 基于数据增强的脑部图像分析系统、方法及设备
CN113223014A (zh) * 2021-05-08 2021-08-06 中国科学院自动化研究所 基于数据增强的脑部图像分析系统、方法及设备
CN113592771A (zh) * 2021-06-24 2021-11-02 深圳大学 一种图像分割方法
CN113592771B (zh) * 2021-06-24 2023-12-15 深圳大学 一种图像分割方法
CN113505535B (zh) * 2021-07-08 2023-11-10 重庆大学 基于门控自适应分层注意力单元网络的电机寿命预测方法
CN113505535A (zh) * 2021-07-08 2021-10-15 重庆大学 基于门控自适应分层注意力单元网络的电机寿命预测方法
CN113808143B (zh) * 2021-09-06 2024-05-17 沈阳东软智能医疗科技研究院有限公司 图像分割方法、装置、可读存储介质及电子设备
CN113808143A (zh) * 2021-09-06 2021-12-17 沈阳东软智能医疗科技研究院有限公司 图像分割方法、装置、可读存储介质及电子设备
CN114663431B (zh) * 2022-05-19 2022-08-30 浙江大学 基于强化学习和注意力的胰腺肿瘤图像分割方法及系统
CN114663431A (zh) * 2022-05-19 2022-06-24 浙江大学 基于强化学习和注意力的胰腺肿瘤图像分割方法及系统
CN116797787B (zh) * 2023-05-22 2024-01-02 中国地质大学(武汉) 基于跨模态融合与图神经网络的遥感影像语义分割方法
CN116797787A (zh) * 2023-05-22 2023-09-22 中国地质大学(武汉) 基于跨模态融合与图神经网络的遥感影像语义分割方法
CN117079080A (zh) * 2023-10-11 2023-11-17 青岛美迪康数字工程有限公司 冠脉cta智能分割模型的训练优化方法、装置和设备
CN117079080B (zh) * 2023-10-11 2024-01-30 青岛美迪康数字工程有限公司 冠脉cta智能分割模型的训练优化方法、装置和设备
CN117349714A (zh) * 2023-12-06 2024-01-05 中南大学 阿尔茨海默症医学图像的分类方法、系统、设备及介质
CN117349714B (zh) * 2023-12-06 2024-02-13 中南大学 阿尔茨海默症医学图像的分类方法、系统、设备及介质

Also Published As

Publication number Publication date
CN109801294A (zh) 2019-05-24

Similar Documents

Publication Publication Date Title
WO2020119679A1 (zh) 三维左心房分割方法、装置、终端设备及存储介质
US10565707B2 (en) 3D anisotropic hybrid network: transferring convolutional features from 2D images to 3D anisotropic volumes
Lei et al. Ultrasound prostate segmentation based on multidirectional deeply supervised V‐Net
WO2021238438A1 (zh) 肿瘤图像的处理方法和装置、电子设备、存储介质
CN112508965B (zh) 医学影像中正常器官的轮廓线自动勾画系统
CN111105424A (zh) 淋巴结自动勾画方法及装置
US11315254B2 (en) Method and device for stratified image segmentation
WO2022213654A1 (zh) 一种超声图像的分割方法、装置、终端设备和存储介质
CN111583246A (zh) 利用ct切片图像对肝脏肿瘤进行分类的方法
CN111260701B (zh) 多模态视网膜眼底图像配准方法及装置
CN113159040A (zh) 医学图像分割模型的生成方法及装置、系统
CN113538209A (zh) 一种多模态医学影像配准方法、配准系统、计算设备和存储介质
Xie et al. Contextual loss based artifact removal method on CBCT image
EP3608872B1 (en) Image segmentation method and system
Zhang et al. Topology-preserving segmentation network: A deep learning segmentation framework for connected component
WO2020007026A1 (zh) 分割模型训练方法、装置及计算机可读存储介质
KR102280047B1 (ko) 딥 러닝 기반 종양 치료 반응 예측 방법
WO2024051018A1 (zh) 一种pet参数图像的增强方法、装置、设备及存储介质
WO2021081771A1 (zh) 基于vrds ai医学影像的心脏冠脉的分析方法和相关装置
CN116309806A (zh) 一种基于CSAI-Grid RCNN的甲状腺超声图像感兴趣区域定位方法
CN114419375B (zh) 图像分类方法、训练方法、装置、电子设备以及存储介质
WO2022227193A1 (zh) 肝脏区域分割方法、装置、电子设备及存储介质
WO2021081839A1 (zh) 基于vrds 4d的病情分析方法及相关产品
Henderson et al. Accurate segmentation of head and neck radiotherapy CT scans with 3D CNNs: consistency is key
WO2021081850A1 (zh) 基于vrds 4d医学影像的脊椎疾病识别方法及相关装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19895984

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.11.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19895984

Country of ref document: EP

Kind code of ref document: A1