CN114612404B - Blood vessel segmentation method and device, storage medium and electronic equipment - Google Patents
- Publication number
- CN114612404B CN114612404B CN202210207828.9A CN202210207828A CN114612404B CN 114612404 B CN114612404 B CN 114612404B CN 202210207828 A CN202210207828 A CN 202210207828A CN 114612404 B CN114612404 B CN 114612404B
- Authority
- CN
- China
- Prior art keywords
- dimensional
- blood vessel
- image
- layer
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/155—Segmentation; Edge detection involving morphological operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Abstract
The invention provides a blood vessel segmentation method and device, a storage medium and electronic equipment. The blood vessel segmentation method comprises the following steps: acquiring a three-dimensional blood vessel image; cropping the three-dimensional blood vessel image to obtain a plurality of three-dimensional cut blocks with preset sizes; inputting each three-dimensional cut block into a pre-trained convolutional neural network model to obtain a blood vessel segmentation result corresponding to each cut block, and splicing the blood vessel segmentation results of the cut blocks to obtain the blood vessel segmentation result of the three-dimensional blood vessel image. The invention realizes fully automatic and accurate segmentation of three-dimensional blood vessel images with high-quality segmentation results, makes full use of the image characteristics, and can be applied to segmentation scenarios involving high-resolution images.
Description
Technical Field
The present invention relates to the field of image processing, and in particular, to a method and apparatus for segmenting blood vessels, a storage medium, and an electronic device.
Background
According to the latest statistics of the World Health Organization, stroke has become the second leading cause of death worldwide, and carotid atherosclerosis is an important cause of stroke. Accurately assessing the vessel morphology of the carotid artery and performing quantitative analysis and diagnosis are helpful for stroke prevention.
Time-of-flight magnetic resonance angiography (TOF MRA) is a magnetic resonance angiography technique that, based on the inflow enhancement effect, makes static tissue produce a low signal while flowing blood produces a high signal. It has the advantages of being safe and radiation-free, with fast imaging, high contrast, high spatial resolution and large coverage, and is an important clinical examination for carotid lesions. Vessel segmentation and visualization based on three-dimensional TOF MRA images are key to describing vessel morphology. With the growing number of patients and the shortage of specialists, computer-aided segmentation of blood vessels in magnetic resonance images has become an important development direction.
With the continuous improvement of traditional methods and the development of artificial intelligence, several fully automatic blood vessel segmentation methods based on two-dimensional or three-dimensional TOF MRA images have been proposed, but these methods struggle to mine the image characteristics of TOF MRA, and their segmentation results are unsatisfactory.
In the related art, fully automatic blood vessel segmentation methods based on two-dimensional or three-dimensional TOF MRA images fall mainly into traditional algorithms and deep learning algorithms. Traditional algorithms use an Otsu threshold to separate foreground and background, fit a mixture distribution model for vessel segmentation, or automatically select seed points for region growing; deep learning algorithms mainly learn and model the segmentation task based on U-Net or 3D U-Net models.
However, the prior art has the following disadvantages:
a. Traditional fully automatic segmentation methods have low accuracy and high requirements on image quality.
b. Existing deep learning segmentation methods do not make full use of the image characteristics of TOF MRA, so the improvement in accuracy is very limited.
c. The models have poor portability and are easily limited by computer hardware, making them difficult to apply to the segmentation of high-resolution images.
Disclosure of Invention
In order to solve the above problems, embodiments of the invention provide a blood vessel segmentation method and device, a storage medium and electronic equipment, which can realize fully automatic and accurate segmentation of three-dimensional blood vessel images with high-quality segmentation results, make full use of the image characteristics, and can be applied to segmentation scenarios involving high-resolution images.
In a first aspect, an embodiment of the present invention provides a blood vessel segmentation method, including:
acquiring a three-dimensional blood vessel image;
Cutting the three-dimensional blood vessel image to obtain a plurality of three-dimensional cut blocks with preset sizes;
And respectively inputting each three-dimensional block into a pre-trained convolutional neural network model to obtain a blood vessel segmentation result corresponding to each three-dimensional block, and splicing the blood vessel segmentation results corresponding to each three-dimensional block to obtain a blood vessel segmentation result of the three-dimensional blood vessel image.
In some embodiments, the convolutional neural network model comprises a 3D U-Net model; the 3D U-Net model comprises four coding layers and four decoding layers, and each coding layer and each decoding layer introduces a residual structure, so that the feature map of the upper layer is combined with the convolved feature map of the current layer and then passed to the next layer.
In some embodiments, a three-dimensional cut block down-sampled to the resolution of each layer is added to each skip connection of the 3D U-Net model, so that the feature map output by each encoder layer is combined with the features of the three-dimensional cut block down-sampled to that layer and then sent to the decoder of the corresponding level.
In some embodiments, the method further comprises:
acquiring an original three-dimensional blood vessel image and a corresponding marked blood vessel marking image;
Cutting the original three-dimensional blood vessel image and the corresponding marked blood vessel marking image to obtain a plurality of three-dimensional cut blocks with preset sizes;
and training a convolutional neural network model by taking the three-dimensional cut of the original three-dimensional blood vessel image as input and the three-dimensional cut of the marked blood vessel marking image as output.
In some embodiments, the cropping the original three-dimensional blood vessel image and the corresponding annotated blood vessel labeling image to obtain a plurality of three-dimensional cut blocks with preset sizes includes:
Randomly translating the original three-dimensional blood vessel image and the corresponding marked blood vessel marking image with different step sizes in three directions, by way of data augmentation, and cropping out a plurality of three-dimensional cut blocks with preset sizes.
In some embodiments, before training the convolutional neural network model with the three-dimensional cut of the original three-dimensional blood vessel image as input and the three-dimensional cut of the labeled blood vessel labeling image as output, the method further comprises:
performing Z-score normalization on each three-dimensional cut block.
In some embodiments, the Z-score normalization of each three-dimensional cut piece comprises:
subtracting from each voxel value of each three-dimensional cut block the mean of all voxel values of the current cut block, dividing the result by the standard deviation of all voxel values of the current cut block, and replacing the current voxel value with the obtained result.
In a second aspect, an embodiment of the present invention provides a blood vessel segmentation device, including:
the image acquisition module is used for acquiring three-dimensional blood vessel images;
The image cutting module is used for cutting the three-dimensional blood vessel image to obtain a plurality of three-dimensional cut blocks with preset sizes;
the blood vessel segmentation module is used for inputting each three-dimensional block into a pre-trained convolutional neural network model respectively to obtain blood vessel segmentation results corresponding to each three-dimensional block, and the blood vessel segmentation results corresponding to each three-dimensional block are spliced to obtain blood vessel segmentation results of the three-dimensional blood vessel image.
In a third aspect, embodiments of the present invention provide a computer-readable storage medium comprising: the computer readable storage medium has stored thereon a computer program which, when executed by one or more processors, implements a vessel segmentation method as described in the first aspect.
In a fourth aspect, an embodiment of the present invention provides an electronic device, including: comprising a memory and one or more processors, the memory having stored thereon a computer program which, when executed by the one or more processors, implements the vessel segmentation method as described in the first aspect.
Compared with the prior art, one or more embodiments of the invention have at least the following advantages:
With the blood vessel segmentation method and device, storage medium and electronic equipment described above, because training and prediction are based on three-dimensional cut blocks, the improved convolutional neural network can train on and predict images of different sizes and output a segmentation mask image of the same size as the input image, solving the problem of generalizing model training and prediction across image sizes. The convolutional neural network structure is effectively improved and more image features are introduced, so that the image data are learned and predicted systematically and purposefully, improving the accuracy of vessel segmentation. By learning and modeling three-dimensional blood vessel images with the improved convolutional neural network, automatic and correct vessel segmentation of new image data is realized, which helps doctors obtain the morphological characteristics of blood vessels and can greatly improve their working efficiency. The method can also be generalized to 3D TOF MRA vessel segmentation of other body parts.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate certain embodiments of the present invention and therefore should not be considered as limiting the scope.
FIG. 1 is a flow chart of a method for segmenting blood vessels according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a 3D U-Net model structure according to an embodiment of the present invention;
Fig. 3 is a block diagram of a blood vessel segmentation device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
Example 1
Fig. 1 is a flowchart of a blood vessel segmentation method according to an embodiment of the present invention, and as shown in fig. 1, the blood vessel segmentation method according to the present embodiment at least includes steps S101 to S103:
step S101, acquiring a three-dimensional blood vessel image.
In practical applications, the three-dimensional vessel image may be, but is not limited to, a three-dimensional TOF MRA image.
Step S102, cutting the three-dimensional blood vessel image to obtain a plurality of three-dimensional cut blocks with preset sizes.
Because of the limitation of computer hardware (such as the memory size of the graphics card), three-dimensional cut blocks of a size that the hardware can handle are cropped from the original three-dimensional TOF MRA image; the selected cut block size should be as large as possible, and the preset size can be determined according to the actual situation. After cropping, the convolutional neural network model predicts the segmentation result of each three-dimensional cut block, and the predicted segmentation results are spliced to obtain a complete segmentation mask image as the blood vessel segmentation result.
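A minimal sketch of this crop-predict-stitch step is given below. It assumes NumPy arrays, a hypothetical `predict` callable standing in for the trained model, and an illustrative 64×64×64 patch size; none of these values come from the patent.

```python
import numpy as np

def segment_volume(volume, predict, patch=(64, 64, 64)):
    """Crop a 3D volume into fixed-size cut blocks, segment each block with
    predict(), and stitch the per-block masks back into a full mask."""
    D, H, W = volume.shape
    mask = np.zeros(volume.shape, dtype=np.uint8)
    for z in range(0, D, patch[0]):
        for y in range(0, H, patch[1]):
            for x in range(0, W, patch[2]):
                block = volume[z:z + patch[0], y:y + patch[1], x:x + patch[2]]
                # predict() is assumed to return a binary vessel mask with the
                # same shape as block (edge blocks may be smaller than patch)
                mask[z:z + patch[0], y:y + patch[1], x:x + patch[2]] = predict(block)
    return mask
```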
Step S103, inputting each three-dimensional block into a pre-trained convolutional neural network model respectively to obtain a blood vessel segmentation result corresponding to each three-dimensional block, and splicing the blood vessel segmentation results corresponding to each three-dimensional block to obtain a blood vessel segmentation result of the three-dimensional blood vessel image.
After each three-dimensional cut block cropped from the three-dimensional TOF MRA image is input into the convolutional neural network model, features of different levels are continuously extracted through convolution and pooling operations to complete the final prediction. In the convolutional neural network model, the earlier convolutional layers extract preliminary local information of the (three-dimensional cut block) image, while the later convolutional layers extract higher-level global features; these features can be used to distinguish the boundaries between blood vessels and other tissues and to capture the continuity between vessels.
According to the method, because training and prediction are based on three-dimensional cut blocks, the convolutional neural network can be trained on and predict images of different sizes and outputs a segmentation mask image of the same size as the input image, which solves the problem of generalizing model training and prediction across image sizes.
The convolutional neural network model described above includes a 3D U-Net model, as shown in figure 2.
The 3D U-Net model comprises four coding layers and four decoding layers, and each coding layer and each decoding layer introduces a residual structure, so that the feature map of the upper layer is combined with the convolved feature map of the current layer and then passed to the next layer. Each convolution operation uses a ReLU activation function.
In this embodiment, one encoding layer and one decoding layer are added to the original 3D U-Net structure, giving four encoding layers and four decoding layers, so that large-size input images can be handled: the total downsampling rate of the network equals 2 to the 4th power, i.e., 16, and deeper features can be obtained from large-size images. A residual structure is added to the convolution operation of each layer, so that the feature map of the upper layer can be combined with the convolved feature map of the current layer and then passed to the next layer.
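The residual structure described above can be sketched as follows in PyTorch (an assumption: the patent names no framework, kernel sizes or channel widths). The incoming feature map is projected with a 1×1×1 convolution and added to the convolved feature map of the current layer before being passed on.

```python
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    """Sketch of one encoder/decoder level with a residual connection."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_channels, out_channels, kernel_size=3, padding=1),
        )
        # 1x1x1 projection so the upper layer's feature map matches out_channels
        self.skip = nn.Conv3d(in_channels, out_channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # combine the upper layer's feature map with this layer's convolved feature map
        return self.relu(self.conv(x) + self.skip(x))
```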
In some embodiments, a three-dimensional cut block down-sampled to the resolution of each layer is added to each skip connection of the 3D U-Net model, so that the feature map output by each encoder layer is combined with the features of the three-dimensional cut block down-sampled to that layer and then sent to the decoder of the corresponding level.
The input TOF MRA image cut blocks are down-sampled by factors of 1, 2, 4 and 8, respectively, and combined with the feature maps output by each encoder layer, so that the cut blocks' own features together with the features extracted by the encoder are delivered through the skip connections to the decoders of the corresponding levels for training and learning. With this added structure, the network receives more TOF MRA image information, learns the characteristics of the original image better, and its generalization ability on TOF MRA image segmentation tasks is improved.
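The augmented skip connections can be sketched as below (assumptions: PyTorch, trilinear interpolation for down-sampling, concatenation along the channel dimension; the patent only specifies the 1×/2×/4×/8× factors).

```python
import torch
import torch.nn.functional as F

def augmented_skips(patch, encoder_outputs):
    """patch: (N, 1, D, H, W) input cut block.
    encoder_outputs: list of four feature maps, one per encoder level, at 1x, 1/2,
    1/4 and 1/8 of the patch resolution.
    Returns the skip tensors passed to the decoders of the corresponding levels."""
    skips = []
    for feat in encoder_outputs:
        # down-sample the raw patch to this level's spatial size
        # (factors of 1, 2, 4 and 8 when the patch dimensions divide evenly)
        down = F.interpolate(patch, size=feat.shape[2:], mode="trilinear",
                             align_corners=False)
        # concatenate the down-sampled patch with the encoder feature map along channels
        skips.append(torch.cat([feat, down], dim=1))
    return skips
```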
Further, the method further comprises steps S201 to S203:
step S201, an original three-dimensional blood vessel image and a corresponding marked blood vessel marking image are obtained.
The convolutional neural network model in this embodiment is a supervised deep learning model. During training, an original TOF MRA image and a corresponding expert blood vessel annotation image must be provided, and the convolutional neural network learns the high-level relationship between the original images and the corresponding annotation images from a large number of samples. During training and learning, the annotated blood vessel label images are used to train the model; the trained model can then be used to predict new samples and obtain the blood vessel segmentation result of a TOF MRA image (for example, a three-dimensional TOF MRA image of the neck).
Step S202, cutting the original three-dimensional blood vessel image and the corresponding marked blood vessel marking image to obtain a plurality of three-dimensional cut blocks with preset sizes.
Because of the limitation of computer hardware (such as the memory size of the graphics card), three-dimensional cut blocks of a size that the hardware can handle are cropped from the original three-dimensional TOF MRA image; the selected cut block size should be as large as possible, and the preset size can be determined according to the actual situation.
In some embodiments, clipping the original three-dimensional blood vessel image and the corresponding marked blood vessel labeling image to obtain a plurality of three-dimensional cut blocks with preset sizes, including:
randomly translating the original three-dimensional blood vessel image and the corresponding marked blood vessel labeling image with different step sizes in three directions, by way of data augmentation, and cropping out a plurality of three-dimensional cut blocks with preset sizes.
Data augmentation of the original three-dimensional TOF MRA image and the blood vessel labeling image improves the generalization ability of the model and avoids over-fitting during training.
It should be understood that randomly translating with different step sizes in three directions, by way of data augmentation, and cropping out a plurality of three-dimensional cut blocks with preset sizes applies to both the original three-dimensional blood vessel image and the marked blood vessel labeling image.
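A sketch of this paired random cropping is shown below (assumptions: NumPy, a 64×64×64 patch size, uniformly random offsets, and an image at least as large as the patch; the patent does not fix these). The same offsets are applied to the image and its label so the cut blocks stay aligned.

```python
import numpy as np

def random_translated_crop(image, label, patch=(64, 64, 64), rng=None):
    """Crop one aligned pair of cut blocks at a random translation."""
    rng = rng or np.random.default_rng()
    starts = [int(rng.integers(0, dim - p + 1)) for dim, p in zip(image.shape, patch)]
    slices = tuple(slice(s, s + p) for s, p in zip(starts, patch))
    return image[slices], label[slices]
```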
And step S203, training a convolutional neural network model by taking the three-dimensional cut of the original three-dimensional blood vessel image as input and the three-dimensional cut of the marked blood vessel marking image as output.
As an end-to-end training and inference method, the convolutional neural network automatically extracts and processes features inside the model and does not require any manual operation while running. Therefore, the original TOF MRA image can be input into the convolutional neural network, which automatically performs learning and modeling and outputs the segmentation result. Complex convolution modules and post-processing steps are avoided, and an end-to-end segmentation task is realized.
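An end-to-end training step might look like the sketch below (assumptions: PyTorch, an externally supplied optimizer and loss function; the patent does not specify the loss or optimizer).

```python
def train_step(model, optimizer, loss_fn, image_block, label_block):
    """One end-to-end update: image cut block in, labeled cut block as target."""
    model.train()
    optimizer.zero_grad()
    pred = model(image_block)          # (N, 1, D, H, W) vessel probabilities
    loss = loss_fn(pred, label_block)
    loss.backward()
    optimizer.step()
    return loss.item()
```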
In some cases, before training the convolutional neural network model with the three-dimensional cut blocks of the original three-dimensional blood vessel image as input and the three-dimensional cut blocks of the labeled blood vessel labeling image as output, the method further comprises: performing Z-score normalization on each three-dimensional cut block.
Further, each three-dimensional cut is subjected to a Z-score normalization process comprising:
subtracting from each voxel value of each three-dimensional cut block the mean of all voxel values of the current cut block, dividing the result by the standard deviation of all voxel values of the current cut block, and replacing the current voxel value with the obtained result.
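As a sketch, the per-cut-block Z-score normalization reads as follows (the small eps term is an added safeguard against division by zero, not part of the patent's description).

```python
import numpy as np

def zscore_normalize(block, eps=1e-8):
    """Replace each voxel by (voxel - block mean) / block standard deviation."""
    return (block - block.mean()) / (block.std() + eps)
```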
The 3D U-Net model in this example is an improvement over the existing 3D U-Net in at least the following respects:
(1) For high-resolution, large-size input images, one more encoder layer and one more decoder layer are added, deepening the network structure and extracting higher-level features.
(2) A residual structure is introduced into each encoder layer and decoder layer, which avoids the problem of network degradation as the network deepens.
(3) The original image, down-sampled to the same size as the feature map of each level, is added to each skip connection; the added image acts as a gate that enhances regional responses, suppresses information from irrelevant regions, and highlights the target region more strongly, thereby improving the network's ability to learn the characteristics of TOF MRA images and its generalization ability on TOF MRA image segmentation tasks.
The convolutional neural network model obtained through the above training can realize fully automatic and accurate segmentation of three-dimensional blood vessel images, and the segmentation results have high image quality. The image characteristics of TOF MRA are fully utilized and the accuracy is significantly improved; in addition, the model has good portability, is not limited by computer hardware, and can be applied to the segmentation of high-resolution images.
Example two
Fig. 3 is a block diagram of the blood vessel segmentation device according to the present embodiment. As shown in Fig. 3, the blood vessel segmentation device includes:
an image acquisition module 301, configured to acquire a three-dimensional blood vessel image;
the image clipping module 302 is configured to clip the three-dimensional blood vessel image to obtain a plurality of three-dimensional cut blocks with preset sizes;
The blood vessel segmentation module 303 is configured to input each three-dimensional segment into a pre-trained convolutional neural network model, to obtain a blood vessel segmentation result corresponding to each three-dimensional segment, and to splice the blood vessel segmentation results corresponding to each three-dimensional segment to obtain a blood vessel segmentation result of the three-dimensional blood vessel image.
In some embodiments, the convolutional neural network model described above comprises a 3D U-Net model. The 3D U-Net model comprises four coding layers and four decoding layers, and each coding layer and each decoding layer introduces a residual structure, so that the feature map of the upper layer is combined with the convolved feature map of the current layer and then passed to the next layer.
In some embodiments, a three-dimensional cut block down-sampled to the resolution of each layer is added to each skip connection of the 3D U-Net model, so that the feature map output by each encoder layer is combined with the features of the three-dimensional cut block down-sampled to that layer and then sent to the decoder of the corresponding level.
Further, the device may further include:
The model training module is used for acquiring an original three-dimensional blood vessel image and a corresponding marked blood vessel marking image; cutting the original three-dimensional blood vessel image and the corresponding marked blood vessel marking image to obtain a plurality of three-dimensional cut blocks with preset sizes; and training the convolutional neural network model by taking the three-dimensional cut of the original three-dimensional blood vessel image as input and the three-dimensional cut of the marked blood vessel marking image as output.
Cropping the original three-dimensional blood vessel image and the corresponding marked blood vessel marking image to obtain a plurality of three-dimensional cut blocks with preset sizes comprises: randomly translating the original three-dimensional blood vessel image and the corresponding marked blood vessel marking image with different step sizes in three directions, by way of data augmentation, and cropping out a plurality of three-dimensional cut blocks with preset sizes.
In some cases, before training the convolutional neural network model with the three-dimensional cut blocks of the original three-dimensional blood vessel image as input and the three-dimensional cut blocks of the labeled blood vessel labeling image as output, the device further performs: Z-score normalization on each three-dimensional cut block.
Further, each three-dimensional cut is subjected to a Z-score normalization process comprising:
subtracting from each voxel value of each three-dimensional cut block the mean of all voxel values of the current cut block, dividing the result by the standard deviation of all voxel values of the current cut block, and replacing the current voxel value with the obtained result.
It should be appreciated that the apparatus of this embodiment provides all of the benefits of the method embodiments.
It will be appreciated by those skilled in the art that the modules or steps described above may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed across a network of computing devices. They may alternatively be implemented in program code executable by computing devices, so that they can be stored in a storage device and executed by the computing devices, or they may be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. The present invention is not limited to any specific combination of hardware and software.
Example III
The embodiment of the invention provides a computer readable storage medium, and a computer program is stored on the computer readable storage medium, and when the computer program is executed by one or more processors, the blood vessel segmentation method of the first embodiment is realized.
In this embodiment, the storage medium may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
Example IV
The present embodiment provides an electronic device including a memory and one or more processors, the memory storing a computer program that when executed by the one or more processors implements the vessel segmentation method of the first embodiment.
In practical applications, the electronic device may be a terminal device such as a mobile phone or a tablet computer. In this embodiment, the processor may be an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a controller, a microcontroller, a microprocessor, or another electronic component for implementing the method in the above embodiments. For the method implemented when the computer program running on the processor is executed, reference may be made to the specific method embodiments provided above, which are not repeated here.
In the several embodiments provided in the embodiments of the present invention, it should be understood that the disclosed system and method may be implemented in other manners. The system and method embodiments described above are merely illustrative.
It should be noted that, in this document, the terms "first," "second," and the like in the description and the claims of the present application and the above drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Although the embodiments of the present invention are described above, the embodiments are only used for facilitating understanding of the present invention, and are not intended to limit the present invention. Any person skilled in the art can make any modification and variation in form and detail without departing from the spirit and scope of the present disclosure, but the scope of the present disclosure is still subject to the scope of the appended claims.
Claims (6)
1. A method of vessel segmentation, comprising:
Acquiring a three-dimensional blood vessel image based on time-of-flight magnetic resonance angiography imaging;
Cutting the three-dimensional blood vessel image to obtain a plurality of three-dimensional cut blocks with preset sizes;
Inputting each three-dimensional block into a pre-trained convolutional neural network model respectively to obtain a blood vessel segmentation result corresponding to each three-dimensional block, and splicing the blood vessel segmentation results corresponding to each three-dimensional block to obtain a blood vessel segmentation result of the three-dimensional blood vessel image;
the convolutional neural network model comprises four coding layers and four decoding layers, and each coding layer and each decoding layer introduces a residual structure, so that the feature map of the upper layer is combined with the convolved feature map of the current layer and the combined feature map is then transmitted to the next layer;
down-sampling each three-dimensional cut block by factors of 1, 2, 4 and 8, adding the three-dimensional cut block down-sampled to the corresponding layer to each skip connection of the convolutional neural network model, and combining the feature map output by each encoder layer with the features of the three-dimensional cut block down-sampled to that layer before sending it to the decoder of the corresponding level;
acquiring an original three-dimensional blood vessel image and a corresponding marked blood vessel marking image;
randomly translating the original three-dimensional blood vessel image and the corresponding marked blood vessel marking image with different step sizes in three directions, respectively, and cropping out a plurality of three-dimensional cut blocks with preset sizes;
and training a convolutional neural network model by taking the three-dimensional cut of the original three-dimensional blood vessel image as input and the three-dimensional cut of the marked blood vessel marking image as output.
2. The vessel segmentation method as set forth in claim 1, further comprising, prior to training the convolutional neural network model with the three-dimensional cut of the original three-dimensional vessel image as an input and the three-dimensional cut of the labeled vessel labeling image as an output:
performing Z-score normalization on each three-dimensional cut block.
3. The vessel segmentation method according to claim 2, wherein the performing the Z-score normalization on each three-dimensional segment comprises:
subtracting from each voxel value of each three-dimensional cut block the mean of all voxel values of the current cut block, dividing the result by the standard deviation of all voxel values of the current cut block, and replacing the current voxel value with the obtained result.
4. A vascular segmentation device, comprising:
The image acquisition module is used for acquiring a three-dimensional blood vessel image based on time-of-flight magnetic resonance angiography imaging;
The image cutting module is used for cutting the three-dimensional blood vessel image to obtain a plurality of three-dimensional cut blocks with preset sizes;
The blood vessel segmentation module is used for inputting each three-dimensional block into a pre-trained convolutional neural network model respectively to obtain blood vessel segmentation results corresponding to each three-dimensional block, and the blood vessel segmentation results corresponding to each three-dimensional block are spliced to obtain blood vessel segmentation results of the three-dimensional blood vessel image;
the convolutional neural network model comprises four coding layers and four decoding layers, and each coding layer and each decoding layer introduces a residual structure, so that the feature map of the upper layer is combined with the convolved feature map of the current layer and the combined feature map is then transmitted to the next layer;
down-sampling each three-dimensional cut block by factors of 1, 2, 4 and 8, adding the three-dimensional cut block down-sampled to the corresponding layer to each skip connection of the convolutional neural network model, and combining the feature map output by each encoder layer with the features of the three-dimensional cut block down-sampled to that layer before sending it to the decoder of the corresponding level;
the model training module is used for acquiring an original three-dimensional blood vessel image and a corresponding marked blood vessel marking image; randomly translating the original three-dimensional blood vessel image and the corresponding marked blood vessel marking image with different step sizes in three directions, respectively, and cropping out a plurality of three-dimensional cut blocks with preset sizes; and training a convolutional neural network model by taking the three-dimensional cut blocks of the original three-dimensional blood vessel image as input and the three-dimensional cut blocks of the marked blood vessel marking image as output.
5. A computer-readable storage medium, comprising: the computer-readable storage medium having stored thereon a computer program which, when executed by one or more processors, implements the vessel segmentation method as claimed in any one of claims 1 to 3.
6. An electronic device, comprising: comprising a memory and one or more processors, the memory having stored thereon a computer program which, when executed by the one or more processors, implements the vessel segmentation method as claimed in any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210207828.9A CN114612404B (en) | 2022-03-04 | 2022-03-04 | Blood vessel segmentation method and device, storage medium and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210207828.9A CN114612404B (en) | 2022-03-04 | 2022-03-04 | Blood vessel segmentation method and device, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114612404A CN114612404A (en) | 2022-06-10 |
CN114612404B true CN114612404B (en) | 2024-07-26 |
Family
ID=81861788
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210207828.9A Active CN114612404B (en) | 2022-03-04 | 2022-03-04 | Blood vessel segmentation method and device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114612404B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115457024A (en) * | 2022-10-10 | 2022-12-09 | 水木未来(杭州)科技有限公司 | Method and device for processing cryoelectron microscope image, electronic equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112950643A (en) * | 2021-02-26 | 2021-06-11 | 东北大学 | New coronary pneumonia focus segmentation method based on feature fusion deep supervision U-Net |
CN113012166A (en) * | 2021-03-19 | 2021-06-22 | 北京安德医智科技有限公司 | Intracranial aneurysm segmentation method and device, electronic device, and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108629784A (en) * | 2018-05-08 | 2018-10-09 | 上海嘉奥信息科技发展有限公司 | A kind of CT image intracranial vessel dividing methods and system based on deep learning |
US11164067B2 (en) * | 2018-08-29 | 2021-11-02 | Arizona Board Of Regents On Behalf Of Arizona State University | Systems, methods, and apparatuses for implementing a multi-resolution neural network for use with imaging intensive applications including medical imaging |
CN109816666B (en) * | 2019-01-04 | 2023-06-02 | 三峡大学 | Symmetrical full convolution neural network model construction method, fundus image blood vessel segmentation device, computer equipment and storage medium |
US20210142470A1 (en) * | 2019-11-12 | 2021-05-13 | International Intelligent Informatics Solution Laboratory LLC | System and method for identification of pulmonary arteries and veins depicted on chest ct scans |
- 2022-03-04: application CN202210207828.9A filed in China; later granted as CN114612404B (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112950643A (en) * | 2021-02-26 | 2021-06-11 | 东北大学 | New coronary pneumonia focus segmentation method based on feature fusion deep supervision U-Net |
CN113012166A (en) * | 2021-03-19 | 2021-06-22 | 北京安德医智科技有限公司 | Intracranial aneurysm segmentation method and device, electronic device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114612404A (en) | 2022-06-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||