CN113223015A - Vascular wall image segmentation method, device, computer equipment and storage medium - Google Patents

Info

Publication number
CN113223015A
Authority
CN
China
Prior art keywords
dimensional
segmentation
sample
image
wall
Legal status
Pending
Application number
CN202110510482.5A
Other languages
Chinese (zh)
Inventor
陈慧军
窦佳琦
李雨泽
赵锡海
王雅洁
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Application filed by Tsinghua University
Priority to CN202110510482.5A
Publication of CN113223015A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular

Abstract

The present application relates to a vessel wall image segmentation method, apparatus, computer device and storage medium. The method comprises the following steps: acquiring a three-dimensional black-blood T1-weighted magnetic resonance intracranial vessel wall image; performing lumen segmentation on the intracranial vessel wall image to obtain a three-dimensional vessel lumen segmentation result; extracting a centerline from the three-dimensional lumen segmentation result and re-cutting perpendicular to the centerline to obtain two-dimensional vessel wall cross-section images; and segmenting the two-dimensional cross-section images to obtain the vessel lumen region and the outer wall region. The method first segments the three-dimensional lumen in the magnetic resonance image and then further segments the two-dimensional vessel wall on the cross-section images derived from the lumen segmentation result; accurate segmentation of both lumen and wall improves the accuracy and the efficiency of vessel wall image segmentation.

Description

Vascular wall image segmentation method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of medical imaging technologies, and in particular, to a method, an apparatus, a computer device, and a storage medium for segmenting a vascular wall image.
Background
Intracranial atherosclerosis is one of the most common causes of stroke. Vascular lumen stenosis, as determined by conventional angiographic imaging techniques, remains the primary basis for diagnosing intracranial atherosclerosis; however, intracranial atherosclerosis cannot be accurately assessed from lumen stenosis alone. Studies have shown that the degree of stenosis, the vessel wall thickness, and the lumen area are significantly correlated with plaque symptoms, so accurate segmentation of the intracranial vessel wall is of great significance.
At present, intracranial vessel wall image segmentation schemes generally require manually tracing the vessel centerline, or registration with an angiographic image to obtain a centerline mapping; such additional manual intervention or registration is cumbersome and prone to introducing errors. There is still no fully automated method for intracranial vessel wall image segmentation based solely on three-dimensional black-blood T1-weighted images.
Disclosure of Invention
In view of the above, it is necessary to provide a vessel wall image segmentation method, a vessel wall image segmentation apparatus, a computer device, and a storage medium, which can improve the accuracy and efficiency of vessel wall image segmentation.
In a first aspect, a vessel wall image segmentation method is provided, the method comprising:
acquiring a three-dimensional black blood T1 weighted magnetic resonance intracranial vessel wall image;
performing lumen segmentation on the intracranial vascular wall image to obtain a three-dimensional vascular lumen segmentation result;
extracting a central line according to the three-dimensional blood vessel cavity segmentation result, and performing re-cutting perpendicular to the central line to obtain a two-dimensional blood vessel wall cross section image;
and segmenting the two-dimensional vessel wall cross-section image to obtain a vessel cavity area and an outer tube wall area.
In one embodiment, performing lumen segmentation on the intracranial blood vessel wall image to obtain a three-dimensional blood vessel lumen segmentation result, including:
generating an image patch by adopting a cubic sliding window on the intracranial vascular wall image;
performing lumen segmentation on the image patch to obtain a binary three-dimensional blood vessel lumen segmentation result;
extracting a central line according to the blood vessel cavity segmentation result, comprising:
and extracting a central line according to the binaryzation three-dimensional vascular cavity segmentation result.
In one embodiment, after obtaining the blood vessel cavity region and the outer tube wall region, the method further comprises:
morphological information of the blood vessel is calculated according to the blood vessel cavity area and the outer tube wall area.
In one embodiment, the intracranial blood vessel wall segmentation model comprises a three-dimensional lumen segmentation sub-model, a central line extraction sub-model and a two-dimensional vessel wall segmentation sub-model;
the method further comprises the following steps:
performing lumen segmentation on the intracranial vascular wall image through the three-dimensional lumen segmentation submodel to obtain a three-dimensional vascular lumen segmentation result;
extracting a central line according to the three-dimensional vascular cavity segmentation result through the central line extraction submodel, and performing re-cutting perpendicular to the central line to obtain a two-dimensional vascular wall cross section image;
segmenting the two-dimensional vessel wall cross-section image through the two-dimensional vessel wall segmentation sub-model to obtain a vessel lumen area and an outer vessel wall area;
the training process of the intracranial vessel wall segmentation model comprises the following steps:
acquiring a three-dimensional black blood T1 weighted magnetic resonance intracranial blood vessel wall sample image and a label image of the intracranial blood vessel wall sample image;
inputting the intracranial vascular wall sample image into an initial three-dimensional lumen segmentation sub-model to obtain a three-dimensional vascular lumen sample segmentation result;
inputting the three-dimensional vascular cavity sample segmentation result into a center line extraction sub-model to obtain a sample center line, and performing re-cutting perpendicular to the sample center line to obtain a two-dimensional vascular wall sample cross section image;
inputting the two-dimensional vessel wall sample cross section image into the initial two-dimensional vessel wall segmentation sub-model to obtain a sample vessel cavity area and a sample outer tube wall area;
calculating the loss of a three-dimensional model according to the segmentation result of the three-dimensional blood vessel cavity sample and the marked image, and obtaining a trained three-dimensional lumen segmentation sub-model when the segmentation loss of the three-dimensional model is lower than a preset three-dimensional model loss threshold value or the training times reach a preset maximum value;
and calculating the loss of the two-dimensional model according to the sample vessel cavity area, the sample outer tube wall area and the marked image, and obtaining a trained two-dimensional tube wall segmentation sub-model when the segmentation loss of the two-dimensional model is lower than a preset two-dimensional model loss threshold value or the training times reach a preset maximum value.
In one embodiment, inputting the intracranial blood vessel wall sample image into an initial three-dimensional lumen segmentation sub-model to obtain a three-dimensional blood vessel lumen sample segmentation result, including:
generating a sample image patch by adopting a cubic sliding window for the intracranial blood vessel wall sample image;
and inputting the sample image patch into an initial three-dimensional lumen segmentation sub-model to obtain a binarization three-dimensional blood vessel lumen sample segmentation result.
In one embodiment, after the performing the re-cutting perpendicular to the sample center line to obtain the two-dimensional blood vessel wall sample cross-section image, the method further includes:
performing data enhancement on the two-dimensional vascular wall sample cross-section image to obtain an enhanced two-dimensional vascular wall sample cross-section image; the enhanced two-dimensional vessel wall sample cross-section image is used for training an initial two-dimensional vessel wall segmentation sub-model to obtain a trained two-dimensional vessel wall segmentation model.
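As a non-limiting sketch of this data-enhancement step (the application does not specify which transforms are used; random flips and 90-degree rotations are assumed here purely for illustration):

```python
import numpy as np

def augment_cross_section(img, mask, rng):
    """Randomly flip and rotate a 2D cross-section image and its label mask
    in the same way, so image and labels stay aligned. The choice of
    transforms is an illustrative assumption, not fixed by the application."""
    if rng.random() < 0.5:                       # horizontal flip
        img, mask = img[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                       # vertical flip
        img, mask = img[::-1, :], mask[::-1, :]
    k = int(rng.integers(0, 4))                  # rotate by 0/90/180/270 degrees
    img, mask = np.rot90(img, k), np.rot90(mask, k)
    return np.ascontiguousarray(img), np.ascontiguousarray(mask)

rng = np.random.default_rng(0)
img = np.arange(16.0).reshape(4, 4)              # toy cross-section
mask = (img > 7).astype(np.uint8)                # toy wall label
aug_img, aug_mask = augment_cross_section(img, mask, rng)
```

Because every transform is a permutation of pixels, the augmented pair keeps the same shape and the same set of intensity values as the original, which is why it can enlarge the training set without distorting the labels.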
In one embodiment, after obtaining the sample vessel lumen region and the sample outer tubular wall region, the method further comprises:
taking the sample blood vessel cavity area and the sample outer tube wall area as a sample blood vessel segmentation result;
calculating the segmentation accuracy according to the sample blood vessel segmentation result and the manual marking result of the sample;
and calculating the morphological information of the sample blood vessel according to the sample blood vessel cavity area and the sample outer tube wall area, and verifying the segmentation consistency of the morphological information of the sample blood vessel and the gold standard.
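The segmentation accuracy named above can be scored, for example, with the Dice similarity coefficient, a standard overlap metric for segmentation masks; the application names accuracy but not the exact metric, so the following is only an illustrative sketch:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|).
    Returns 1.0 for two empty masks by convention."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.array([[0, 1, 1], [0, 1, 0]])   # sample segmentation result
gt   = np.array([[0, 1, 0], [0, 1, 0]])   # manual labeling of the sample
print(dice_coefficient(pred, gt))          # 2*2/(3+2) = 0.8
```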
In a second aspect, there is provided a blood vessel wall image segmentation apparatus, comprising:
the magnetic resonance image acquisition module is used for acquiring a three-dimensional black blood T1 weighted magnetic resonance intracranial blood vessel wall image;
the three-dimensional lumen segmentation module is used for performing lumen segmentation on the intracranial vascular wall image to obtain a three-dimensional vascular lumen segmentation result;
the blood vessel central line extraction module is used for extracting a central line according to the three-dimensional blood vessel cavity segmentation result and performing re-cutting perpendicular to the central line to obtain a two-dimensional blood vessel wall cross section image;
and the two-dimensional vessel wall segmentation module is used for segmenting the two-dimensional vessel wall cross section image to obtain a vessel cavity area and an outer vessel wall area.
In a third aspect, a computer device is provided, comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a three-dimensional black blood T1 weighted magnetic resonance intracranial vessel wall image;
performing lumen segmentation on the intracranial vascular wall image to obtain a three-dimensional vascular lumen segmentation result;
extracting a central line according to the three-dimensional blood vessel cavity segmentation result, and performing re-cutting perpendicular to the central line to obtain a two-dimensional blood vessel wall cross section image;
and segmenting the two-dimensional vessel wall cross-section image to obtain a vessel cavity area and an outer tube wall area.
In a fourth aspect, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring a three-dimensional black blood T1 weighted magnetic resonance intracranial vessel wall image;
performing lumen segmentation on the intracranial vascular wall image to obtain a three-dimensional vascular lumen segmentation result;
extracting a central line according to the three-dimensional blood vessel cavity segmentation result, and performing re-cutting perpendicular to the central line to obtain a two-dimensional blood vessel wall cross section image;
and segmenting the two-dimensional vessel wall cross-section image to obtain a vessel cavity area and an outer tube wall area.
With the above vessel wall image segmentation method, apparatus, computer device and storage medium, a three-dimensional black-blood T1-weighted magnetic resonance intracranial vessel wall image is acquired; lumen segmentation is performed on the intracranial vessel wall image to obtain a three-dimensional vessel lumen segmentation result; a centerline is then extracted from the three-dimensional lumen segmentation result, and re-cutting perpendicular to the centerline yields two-dimensional vessel wall cross-section images; finally, the two-dimensional cross-section images are segmented to obtain the vessel lumen region and the outer wall region. The method first segments the three-dimensional lumen in the magnetic resonance image and then further segments the two-dimensional vessel wall on the cross-section images derived from the lumen segmentation result; accurate segmentation of both lumen and wall improves the accuracy and the efficiency of vessel wall image segmentation.
Drawings
FIG. 1 is a diagram of an embodiment of an application environment of a method for segmenting an image of a blood vessel wall;
FIG. 2 is a flowchart illustrating a method for segmenting a vessel wall image according to an embodiment;
FIG. 3 is a block diagram of a method for segmenting an image of a blood vessel wall according to an embodiment;
FIG. 4 is a diagram illustrating the results of lumen segmentation in one embodiment;
FIG. 5 is a diagram illustrating the result of segmentation of the outer tube wall in one embodiment;
FIG. 6 is a graphical illustration of vessel wall image segmentation performance in one embodiment;
FIG. 7 is a block diagram showing the structure of a blood-vessel wall image segmentation apparatus according to an embodiment;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Segmenting the vessel lumen based on MRI (Magnetic Resonance Imaging) is a very challenging task: noise and artifacts in MR (Magnetic Resonance) images easily cause low contrast between the vessel wall and surrounding tissue, and the complex course of intracranial vessels is a further technical difficulty that segmentation must handle. Traditional vessel wall segmentation algorithms, such as region growing, fuzzy connectedness, and Hessian-matrix-based multiscale filtering, have considerable difficulty with intracranial vessel wall images. Neural-network-based methods segment the intracranial vessel lumen region and wall region in an end-to-end manner and can greatly improve the accuracy and efficiency of vessel wall image segmentation. One existing line of research segments the intracranial vessel wall region on 2D cross-section slice images generated from a manually traced vessel centerline, which introduces manual intervention; another extracts the lumen from an angiographic TOF (Time-of-Flight) image and then maps the lumen segmentation and centerline extraction results onto the black-blood vessel wall image for further wall segmentation.
To address the limitations and improvement requirements of the related art, the present application provides an intracranial vessel wall image segmentation method based on three-dimensional black-blood T1-weighted magnetic resonance vessel wall images, which achieves accurate and efficient segmentation of the vessel lumen region and the outer wall region of intracranial arteries and improves segmentation accuracy.
The vessel wall image segmentation method provided by the application can be applied to the application environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. The terminal 102 acquires images collected by the magnetic resonance scanning device through the network. The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smart phone, tablet computer, or portable wearable device; the server 104 may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a method for segmenting a blood vessel wall image is provided, which is described by taking the method as an example applied to the terminal in fig. 1, and includes the following steps:
step 202, a three-dimensional black blood T1 weighted magnetic resonance intracranial vessel wall image is acquired.
The intracranial vascular wall image is an intracranial 3D image acquired by a magnetic resonance scanning device.
Specifically, a 3T magnetic resonance scanning device acquires the intracranial image using a 3D VISTA (Volume ISotropic Turbo spin echo Acquisition) sequence, and the terminal obtains the intracranial 3D image acquired by the device. VISTA is a typical black-blood MR imaging sequence and an effective means of measuring vessel wall thickness and vessel wall area and determining intracranial vascular information. As an effective black-blood magnetic resonance imaging sequence, 3D VISTA not only provides high isotropic spatial resolution but also suppresses the blood-flow signal well, so it can accurately measure intracranial vessel wall thickness and image quickly; for example, a 3.0T magnetic resonance machine can obtain an intracranial image with wide coverage within 7 minutes. It can also reveal plaque burden that conventional angiography cannot diagnose, providing effective parameters for determining vascular information. Each scan of the 3T device yields one brain slice, i.e. an MR image; a series of original MR images constitutes the intracranial 3D image, and the terminal acquires this series of original MR images.
And 204, performing lumen segmentation on the intracranial blood vessel wall image to obtain a three-dimensional blood vessel lumen segmentation result.
Specifically, the terminal inputs a series of acquired original MR images into a three-dimensional lumen segmentation sub-model of a trained intracranial vascular wall segmentation model, and performs lumen segmentation according to lumen features by extracting the lumen features in the series of original MR images to obtain a three-dimensional lumen segmentation result. The intracranial vascular wall segmentation model is a model which firstly segments a vascular cavity according to an intracranial vascular wall image, then extracts a central line, and then segments a two-dimensional vascular wall cross section image perpendicular to the central line to obtain a vascular cavity region and an outer vascular wall region.
And step 206, extracting a central line according to the three-dimensional blood vessel cavity segmentation result, and performing re-cutting perpendicular to the central line to obtain a two-dimensional blood vessel wall cross section image.
Here, re-cutting refers to slicing perpendicular to the vessel centerline to obtain a set of two-dimensional cross-section images of the vessel wall.
Specifically, the centerline extraction sub-model of the intracranial blood vessel wall segmentation model integrates an automatic skeletonization method to extract the blood vessel centerline, and other centerline extraction algorithms may also be used to extract the blood vessel centerline, such as an interactive detection algorithm, a matching detection algorithm, and the like, which is not limited herein. After the central line of the blood vessel is obtained, the blood vessel is re-cut perpendicular to the central line to obtain a group of two-dimensional lumen cross-section images perpendicular to the central line of the blood vessel.
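The re-cutting step can be sketched as follows: for each centerline point, the local tangent defines the normal of the cutting plane, and the cross-section is resampled on an in-plane grid by interpolation. The grid size, spacing, and interpolation order below are illustrative assumptions, not fixed by this application:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def recut_perpendicular(volume, centerline, size=9, spacing=1.0):
    """Resample 2D cross-sections perpendicular to a vessel centerline.
    The local tangent at each point is the plane normal; two orthonormal
    in-plane axes span the sampling grid; trilinear interpolation samples
    the volume (order=1)."""
    half = (size - 1) / 2.0
    u = np.linspace(-half, half, size) * spacing
    sections = []
    for i, p in enumerate(centerline):
        nxt = np.asarray(centerline[min(i + 1, len(centerline) - 1)], float)
        prv = np.asarray(centerline[max(i - 1, 0)], float)
        t = nxt - prv
        t /= np.linalg.norm(t)
        # pick any vector not parallel to t, then build two in-plane axes
        a = np.array([1.0, 0.0, 0.0]) if abs(t[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        e1 = np.cross(t, a); e1 /= np.linalg.norm(e1)
        e2 = np.cross(t, e1)
        grid = (np.asarray(p, float)[:, None, None]
                + e1[:, None, None] * u[None, :, None]
                + e2[:, None, None] * u[None, None, :])
        sections.append(map_coordinates(volume, grid, order=1))
    return np.stack(sections)

vol = np.zeros((16, 16, 16))
vol[:, 7:9, 7:9] = 1.0                       # a straight toy "vessel" along axis 0
line = [(z, 8, 8) for z in range(4, 12)]     # its centerline
secs = recut_perpendicular(vol, line, size=5)
print(secs.shape)  # (8, 5, 5): one 5x5 cross-section per centerline point
```

The center pixel of each resulting section lies exactly on the centerline, which is the property the subsequent two-dimensional wall segmentation relies on.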
And step 208, segmenting the two-dimensional blood vessel wall cross-section image to obtain a blood vessel cavity area and an outer tube wall area.
Specifically, a two-dimensional vessel wall segmentation sub-model of the intracranial vessel wall segmentation model further segments the two-dimensional vessel wall cross-section image to obtain a vessel cavity area and an outer vessel wall area in the intracranial vessel wall image.
After the segmentation result is obtained, a Conditional Random Field (CRF) method is used for post-processing: small isolated regions are removed from the segmentation image, rough and uncertain labels are refined, and fragmented erroneous regions are corrected, yielding a finer segmentation boundary, more accurate vessel lumen and outer wall regions, and improved wall segmentation accuracy.
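Of the post-processing described above, the removal of small isolated regions can be sketched with connected-component labelling as follows (the CRF refinement itself requires a dedicated CRF implementation and is not shown; `min_size` is an illustrative parameter):

```python
import numpy as np
from scipy import ndimage

def remove_small_regions(mask, min_size=10):
    """Drop isolated connected components smaller than min_size pixels/voxels
    from a binary segmentation mask."""
    labeled, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    keep = np.zeros_like(mask, dtype=bool)
    for lab, size in enumerate(sizes, start=1):
        if size >= min_size:
            keep |= labeled == lab
    return keep.astype(mask.dtype)

m = np.zeros((8, 8), dtype=np.uint8)
m[1:5, 1:5] = 1      # large region (16 px), kept
m[6, 6] = 1          # isolated single pixel, removed
clean = remove_small_regions(m, min_size=4)
print(clean.sum())   # 16
```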
In the vessel wall image segmentation method, a three-dimensional black-blood T1-weighted magnetic resonance intracranial vessel wall image is acquired; lumen segmentation is performed on the intracranial vessel wall image to obtain a three-dimensional vessel lumen segmentation result; a centerline is then extracted from the three-dimensional lumen segmentation result, and re-cutting perpendicular to the centerline yields two-dimensional vessel wall cross-section images; finally, the two-dimensional cross-section images are segmented to obtain the vessel lumen region and the outer wall region. The method first segments the three-dimensional lumen in the magnetic resonance image and then further segments the two-dimensional vessel wall on the cross-section images derived from the lumen segmentation result; accurate segmentation of both lumen and wall improves the accuracy and the efficiency of vessel wall image segmentation.
In an optional embodiment, performing lumen segmentation on the intracranial blood vessel wall image to obtain a three-dimensional blood vessel lumen segmentation result, including:
generating an image patch by adopting a cubic sliding window on the intracranial vascular wall image;
inputting the image patch into a trained intracranial vascular wall segmentation model to obtain a binary three-dimensional vascular cavity segmentation result;
extracting a central line according to the blood vessel cavity segmentation result, comprising:
and extracting a central line according to the binaryzation three-dimensional vascular cavity segmentation result.
Wherein the three-dimensional lumen segmentation result is a binary segmentation result. In one possible embodiment, the lumen is assigned a first label and the background region a second, different label in the original MR image, thereby segmenting the lumen from the MR image; for example, the lumen is labeled 1 and the background region 0.
The sliding window of the cube is a three-dimensional sliding window and is used for dividing the intracranial vascular wall image into a plurality of image blocks with the size of the sliding window of the cube. The image patch refers to a plurality of image blocks obtained after being divided by a cubic sliding window.
Specifically, a cubic sliding window of preset size divides the intracranial 3D image into a number of image patches, which are input into the three-dimensional lumen segmentation sub-model of the trained intracranial vessel wall segmentation model; in each patch the lumen is labeled 1 and the background region 0, thereby segmenting the lumen from the patch. The lumen results output for the individual patches together form the complete binarized three-dimensional vessel lumen segmentation result. The centerline extraction sub-model then extracts the vessel centerline from the binarized result by an automatic skeletonization method. The edge length of the cubic sliding window may be chosen according to the size of the intracranial 3D image, for example 128 voxels or another size, and is not limited in this embodiment.
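A minimal sketch of the cubic sliding-window patch generation (edge length and stride below are illustrative, not values fixed by this application):

```python
import numpy as np

def cube_patches(volume, size, stride):
    """Split a 3D volume into cubic patches with a sliding window,
    recording each patch's origin so results can be stitched back."""
    patches, origins = [], []
    for z in range(0, volume.shape[0] - size + 1, stride):
        for y in range(0, volume.shape[1] - size + 1, stride):
            for x in range(0, volume.shape[2] - size + 1, stride):
                patches.append(volume[z:z+size, y:y+size, x:x+size])
                origins.append((z, y, x))
    return np.stack(patches), origins

vol = np.arange(8 ** 3, dtype=float).reshape(8, 8, 8)   # toy 8x8x8 volume
patches, origins = cube_patches(vol, size=4, stride=4)
print(patches.shape)   # (8, 4, 4, 4): 2 windows per axis
```

With a stride equal to the edge length the patches tile the volume without overlap; a smaller stride yields overlapping patches whose predictions can be averaged when stitching.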
The original intracranial vessel wall image is a three-dimensional volume that occupies a large amount of memory; feeding it directly into the trained three-dimensional lumen segmentation sub-model would take a long time to produce the three-dimensional lumen segmentation result.
In this embodiment, the intracranial vascular wall image is divided into the plurality of image patches, and then the plurality of image patches are input into the trained three-dimensional lumen segmentation sub-model of the intracranial vascular wall segmentation model, so as to obtain a binarized three-dimensional vascular lumen segmentation result, which can accelerate the lumen segmentation speed, and further improve the vascular wall segmentation efficiency.
In an optional embodiment, after obtaining the blood vessel cavity region and the outer tube wall region, the method further comprises:
morphological information of the blood vessel is calculated according to the blood vessel cavity area and the outer tube wall area.
The morphological information comprises the lumen area, the wall thickness, the normalized wall index, and the like.
Specifically, the two-dimensional wall segmentation sub-model can also calculate the lumen area from the vessel lumen region and the outer wall area from the outer wall region; subtracting the lumen area from the outer wall area gives the wall area; subtracting the lumen radius from the outer wall radius gives the wall thickness; and the normalized wall index is determined from the lumen area and the wall area by the formula: normalized wall index = wall area / (wall area + lumen area) × 100%. The morphological information of the blood vessel can be used to evaluate the accuracy of the segmentation result.
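The morphological calculations above can be sketched as follows; the equivalent-circle radii used for the wall thickness are an assumption, since the application does not state how the radii are measured:

```python
import numpy as np

def vessel_morphology(lumen_mask, outer_mask, pixel_area=1.0):
    """Morphology from one cross-section: lumen area, wall area
    (outer minus lumen), mean wall thickness from equivalent-circle
    radii, and normalized wall index NWI = wall / (wall + lumen).
    outer_mask covers everything inside the outer wall boundary,
    including the lumen."""
    lumen_area = lumen_mask.sum() * pixel_area
    outer_area = outer_mask.sum() * pixel_area
    wall_area = outer_area - lumen_area
    r_outer = np.sqrt(outer_area / np.pi)     # equivalent-circle radii (assumed)
    r_lumen = np.sqrt(lumen_area / np.pi)
    thickness = r_outer - r_lumen
    nwi = wall_area / (wall_area + lumen_area)
    return lumen_area, wall_area, thickness, nwi

lumen = np.zeros((10, 10)); lumen[4:6, 4:6] = 1   # 4-px lumen
outer = np.zeros((10, 10)); outer[3:7, 3:7] = 1   # 16 px inside the outer wall
la, wa, th, nwi = vessel_morphology(lumen, outer)
print(la, wa, round(nwi, 2))  # 4.0 12.0 0.75
```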
In an optional embodiment, the intracranial vessel wall segmentation model comprises a three-dimensional lumen segmentation sub-model, a centerline extraction sub-model and a two-dimensional vessel wall segmentation sub-model;
the method further comprises the following steps:
performing lumen segmentation on the intracranial vascular wall image through the three-dimensional lumen segmentation submodel to obtain a three-dimensional vascular lumen segmentation result;
extracting a central line according to the three-dimensional vascular cavity segmentation result through the central line extraction submodel, and performing re-cutting perpendicular to the central line to obtain a two-dimensional vascular wall cross section image;
segmenting the two-dimensional vessel wall cross-section image through the two-dimensional vessel wall segmentation sub-model to obtain a vessel lumen area and an outer vessel wall area;
the training process of the intracranial vessel wall segmentation model comprises the following steps:
acquiring a three-dimensional black blood T1 weighted magnetic resonance intracranial blood vessel wall sample image and a label image of the intracranial blood vessel wall sample image;
inputting the intracranial vascular wall sample image into an initial three-dimensional lumen segmentation sub-model to obtain a three-dimensional vascular lumen sample segmentation result;
inputting the three-dimensional vascular cavity sample segmentation result into a center line extraction sub-model to obtain a sample center line, and performing re-cutting perpendicular to the sample center line to obtain a two-dimensional vascular wall sample cross section image;
inputting the two-dimensional vessel wall sample cross section image into the initial two-dimensional vessel wall segmentation sub-model to obtain a sample vessel cavity area and a sample outer tube wall area;
calculating the loss of a three-dimensional model according to the segmentation result of the three-dimensional blood vessel cavity sample and the marked image, and obtaining a trained three-dimensional lumen segmentation sub-model when the segmentation loss of the three-dimensional model is lower than a preset three-dimensional model loss threshold value or the training times reach a preset maximum value;
and calculating the loss of the two-dimensional model according to the sample vessel cavity area, the sample outer tube wall area and the marked image, and obtaining a trained two-dimensional tube wall segmentation sub-model when the segmentation loss of the two-dimensional model is lower than a preset two-dimensional model loss threshold value or the training times reach a preset maximum value.
Specifically, as shown in fig. 3, the intracranial blood vessel wall segmentation model includes a three-dimensional lumen segmentation sub-model, a centerline extraction sub-model, and a two-dimensional vessel wall segmentation sub-model. Performing lumen segmentation on the intracranial vascular wall image through a three-dimensional lumen segmentation sub-model to obtain a three-dimensional vascular lumen segmentation result; extracting a central line according to the three-dimensional vascular cavity segmentation result through a central line extraction sub-model, and performing re-cutting perpendicular to the central line to obtain a two-dimensional vascular wall cross section image; and segmenting the two-dimensional vessel wall cross section image through a two-dimensional vessel wall segmentation sub-model to obtain a vessel cavity area and an outer vessel wall area.
The three-dimensional lumen segmentation sub-model is a 3D UNet with an encoding (analysis) path and a decoding (synthesis) path, each with 4 resolution levels. Each layer in the encoding path contains two 3 × 3 × 3 convolutional layers, each followed by instance normalization and a leaky rectified linear unit (Leaky ReLU) as the activation layer, and then a 2 × 2 × 2 max-pooling layer with stride 2 in each direction. In the decoding path, each layer contains a 2 × 2 × 2 deconvolution layer with stride 2, followed by two 3 × 3 × 3 convolutional layers and then an activation layer. Feature maps of the same resolution in the encoding path are concatenated with the feature maps of the decoding path through skip connections to provide multi-scale, multi-level feature information for the subsequent segmentation. The loss function of the three-dimensional lumen segmentation sub-model is defined as the sum of cross-entropy loss and Dice loss.
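The stated loss can be illustrated for the binary lumen/background case; the following NumPy sketch of cross-entropy plus Dice loss is our own illustration, not the patent's training code:

```python
import numpy as np

def ce_plus_dice_loss(probs: np.ndarray, target: np.ndarray,
                      eps: float = 1e-7) -> float:
    """Sum of binary cross-entropy and Dice loss over a voxel array.
    probs: predicted foreground probabilities; target: binary labels."""
    probs = np.clip(probs, eps, 1.0 - eps)            # avoid log(0)
    ce = -np.mean(target * np.log(probs)
                  + (1 - target) * np.log(1 - probs))  # cross-entropy term
    intersection = np.sum(probs * target)
    dice = (2 * intersection + eps) / (np.sum(probs) + np.sum(target) + eps)
    return float(ce + (1.0 - dice))                    # Dice *loss* is 1 - Dice
```

A perfect prediction drives both terms toward zero, while an inverted prediction is penalized by both.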
Each feature map is normalized using its own mean and standard deviation, and the global statistics are updated with these values. Instance normalization helps avoid vanishing and exploding gradients and reduces the dependence of the gradients on the model parameters or the scale of their initial values, so the network can be trained with a larger learning rate and converges faster. Because the mean and variance used for normalization during training are computed over small batches of sample image patches rather than over the entire data set, they carry a small amount of noise, and the scaling step, which uses these noisy normalized values, introduces noise as well; this prevents each layer of neurons from depending too heavily on the preceding neurons. Instance normalization can therefore also be regarded as a regularization technique that improves the generalization ability of the network.
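A minimal NumPy sketch of instance normalization over the spatial axes (learnable scale and shift parameters omitted; illustrative, not the model's actual code):

```python
import numpy as np

def instance_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Instance normalization for a feature tensor of shape (N, C, ...):
    each (sample, channel) feature map is normalized by its own mean and
    standard deviation, computed over the spatial axes only."""
    axes = tuple(range(2, x.ndim))                 # spatial axes
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```

After normalization, each (sample, channel) map has approximately zero mean and unit standard deviation regardless of the input scale.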
The two-dimensional vessel wall segmentation sub-model is a 2D Attention UNet with soft attention: features from the deeper stage supervise the features of the shallower stage to realize the attention mechanism, i.e., activation is restricted to the region to be segmented while the activation of the background is suppressed, optimizing the segmentation end to end. As the number of iterations increases, the activated region becomes increasingly concentrated in the region to be segmented, improving the wall segmentation. The loss function of the two-dimensional vessel wall segmentation sub-model is defined as the Dice loss.
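A soft attention gate of this kind can be sketched as additive attention in NumPy; the weights below are random placeholders standing in for trained 1×1 convolutions, so this only illustrates the gating mechanism, not the patent's network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, w_x, w_g, psi):
    """Additive soft attention: skip features x of shape (C, H, W) are gated
    by the deeper gating signal g of shape (C, H, W).  1x1 convolutions are
    modeled as channel-mixing matrices.  Returns x scaled by per-pixel
    attention coefficients in (0, 1), suppressing background activations."""
    q = np.maximum(np.einsum('ic,chw->ihw', w_x, x)
                   + np.einsum('ic,chw->ihw', w_g, g), 0.0)  # ReLU(W_x x + W_g g)
    alpha = sigmoid(np.einsum('c,chw->hw', psi, q))          # attention map
    return x * alpha[None, :, :]

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16, 16))     # skip-connection features
g = rng.normal(size=(8, 16, 16))     # gating signal (already upsampled to match)
w_x = rng.normal(size=(4, 8)); w_g = rng.normal(size=(4, 8)); psi = rng.normal(size=4)
gated = attention_gate(x, g, w_x, w_g, psi)
```

Because the attention coefficients lie strictly in (0, 1), the gated features can only be attenuated, never amplified.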
In the training-sample acquisition phase, 50 healthy volunteers were enrolled as group 1 and 3 intracranial atherosclerosis patients as group 2. Images were acquired with a 32-channel head coil on a 3T magnetic resonance scanner (Achieva CX, Philips Healthcare, the Netherlands) using a 3D VISTA sequence at an isotropic spatial resolution of 0.6 mm to obtain the intracranial vessel wall sample images.

Data from the 50 healthy volunteers were randomly divided into a training set (80%, 40/50) for training the intracranial vessel wall segmentation model and a test set (20%, 10/50) for evaluating it. For each healthy volunteer, an experienced physician manually delineated the lumen and outer wall boundaries of the intracranial arteries, including the basilar artery, the M1 segment of the middle cerebral artery, the A1 segment of the anterior cerebral artery, and the P1 segment of the posterior cerebral artery. For the patients, since the non-stenotic segments are essentially consistent with the normal vessel wall of healthy subjects, only the lumen and outer wall boundaries of the stenotic segments were delineated, in order to further verify the performance of the segmentation network on abnormal vessel walls.
Three-dimensional black blood T1-weighted magnetic resonance intracranial vessel wall sample images are acquired, together with the label images obtained by a physician delineating the lumen and outer wall boundaries of the intracranial arteries. Specifically, the physician delineates each image in a series of original MR sample images to obtain a series of label images, and this series of two-dimensional label images forms a three-dimensional lumen surface that serves as the three-dimensional lumen label image. The intracranial vessel wall sample image is input into the initial three-dimensional lumen segmentation sub-model to obtain a three-dimensional vessel lumen sample segmentation result; a sample centerline is extracted from this result using the automatic skeletonization method of the centerline extraction sub-model, and re-cutting perpendicular to the sample centerline yields two-dimensional vessel wall sample cross-section images; these cross-section images are input into the initial two-dimensional vessel wall segmentation sub-model to obtain the sample vessel lumen region and the sample outer wall region.
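For the simple case of a near-straight tubular mask, a centerline can be approximated by the per-slice centroid of the segmented lumen; the patent's automatic skeletonization method is more general, so the following NumPy sketch is only a toy illustration:

```python
import numpy as np

def slicewise_centerline(mask: np.ndarray) -> np.ndarray:
    """Approximate the centerline of a binary lumen mask of shape (Z, Y, X)
    by the centroid of the foreground in each axial slice.  Returns an
    array of (z, y, x) points for slices that contain lumen voxels."""
    points = []
    for z in range(mask.shape[0]):
        ys, xs = np.nonzero(mask[z])
        if ys.size:
            points.append((z, ys.mean(), xs.mean()))
    return np.array(points)

# Synthetic straight tube along z, centered at (y, x) = (10, 10)
zz, yy, xx = np.mgrid[0:20, 0:21, 0:21]
tube = ((yy - 10) ** 2 + (xx - 10) ** 2) <= 9
centerline = slicewise_centerline(tube)
```

For branching or curved vessels, a true 3D skeletonization (e.g. a thinning algorithm) is needed instead of per-slice centroids.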
The three-dimensional vessel lumen sample segmentation result and the label image are input into the loss function of the three-dimensional lumen segmentation sub-model to obtain the three-dimensional model loss. When the three-dimensional model segmentation loss falls below a preset three-dimensional model loss threshold, or the number of training iterations reaches a preset maximum, a trained three-dimensional lumen segmentation sub-model is obtained.

The sample vessel lumen region, the sample outer wall region, and the label image are input into the loss function of the two-dimensional wall segmentation sub-model to obtain the two-dimensional model loss. When the two-dimensional model segmentation loss falls below a preset two-dimensional model loss threshold, or the number of training iterations reaches a preset maximum, a trained two-dimensional wall segmentation sub-model is obtained. The trained three-dimensional lumen segmentation sub-model, the automatic skeletonization method, the re-cutting method, and the trained two-dimensional wall segmentation sub-model together form the trained intracranial vessel wall segmentation model.
In this embodiment, instance normalization in the three-dimensional lumen segmentation sub-model avoids vanishing and exploding gradients, accelerates model convergence, and improves generalization, while the attention mechanism in the two-dimensional vessel wall segmentation sub-model improves the accuracy of wall segmentation. The intracranial vessel wall segmentation model greatly reduces the workload and subjective error of manual delineation, with higher vessel wall segmentation accuracy and better robustness.
In an alternative embodiment, the intracranial blood vessel wall sample image is input into an initial three-dimensional lumen segmentation sub-model, and a three-dimensional blood vessel lumen sample segmentation result is obtained, including:
generating a sample image patch by adopting a cubic sliding window for the intracranial blood vessel wall sample image;
and inputting the sample image patch into an initial three-dimensional lumen segmentation sub-model to obtain a binarization three-dimensional blood vessel lumen sample segmentation result.
Specifically, a cubic sliding window of a preset size divides the intracranial vessel wall sample image into multiple sample image patches, which are input into the three-dimensional lumen segmentation sub-model of the initial intracranial vessel wall segmentation model to obtain a binarized three-dimensional vessel lumen sample segmentation result. A sample centerline is then extracted from the binarized segmentation result, and re-cutting perpendicular to the sample centerline yields two-dimensional vessel wall sample cross-section images. The size of the cubic sliding window can be chosen according to the size of the intracranial vessel wall sample image and is not limited in this embodiment.
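One common convention (assumed here, since the patent leaves window size and overlap unspecified) is to slide the window with a fixed stride and shift the last window back so it ends at the volume border:

```python
import numpy as np

def cube_patches(volume: np.ndarray, size: int, stride: int):
    """Split a 3D volume into cubic patches with a sliding window.
    The last window along each axis is shifted back so it ends exactly
    at the volume border; assumes every volume dimension >= size."""
    starts = lambda n: sorted({min(s, n - size) for s in range(0, n, stride)})
    patches, origins = [], []
    for z in starts(volume.shape[0]):
        for y in starts(volume.shape[1]):
            for x in starts(volume.shape[2]):
                patches.append(volume[z:z+size, y:y+size, x:x+size])
                origins.append((z, y, x))
    return patches, origins
```

Keeping the patch origins allows the per-patch segmentation results to be stitched back into a full-volume mask after inference.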
In this embodiment, dividing the intracranial vessel wall sample image into multiple sample image patches and inputting these patches into the intracranial vessel wall segmentation model accelerates lumen segmentation and thus improves the efficiency of vessel wall segmentation during the training stage.
In an optional embodiment, after the performing the re-cutting perpendicular to the sample center line to obtain the two-dimensional blood vessel wall sample cross-section image, the method further includes:
performing data enhancement on the two-dimensional vascular wall sample cross-section image to obtain an enhanced two-dimensional vascular wall sample cross-section image; the enhanced two-dimensional vessel wall sample cross-section image is used for training an initial two-dimensional vessel wall segmentation sub-model to obtain a trained two-dimensional vessel wall segmentation model.
Specifically, data augmentation is performed on the two-dimensional vessel wall cross-section images: in addition to translation, flipping, and rotation, elastic deformation is used to simulate the abnormal vessel walls that may appear in atherosclerosis, yielding enhanced two-dimensional vessel wall cross-section images. The initial two-dimensional vessel wall segmentation sub-model is trained on the enhanced two-dimensional vessel wall sample cross-section images until a trained two-dimensional vessel wall segmentation model is obtained.

In this embodiment, data augmentation of the two-dimensional vessel wall cross-section images yields a richer data set, effectively simulates the abnormal vessel walls that appear in lesions, and improves the accuracy and generalization of the wall segmentation.
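Elastic deformation is commonly implemented by smoothing a random displacement field with a Gaussian and resampling the image along the displaced coordinates; the following SciPy sketch, with illustrative parameter values of our own choosing, shows the idea for a 2D image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image: np.ndarray, alpha: float = 30.0, sigma: float = 4.0,
                   seed: int = 0) -> np.ndarray:
    """Apply a random elastic deformation to a 2D image: draw a random
    displacement field, smooth it with a Gaussian of width sigma, scale it
    by alpha, and resample the image along the displaced coordinates."""
    rng = np.random.default_rng(seed)
    dy = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    dx = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    yy, xx = np.meshgrid(np.arange(image.shape[0]), np.arange(image.shape[1]),
                         indexing='ij')
    coords = np.array([yy + dy, xx + dx])
    # Bilinear resampling; 'reflect' keeps boundary values sensible
    return map_coordinates(image, coords, order=1, mode='reflect')
```

When the input is a label mask, the same displacement field should be applied to both image and mask (with nearest-neighbor interpolation, order=0, for the mask).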
In an alternative embodiment, after obtaining the sample vessel lumen region and the sample outer tubular wall region, the method further comprises:
taking the sample blood vessel cavity area and the sample outer tube wall area as a sample blood vessel segmentation result;
calculating the segmentation accuracy according to the sample blood vessel segmentation result and the manual marking result of the sample;
and calculating morphological information of the sample blood vessel according to the sample blood vessel cavity area and the sample outer tube wall area, and verifying the segmentation consistency of the morphological information of the sample blood vessel and the gold standard.
The sample blood vessel morphological information comprises information such as a sample lumen area, a sample tube wall thickness, a sample standardized tube wall index and the like.
Specifically, the two-dimensional tube wall segmentation sub-model may further calculate a sample lumen area, a sample tube wall area, and a sample tube wall thickness according to the sample vessel lumen area and the sample outer tube wall area, and calculate a sample normalized tube wall index according to the sample lumen area and the sample tube wall area, where the specific calculation process refers to the above description and is not repeated here.
The automatic segmentation results of the intracranial three-dimensional lumen and the two-dimensional cross-sectional vessel wall are compared with the manual labels, and Sensitivity, Specificity, Precision, Dice coefficient, and intersection-over-union (IoU) are calculated to quantitatively evaluate the segmentation performance, with the following formulas:
Sensitivity=TP/(TP+FN)
Specificity=TN/(TN+FP)
Precision=TP/(TP+FP)
Dice=2|Pre∩GT|/(|Pre|+|GT|)
IoU=|Pre∩GT|/|Pre∪GT|
where TP denotes true positives, FN false negatives, TN true negatives, and FP false positives. Sensitivity indicates the proportion of actual positive samples that are predicted as positive. Specificity indicates the proportion of actual negative samples that are predicted as negative. Precision indicates the proportion of samples predicted as positive that are actually positive. In the Dice coefficient and the intersection-over-union, Pre denotes the prediction result and GT the manually labeled ground truth.
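The five metrics can be computed from binary masks as follows (an illustrative NumPy implementation of the formulas above):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Sensitivity, specificity, precision, Dice coefficient, and IoU
    for a predicted binary mask against a ground-truth binary mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)    # true positives
    tn = np.sum(~pred & ~gt)  # true negatives
    fp = np.sum(pred & ~gt)   # false positives
    fn = np.sum(~pred & gt)   # false negatives
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "dice": 2 * tp / (pred.sum() + gt.sum()),   # 2|Pre∩GT|/(|Pre|+|GT|)
        "iou": tp / np.sum(pred | gt),              # |Pre∩GT|/|Pre∪GT|
    }
```

Note that for an empty prediction or ground truth some denominators become zero, so production code should guard those cases.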
The consistency of the vessel morphology measurements (lumen area, wall thickness, normalized wall index, etc.) between the manual and automatic two-dimensional wall segmentation results is then evaluated with the Pearson correlation coefficient to further assess the segmentation results.
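The Pearson correlation between a manual and an automatic measurement series can be computed as below (the numbers are made up for illustration, not the study's data):

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two measurement series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.corrcoef(a, b)[0, 1])

# e.g. manual vs. automatic lumen-area measurements (mm^2); illustrative values
manual = [4.1, 5.0, 3.6, 6.2, 4.8]
auto = [4.0, 5.2, 3.5, 6.0, 4.9]
r = pearson_r(manual, auto)
```

A coefficient near 1 indicates that the automatic morphology measurements track the manual ones closely.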
Fig. 4 shows an example three-dimensional lumen segmentation result with the manual labels as reference; the line inside the lumen is the vessel centerline extracted with the automatic skeletonization algorithm. Good, complete segmentation of the vessel can be observed, and the calculated Dice coefficient of the three-dimensional lumen segmentation result reaches 0.872.

Fig. 5 compares visualizations of the automatic segmentation results for normal and stenotic vessel walls with the manual labels, showing the consistency between the two-dimensional vessel wall segmentation results of normal and stenotic arteries and the manual labels. For the healthy group 1 data set, the two-dimensional wall segmentation sub-model reached high Dice coefficients of 0.941 (lumen) and 0.894 (wall). For the stenosis group 2 data set, the Dice coefficients for lumen and vessel wall segmentation reached 0.922 and 0.896, respectively, as shown in table 1.
TABLE 1 quantitative assessment of three-dimensional lumen and two-dimensional vessel wall segmentation Performance
[Table 1 is provided as an image (Figure BDA0003060118380000141) in the original patent publication.]
DICE, DICE coefficients; sen, sensitivity; spe, specificity; prec, precision; MIoU, average cross-over ratio.
For the vessel morphology measurements, fig. 6 shows scatter plots of the two-dimensional lumen area, vessel wall area, and normalized wall index obtained by the intracranial vessel wall segmentation model versus manual labeling, with the Pearson correlation coefficient shown in the lower right corner. The Pearson correlation coefficients between the manual segmentation and the model's results are 0.983 for lumen area, 0.929 for vessel wall area, and 0.906 for normalized wall index. Regression analysis shows that the probability of the between-sample differences arising from sampling error is less than 0.001 (p < 0.001), so the results are statistically significant, demonstrating the accuracy and robustness of the intracranial vessel wall segmentation model.
It should be understood that although the steps in the flowchart of fig. 2 are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided a blood vessel wall image segmentation apparatus including: a magnetic resonance image acquisition module 702, a three-dimensional lumen segmentation module 704, a blood vessel centerline extraction module 706, and a two-dimensional tube wall segmentation module 708, wherein:
a magnetic resonance image acquisition module 702 for acquiring a three-dimensional black blood T1 weighted magnetic resonance intracranial vessel wall image.
And the three-dimensional lumen segmentation module 704 is used for performing lumen segmentation on the intracranial blood vessel wall image to obtain a three-dimensional blood vessel lumen segmentation result.
And the blood vessel centerline extraction module 706 is configured to extract a centerline according to the three-dimensional blood vessel lumen segmentation result, and perform re-cutting perpendicular to the centerline to obtain a two-dimensional blood vessel wall cross-section image.
A two-dimensional vessel wall segmentation module 708, configured to segment the two-dimensional vessel wall cross-section image to obtain a vessel lumen region and an outer vessel wall region.
In an alternative embodiment, the three-dimensional lumen segmentation module 704 is further configured to generate an image patch from the intracranial blood vessel wall image using a cubic sliding window;
performing lumen segmentation on the image patch to obtain a binary three-dimensional blood vessel lumen segmentation result;
the blood vessel centerline extraction module 706 is further configured to extract a centerline according to the binarized three-dimensional blood vessel lumen segmentation result.
In an alternative embodiment, the vessel wall image segmentation apparatus further comprises a vessel information determination module for calculating morphological information of the vessel based on the vessel lumen region and the outer tube wall region.
In an optional embodiment, the intracranial vessel wall segmentation model comprises a three-dimensional lumen segmentation sub-model, a centerline extraction sub-model and a two-dimensional vessel wall segmentation sub-model;
the three-dimensional lumen segmentation module 704 is further configured to perform lumen segmentation on the intracranial blood vessel wall image through the three-dimensional lumen segmentation sub-model to obtain a three-dimensional blood vessel lumen segmentation result;
the blood vessel centerline extraction module 706 is further configured to extract a centerline according to the three-dimensional blood vessel lumen segmentation result through the centerline extraction submodel, and perform re-cutting perpendicular to the centerline to obtain a two-dimensional blood vessel wall cross-section image;
the two-dimensional tube wall segmentation module 708 is further configured to segment the two-dimensional blood vessel wall cross-section image through the two-dimensional tube wall segmentation sub-model to obtain a blood vessel lumen region and an outer tube wall region;
the blood vessel wall image segmentation device also comprises a training module, a data acquisition module and a data processing module, wherein the training module is used for acquiring an intracranial blood vessel wall sample image of three-dimensional black blood T1 weighted magnetic resonance and a label image of the intracranial blood vessel wall sample image;
inputting the intracranial vascular wall sample image into an initial three-dimensional lumen segmentation sub-model to obtain a three-dimensional vascular lumen sample segmentation result;
inputting the three-dimensional vascular cavity sample segmentation result into a center line extraction sub-model to obtain a sample center line, and performing re-cutting perpendicular to the sample center line to obtain a two-dimensional vascular wall sample cross-section image;
inputting the two-dimensional vessel wall sample cross section image into the initial two-dimensional vessel wall segmentation sub-model to obtain a sample vessel cavity area and a sample outer tube wall area;
calculating the loss of a three-dimensional model according to the segmentation result of the three-dimensional blood vessel cavity sample and the marked image, and obtaining a trained three-dimensional lumen segmentation sub-model when the segmentation loss of the three-dimensional model is lower than a preset three-dimensional model loss threshold value or the training times reach a preset maximum value;
and calculating the loss of the two-dimensional model according to the sample vessel cavity area, the sample outer tube wall area and the marked image, and obtaining a trained two-dimensional tube wall segmentation sub-model when the segmentation loss of the two-dimensional model is lower than a preset two-dimensional model loss threshold value or the training times reach a preset maximum value.
In an optional embodiment, the training module is further configured to generate a sample image patch by using a cubic sliding window for the intracranial blood vessel wall sample image;
and inputting the sample image patch into an initial three-dimensional lumen segmentation sub-model to obtain a binary three-dimensional blood vessel lumen sample segmentation result.
In an optional embodiment, the training module is further configured to perform data enhancement on the two-dimensional blood vessel wall sample cross-sectional image to obtain an enhanced two-dimensional blood vessel wall sample cross-sectional image; the enhanced two-dimensional vessel wall sample cross-section image is used for training an initial two-dimensional vessel wall segmentation sub-model to obtain a trained two-dimensional vessel wall segmentation model.
In an optional embodiment, the training module further comprises a sample vessel information determination unit, configured to take the sample vessel lumen region and the sample outer tubular wall region as a sample vessel segmentation result;
calculating the segmentation accuracy according to the sample blood vessel segmentation result and the manual marking result of the sample;
and calculating the morphological information of the sample blood vessel according to the sample blood vessel cavity area and the sample outer tube wall area, and verifying the segmentation consistency of the morphological information of the sample blood vessel and the gold standard.
For specific definition of the blood vessel wall image segmentation device, reference may be made to the above definition of the blood vessel wall image segmentation method, which is not described herein again. The modules in the blood vessel wall image segmentation device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a method of vessel wall image segmentation. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a three-dimensional black blood T1 weighted magnetic resonance intracranial vessel wall image;
performing lumen segmentation on the intracranial vascular wall image to obtain a three-dimensional vascular lumen segmentation result;
extracting a central line according to the three-dimensional blood vessel cavity segmentation result, and performing re-cutting perpendicular to the central line to obtain a two-dimensional blood vessel wall cross section image;
and (4) segmenting the two-dimensional vessel wall cross-section image to obtain a vessel cavity area and an outer tube wall area.
In one embodiment, the processor, when executing the computer program, further performs the steps of: performing lumen segmentation on the intracranial vessel wall image to obtain a three-dimensional vessel lumen segmentation result, including: generating image patches from the intracranial vessel wall image using a cubic sliding window; and performing lumen segmentation on the image patches to obtain a binarized three-dimensional vessel lumen segmentation result; extracting a centerline according to the vessel lumen segmentation result, including: extracting the centerline according to the binarized three-dimensional vessel lumen segmentation result.
In one embodiment, the processor, when executing the computer program, further performs the steps of: after obtaining the blood vessel cavity area and the outer tube wall area, the method further comprises the following steps: morphological information of the blood vessel is calculated according to the blood vessel cavity area and the outer tube wall area.
In one embodiment, the processor, when executing the computer program, further performs the steps of: the intracranial vascular wall segmentation model comprises a three-dimensional lumen segmentation sub-model, a central line extraction sub-model and a two-dimensional vascular wall segmentation sub-model; the method further comprises the following steps: performing lumen segmentation on the intracranial vascular wall image through the three-dimensional lumen segmentation submodel to obtain a three-dimensional vascular lumen segmentation result; extracting a central line according to the three-dimensional vascular cavity segmentation result through the central line extraction submodel, and performing re-cutting perpendicular to the central line to obtain a two-dimensional vascular wall cross section image; segmenting the two-dimensional vessel wall cross-section image through the two-dimensional vessel wall segmentation sub-model to obtain a vessel lumen area and an outer vessel wall area; the training process of the intracranial vessel wall segmentation model comprises the following steps: acquiring a three-dimensional black blood T1 weighted magnetic resonance intracranial blood vessel wall sample image and a label image of the intracranial blood vessel wall sample image; inputting the intracranial vascular wall sample image into an initial three-dimensional lumen segmentation sub-model to obtain a three-dimensional vascular lumen sample segmentation result; inputting the three-dimensional vascular cavity sample segmentation result into a center line extraction sub-model to obtain a sample center line, and performing re-cutting perpendicular to the sample center line to obtain a two-dimensional vascular wall sample cross section image; inputting the two-dimensional vessel wall sample cross section image into the initial two-dimensional vessel wall segmentation sub-model to obtain a sample vessel cavity area and a sample outer tube wall area; calculating the loss 
of a three-dimensional model according to the segmentation result of the three-dimensional blood vessel cavity sample and the marked image, and obtaining a trained three-dimensional lumen segmentation sub-model when the segmentation loss of the three-dimensional model is lower than a preset three-dimensional model loss threshold value or the training times reach a preset maximum value; and calculating the loss of the two-dimensional model according to the sample vessel cavity area, the sample outer tube wall area and the marked image, and obtaining a trained two-dimensional tube wall segmentation sub-model when the segmentation loss of the two-dimensional model is lower than a preset two-dimensional model loss threshold value or the training times reach a preset maximum value.
In one embodiment, the processor, when executing the computer program, further performs the steps of: inputting the intracranial vascular wall sample image into an initial three-dimensional lumen segmentation sub-model to obtain a three-dimensional vascular lumen sample segmentation result, wherein the three-dimensional vascular lumen sample segmentation result comprises the following steps: generating a sample image patch by adopting a cubic sliding window for the intracranial blood vessel wall sample image; and inputting the sample image patch into an initial three-dimensional lumen segmentation sub-model to obtain a binarization three-dimensional blood vessel lumen sample segmentation result.
In one embodiment, the processor, when executing the computer program, further performs the steps of: after the two-dimensional vessel wall sample cross-section image is obtained by re-cutting perpendicular to the sample centerline, the method further comprises: performing data augmentation on the two-dimensional vessel wall sample cross-section image to obtain an augmented two-dimensional vessel wall sample cross-section image; the augmented image is used to train the initial two-dimensional vessel wall segmentation sub-model, yielding the trained two-dimensional vessel wall segmentation model.
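The patent only states that data augmentation is applied, without naming the transforms. A minimal sketch using common geometric augmentations (random flips and quarter-turn rotations, which preserve the cross-section's content) might look like this:

```python
import numpy as np

def augment_cross_section(img, rng):
    """Apply simple geometric augmentations to a 2-D cross-section:
    random vertical/horizontal flips and a random 90-degree rotation.
    The specific transforms here are an assumption, chosen because they
    leave pixel intensities unchanged."""
    if rng.random() < 0.5:
        img = np.flipud(img)       # flip along the vertical axis
    if rng.random() < 0.5:
        img = np.fliplr(img)       # flip along the horizontal axis
    k = rng.integers(0, 4)         # 0-3 quarter turns
    return np.rot90(img, k)

rng = np.random.default_rng(0)
img = np.arange(16, dtype=np.float32).reshape(4, 4)
aug = augment_cross_section(img, rng)
```

Because every transform is a permutation of pixels, the augmented image keeps the same shape and the same multiset of intensities as the input.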
In one embodiment, the processor, when executing the computer program, further performs the steps of: after the sample vessel lumen region and the sample outer vessel wall region are obtained, the method further comprises: taking the sample vessel lumen region and the sample outer vessel wall region together as a sample vessel segmentation result; calculating the segmentation accuracy from the sample vessel segmentation result and the manual annotation of the sample; and calculating morphological information of the sample vessel from the sample vessel lumen region and the sample outer vessel wall region, and verifying the consistency of that morphological information against the gold standard.
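The patent does not name the accuracy metric; the Dice similarity coefficient is a common choice for comparing a predicted mask against a manual annotation, and is used here purely as an example:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity between two binary masks:
    2 * |pred AND gt| / (|pred| + |gt|). Returns 1.0 for two empty masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy example: the prediction overlaps the annotation in one pixel.
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
```

For the prediction and annotation above, the overlap is one pixel out of 2 + 1 labeled pixels, so the coefficient is 2/3.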
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; the computer program, when executed by a processor, implements the steps of:
acquiring a three-dimensional black blood T1 weighted magnetic resonance intracranial vessel wall image;
performing lumen segmentation on the intracranial vascular wall image to obtain a three-dimensional vascular lumen segmentation result;
extracting a central line according to the three-dimensional blood vessel cavity segmentation result, and performing re-cutting perpendicular to the central line to obtain a two-dimensional blood vessel wall cross section image;
segmenting the two-dimensional vessel wall cross-section image to obtain a vessel lumen region and an outer vessel wall region.
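The four steps above can be sketched as a single pipeline. Every stage function below is a hypothetical placeholder standing in for a trained sub-model, not the patent's actual implementation:

```python
def segment_vessel_wall(volume, lumen_model, reslice, wall_model):
    """Chain the four steps of the method:
    acquire volume -> 3-D lumen segmentation -> centerline extraction
    and perpendicular re-cutting -> per-slice 2-D wall segmentation."""
    lumen_mask = lumen_model(volume)               # step 2: 3-D lumen segmentation
    cross_sections = reslice(volume, lumen_mask)   # step 3: centerline + re-cut
    return [wall_model(s) for s in cross_sections] # step 4: lumen / outer-wall regions

# Toy stand-ins that only demonstrate the data flow between stages.
results = segment_vessel_wall(
    volume="T1w volume",
    lumen_model=lambda v: "lumen mask",
    reslice=lambda v, m: ["slice0", "slice1"],
    wall_model=lambda s: {"lumen": s, "outer_wall": s},
)
```

The pipeline returns one lumen/outer-wall result per resampled cross-section, which is what the morphological analysis in the later embodiments consumes.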
In one embodiment, the computer program when executed by the processor further performs the steps of: the performing of lumen segmentation on the intracranial vessel wall image to obtain a three-dimensional vessel lumen segmentation result comprises: generating image patches from the intracranial vessel wall image using a cubic sliding window; and inputting the image patches into the trained intracranial vessel wall segmentation model to obtain a binarized three-dimensional vessel lumen segmentation result. The extracting of the centerline according to the vessel lumen segmentation result comprises: extracting the centerline from the binarized three-dimensional vessel lumen segmentation result.
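Centerline extraction from a binarized lumen mask is typically done with 3-D skeletonization (e.g. `skimage.morphology.skeletonize_3d`). The much-simplified sketch below instead takes the centroid of the mask in each axial slice, which only works for a single roughly straight vessel but illustrates the idea without extra dependencies:

```python
import numpy as np

def centerline_from_mask(mask):
    """Estimate a centerline as the per-slice centroid of a binarized
    lumen mask. This is a deliberate simplification of true 3-D
    skeletonization and assumes one vessel running along axis 0."""
    points = []
    for z in range(mask.shape[0]):
        ys, xs = np.nonzero(mask[z])
        if ys.size:                       # skip slices with no lumen voxels
            points.append((z, ys.mean(), xs.mean()))
    return points

# A one-voxel-wide straight "vessel" along the z axis.
mask = np.zeros((3, 5, 5), dtype=np.uint8)
mask[:, 2, 2] = 1
line = centerline_from_mask(mask)
```

Each returned point gives the slice index and the in-plane center; the perpendicular re-cutting step would then resample the volume on planes orthogonal to the tangent of this point sequence.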
In one embodiment, the computer program when executed by the processor further performs the steps of: after the vessel lumen region and the outer vessel wall region are obtained, the method further comprises: calculating morphological information of the vessel from the vessel lumen region and the outer vessel wall region.
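The patent does not enumerate the morphological measures. Lumen area, wall area and the normalized wall index (wall area divided by outer-wall area) are standard vessel-wall-imaging quantities and are used here as an assumed example:

```python
import numpy as np

def vessel_morphology(lumen_mask, outer_mask, pixel_area=1.0):
    """Compute example morphological measures from the lumen region and
    the outer vessel wall region of one cross-section. The choice of
    measures is an assumption; the wall is the outer region minus the lumen."""
    lumen_area = lumen_mask.sum() * pixel_area
    outer_area = outer_mask.sum() * pixel_area
    wall_area = outer_area - lumen_area
    nwi = wall_area / outer_area if outer_area else 0.0
    return {"lumen_area": lumen_area,
            "wall_area": wall_area,
            "normalized_wall_index": nwi}

# Toy cross-section: 1-pixel lumen inside a 3x3 outer wall contour.
lumen = np.zeros((5, 5), dtype=np.uint8); lumen[2, 2] = 1
outer = np.zeros((5, 5), dtype=np.uint8); outer[1:4, 1:4] = 1
m = vessel_morphology(lumen, outer)
```

With a 1-pixel lumen inside a 9-pixel outer region, the wall covers 8 pixels and the normalized wall index is 8/9.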
In one embodiment, the computer program when executed by the processor further performs the steps of: the intracranial vessel wall segmentation model comprises a three-dimensional lumen segmentation sub-model, a centerline extraction sub-model and a two-dimensional vessel wall segmentation sub-model; the method further comprises: performing lumen segmentation on the intracranial vessel wall image through the three-dimensional lumen segmentation sub-model to obtain a three-dimensional vessel lumen segmentation result; extracting a centerline from the three-dimensional vessel lumen segmentation result through the centerline extraction sub-model, and re-cutting perpendicular to the centerline to obtain a two-dimensional vessel wall cross-section image; and segmenting the two-dimensional vessel wall cross-section image through the two-dimensional vessel wall segmentation sub-model to obtain a vessel lumen region and an outer vessel wall region. The training process of the intracranial vessel wall segmentation model comprises: acquiring a three-dimensional black blood T1 weighted magnetic resonance intracranial vessel wall sample image and a label image of the sample image; inputting the sample image into an initial three-dimensional lumen segmentation sub-model to obtain a three-dimensional vessel lumen sample segmentation result; inputting the sample segmentation result into the centerline extraction sub-model to obtain a sample centerline, and re-cutting perpendicular to the sample centerline to obtain a two-dimensional vessel wall sample cross-section image; inputting the sample cross-section image into an initial two-dimensional vessel wall segmentation sub-model to obtain a sample vessel lumen region and a sample outer vessel wall region; calculating a three-dimensional model loss from the three-dimensional vessel lumen sample segmentation result and the label image, and obtaining a trained three-dimensional lumen segmentation sub-model when the three-dimensional model loss falls below a preset three-dimensional model loss threshold or the number of training iterations reaches a preset maximum; and calculating a two-dimensional model loss from the sample vessel lumen region, the sample outer vessel wall region and the label image, and obtaining a trained two-dimensional vessel wall segmentation sub-model when the two-dimensional model loss falls below a preset two-dimensional model loss threshold or the number of training iterations reaches a preset maximum.
In one embodiment, the computer program when executed by the processor further performs the steps of: the inputting of the intracranial vessel wall sample image into an initial three-dimensional lumen segmentation sub-model to obtain a three-dimensional vessel lumen sample segmentation result comprises: generating sample image patches from the intracranial vessel wall sample image using a cubic sliding window; and inputting the sample image patches into the initial three-dimensional lumen segmentation sub-model to obtain a binarized three-dimensional vessel lumen sample segmentation result.
In one embodiment, the computer program when executed by the processor further performs the steps of: after the two-dimensional vessel wall sample cross-section image is obtained by re-cutting perpendicular to the sample centerline, the method further comprises: performing data augmentation on the two-dimensional vessel wall sample cross-section image to obtain an augmented two-dimensional vessel wall sample cross-section image; the augmented image is used to train the initial two-dimensional vessel wall segmentation sub-model, yielding the trained two-dimensional vessel wall segmentation model.

In one embodiment, the computer program when executed by the processor further performs the steps of: after the sample vessel lumen region and the sample outer vessel wall region are obtained, the method further comprises: taking the sample vessel lumen region and the sample outer vessel wall region together as a sample vessel segmentation result; calculating the segmentation accuracy from the sample vessel segmentation result and the manual annotation of the sample; and calculating morphological information of the sample vessel from the sample vessel lumen region and the sample outer vessel wall region, and verifying the consistency of that morphological information against the gold standard.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of vessel wall image segmentation, the method comprising:
acquiring a three-dimensional black blood T1 weighted magnetic resonance intracranial vessel wall image;
performing lumen segmentation on the intracranial vascular wall image to obtain a three-dimensional vascular lumen segmentation result;
extracting a central line according to the three-dimensional vascular cavity segmentation result, and performing re-cutting perpendicular to the central line to obtain a two-dimensional vascular wall cross section image;
and segmenting the two-dimensional vessel wall cross-section image to obtain a vessel lumen region and an outer vessel wall region.
2. The method according to claim 1, wherein the performing lumen segmentation on the intracranial vessel wall image to obtain a three-dimensional vessel lumen segmentation result comprises:
generating image patches from the intracranial vessel wall image using a cubic sliding window;
performing lumen segmentation on the image patches to obtain a binarized three-dimensional vessel lumen segmentation result;
and the extracting a centerline according to the vessel lumen segmentation result comprises:
extracting the centerline from the binarized three-dimensional vessel lumen segmentation result.
3. The method of claim 1, wherein after the vessel lumen region and the outer vessel wall region are obtained, the method further comprises:
calculating morphological information of the vessel according to the vessel lumen region and the outer vessel wall region.
4. The method of claim 1, wherein the intracranial vessel wall segmentation model comprises a three-dimensional lumen segmentation submodel, a centerline extraction submodel, and a two-dimensional vessel wall segmentation submodel;
the method further comprises the following steps:
performing lumen segmentation on the intracranial vascular wall image through the three-dimensional lumen segmentation submodel to obtain a three-dimensional vascular lumen segmentation result;
extracting a central line according to the three-dimensional vascular cavity segmentation result through the central line extraction submodel, and performing re-cutting perpendicular to the central line to obtain a two-dimensional vascular wall cross section image;
segmenting the two-dimensional vessel wall cross-section image through the two-dimensional vessel wall segmentation sub-model to obtain a vessel lumen region and an outer vessel wall region;
the training process of the intracranial vessel wall segmentation model comprises the following steps:
acquiring a three-dimensional black blood T1 weighted magnetic resonance intracranial blood vessel wall sample image and a label image of the intracranial blood vessel wall sample image;
inputting the intracranial vascular wall sample image into an initial three-dimensional lumen segmentation sub-model to obtain a three-dimensional vascular lumen sample segmentation result;
inputting the three-dimensional vascular cavity sample segmentation result into a center line extraction sub-model to obtain a sample center line, and performing re-cutting perpendicular to the sample center line to obtain a two-dimensional vascular wall sample cross section image;
inputting the two-dimensional vessel wall sample cross-section image into an initial two-dimensional vessel wall segmentation sub-model to obtain a sample vessel lumen region and a sample outer vessel wall region;
calculating a three-dimensional model loss according to the three-dimensional vessel lumen sample segmentation result and the label image, and obtaining a trained three-dimensional lumen segmentation sub-model when the three-dimensional model loss is lower than a preset three-dimensional model loss threshold or the number of training iterations reaches a preset maximum;
and calculating a two-dimensional model loss according to the sample vessel lumen region, the sample outer vessel wall region and the label image, and obtaining a trained two-dimensional vessel wall segmentation sub-model when the two-dimensional model loss is lower than a preset two-dimensional model loss threshold or the number of training iterations reaches a preset maximum.
5. The method according to claim 4, wherein the inputting the intracranial vessel wall sample image into an initial three-dimensional lumen segmentation sub-model to obtain a three-dimensional vessel lumen sample segmentation result comprises:
generating sample image patches from the intracranial vessel wall sample image using a cubic sliding window;
and inputting the sample image patches into the initial three-dimensional lumen segmentation sub-model to obtain a binarized three-dimensional vessel lumen sample segmentation result.
6. The method of claim 4, wherein after said re-cutting perpendicular to the sample centerline to obtain a two-dimensional vessel wall sample cross-sectional image, further comprising:
performing data augmentation on the two-dimensional vessel wall sample cross-section image to obtain an augmented two-dimensional vessel wall sample cross-section image; and the augmented two-dimensional vessel wall sample cross-section image is used for training the initial two-dimensional vessel wall segmentation sub-model to obtain a trained two-dimensional vessel wall segmentation model.
7. The method of claim 4, wherein after the sample vessel lumen region and the sample outer vessel wall region are obtained, the method further comprises:
taking the sample vessel lumen region and the sample outer vessel wall region as a sample vessel segmentation result;
calculating the segmentation accuracy according to the sample vessel segmentation result and the manual annotation of the sample vessel;
and calculating morphological information of the sample vessel according to the sample vessel lumen region and the sample outer vessel wall region, and verifying the consistency of the morphological information of the sample vessel against the gold standard.
8. A blood vessel wall image segmentation apparatus, characterized in that the apparatus comprises:
the magnetic resonance image acquisition module is used for acquiring a three-dimensional black blood T1 weighted magnetic resonance intracranial blood vessel wall image;
the three-dimensional lumen segmentation module is used for performing lumen segmentation on the intracranial vascular wall image to obtain a three-dimensional vascular lumen segmentation result;
the blood vessel central line extraction module is used for extracting a central line according to the three-dimensional blood vessel cavity segmentation result and performing re-cutting perpendicular to the central line to obtain a two-dimensional blood vessel wall cross section image;
and the two-dimensional vessel wall segmentation module is used for segmenting the two-dimensional vessel wall cross-section image to obtain a vessel lumen region and an outer vessel wall region.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202110510482.5A 2021-05-11 2021-05-11 Vascular wall image segmentation method, device, computer equipment and storage medium Pending CN113223015A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110510482.5A CN113223015A (en) 2021-05-11 2021-05-11 Vascular wall image segmentation method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110510482.5A CN113223015A (en) 2021-05-11 2021-05-11 Vascular wall image segmentation method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113223015A true CN113223015A (en) 2021-08-06

Family

ID=77094770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110510482.5A Pending CN113223015A (en) 2021-05-11 2021-05-11 Vascular wall image segmentation method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113223015A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023124830A1 (en) * 2021-12-28 2023-07-06 深圳先进技术研究院 Blood vessel wall image automatic curved planar reformation method based on centerline extraction
CN114359205A (en) * 2021-12-29 2022-04-15 推想医疗科技股份有限公司 Head and neck blood vessel analysis method and device, storage medium and electronic equipment
CN114677396A (en) * 2022-05-27 2022-06-28 天津远景科技服务有限公司 Image processing method, image processing apparatus, and computer-readable storage medium
CN114677396B (en) * 2022-05-27 2022-09-20 天津远景科技服务有限公司 Image processing method, image processing apparatus, and computer-readable storage medium
CN116934741A (en) * 2023-09-11 2023-10-24 首都医科大学附属北京天坛医院 Method and device for acquiring composition and quantitative parameters of one-stop type blood vessel wall
CN116934741B (en) * 2023-09-11 2023-12-26 首都医科大学附属北京天坛医院 Method and device for acquiring composition and quantitative parameters of one-stop type blood vessel wall

Similar Documents

Publication Publication Date Title
Carass et al. Longitudinal multiple sclerosis lesion segmentation: resource and challenge
Cerri et al. A contrast-adaptive method for simultaneous whole-brain and lesion segmentation in multiple sclerosis
Soni et al. Light weighted healthcare CNN model to detect prostate cancer on multiparametric MRI
CN113223015A (en) Vascular wall image segmentation method, device, computer equipment and storage medium
US9959615B2 (en) System and method for automatic pulmonary embolism detection
Nalepa et al. Fully-automated deep learning-powered system for DCE-MRI analysis of brain tumors
US9123095B2 (en) Method for increasing the robustness of computer-aided diagnosis to image processing uncertainties
Coupé et al. LesionBrain: an online tool for white matter lesion segmentation
KR20230059799A (en) A Connected Machine Learning Model Using Collaborative Training for Lesion Detection
CN110910405A (en) Brain tumor segmentation method and system based on multi-scale cavity convolutional neural network
CN115496771A (en) Brain tumor segmentation method based on brain three-dimensional MRI image design
Pan et al. Prostate segmentation from 3d mri using a two-stage model and variable-input based uncertainty measure
CN116188485A (en) Image processing method, device, computer equipment and storage medium
CN113764101B (en) Novel auxiliary chemotherapy multi-mode ultrasonic diagnosis system for breast cancer based on CNN
Tong et al. Automatic lumen border detection in IVUS images using dictionary learning and kernel sparse representation
Ait Mohamed et al. Hybrid method combining superpixel, supervised learning, and random walk for glioma segmentation
Jalab et al. Fractional Renyi entropy image enhancement for deep segmentation of kidney MRI
CN116309640A (en) Image automatic segmentation method based on multi-level multi-attention MLMA-UNet network
CN113379770B (en) Construction method of nasopharyngeal carcinoma MR image segmentation network, image segmentation method and device
KR20190068254A (en) Method, Device and Program for Estimating Time of Lesion Occurrence
CN115345856A (en) Breast cancer chemotherapy curative effect prediction model based on image dynamic enhancement mode
Sun et al. Kidney tumor segmentation based on FR2PAttU-Net model
Thiruvenkadam et al. Fully automatic brain tumor extraction and tissue segmentation from multimodal MRI brain images
Keçeli et al. A GPU-Based approach for automatic segmentation of white matter lesions
Malkanthi et al. Brain tumor boundary segmentation of MR imaging using spatial domain image processing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination