CN108805134B - Construction method and application of aortic dissection model - Google Patents

Construction method and application of aortic dissection model

Info

Publication number
CN108805134B
CN108805134B (application CN201810664754.5A)
Authority
CN
China
Prior art keywords
aortic dissection
image
lumen
segmentation
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810664754.5A
Other languages
Chinese (zh)
Other versions
CN108805134A (en
Inventor
柴象飞
郭伟
郭娜
葛阳阳
左盼莉
曹龙
孟博文
王成
李健宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huiying medical technology (Beijing) Co.,Ltd.
Original Assignee
Huiying Medical Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huiying Medical Technology Beijing Co ltd filed Critical Huiying Medical Technology Beijing Co ltd
Priority to CN201810664754.5A priority Critical patent/CN108805134B/en
Publication of CN108805134A publication Critical patent/CN108805134A/en
Application granted granted Critical
Publication of CN108805134B publication Critical patent/CN108805134B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Abstract

The invention provides a construction method and application of an aortic dissection model, comprising the following steps: A. acquiring CTA images of the aortic region of a specified number of aortic dissection patients; B. preprocessing the CTA images, and extracting image features of the aorta, true lumen and false lumen of the aortic dissection from the preprocessed CTA images through a convolutional neural network; and acquiring position marking information of the aorta, the true lumen and the false lumen segmented according to the gold standard; C. training a multi-task network (Multi-task UNet) on the image features and the position marking information to obtain a trained aortic dissection model. Prediction based on this model helps obtain the segmentation result of an aortic dissection quickly and effectively, greatly reducing the doctor's diagnosis time and providing effective support for planning a surgical scheme.

Description

Construction method and application of aortic dissection model
Technical Field
The invention relates to the field of medical images, in particular to a construction method and application of an aortic dissection model.
Background
Aortic dissection is generally treated by stent-graft implantation surgery. Before the operation, a doctor needs to make a prognosis and determine a specific surgical scheme according to the morphological parameters of the dissection (such as the maximum diameter of the true lumen), for example selecting a stent of a suitable size. After the operation, the doctor needs to judge the surgical effect according to the morphological parameters of the dissection. Morphological parameters of the dissection (such as the diameters of the true and false lumens) can be obtained by segmenting the aortic dissection. Currently, aortic dissection segmentation methods are mainly traditional methods and model-based methods such as Hough circle detection and centerline extraction, and such methods are time-consuming when segmenting a single case of aortic dissection.
As deep convolutional neural networks have matured in recent years, some deep-learning-based methods have appeared in the field of aorta segmentation. However, no relevant work has been published on segmenting the dissection lumina (true and false lumens).
Therefore, it is highly desirable to construct an aortic dissection model that can quickly and effectively obtain the segmentation prediction result of an aortic dissection, so as to greatly reduce the doctor's diagnosis time and provide effective support for planning a surgical scheme.
Disclosure of Invention
In view of this, the present application provides a method for constructing an aortic dissection model and an application thereof, which are beneficial to quickly and effectively obtaining a segmentation prediction result of an aortic dissection, so as to greatly reduce diagnosis time of a doctor and provide effective support for planning an operation scheme.
The application provides a method for constructing an aortic dissection model, which comprises the following steps:
A. acquiring CTA (CT angiography) images of the aortic region of a specified number of aortic dissection patients;
B. preprocessing the CTA image, and extracting image features of the aorta, true lumen and false lumen of the aortic dissection from the preprocessed CTA image through a convolutional neural network; and acquiring position marking information of the aorta, the true lumen and the false lumen segmented according to the gold standard;
C. training a multi-task network (Multi-task UNet) according to the image features and the position marking information to obtain a trained aortic dissection model.
Therefore, the aortic dissection prediction model obtained through the steps can quickly and effectively obtain the aortic dissection prediction result, so that the diagnosis time of a doctor is greatly reduced, and effective support is provided for the formulation of an operation scheme.
Preferably, step C is followed by:
D. selecting a specified number of CTA images as a verification set, and verifying the aortic dissection model; wherein each CTA image in the verification set has corresponding extracted image features and position marking information; the verification comprises the following steps:
d1, inputting the original unlabeled CTA image in the verification set into the aortic dissection model, and outputting the prediction results of the segmentation of the aorta, the true lumen and the false lumen of the aortic dissection part in the CTA image through the model;
d2, performing overlapping degree comparison on the prediction result and the position marking information of the gold standard segmentation of the CTA image corresponding to the prediction result, and acquiring a Hausdorff distance between the prediction result and the position marking information;
d3, optimizing the aortic dissection model by adopting a mixed loss function strategy which maximizes the overlapping degree and minimizes the Hausdorff distance, and continuing training the aortic dissection model until the overlapping degree is maximum and the Hausdorff distance is minimum.
From the above, for the segmentation task, the accuracy of the model's prediction can be judged by comparing the overlap between the model's prediction result and the vascular surgeon's gold standard segmentation: the higher the overlap, the higher the prediction accuracy; likewise, the smaller the Hausdorff distance, the higher the prediction accuracy. The aortic dissection model is therefore optimized with a mixed loss function strategy that maximizes the overlap while minimizing the Hausdorff distance, so that the model's prediction becomes more precise.
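As an illustrative sketch (not from the patent) of how the two validation metrics in steps D1 and D2 might be computed with NumPy/SciPy, assuming binary masks; `scipy.spatial.distance.directed_hausdorff` works on 2-D point sets, so foreground coordinates are compared per slice:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred, gt):
    """Degree of overlap (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hausdorff_2d(pred_slice, gt_slice):
    """Symmetric Hausdorff distance between the foreground point sets of one slice."""
    p, g = np.argwhere(pred_slice), np.argwhere(gt_slice)
    if len(p) == 0 or len(g) == 0:
        return 0.0
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

# toy check: a 3x3 square predicted one row too low
gt = np.zeros((8, 8), dtype=np.uint8); gt[2:5, 2:5] = 1
pr = np.zeros((8, 8), dtype=np.uint8); pr[3:6, 2:5] = 1
dsc = dice_score(pr, gt)   # overlap of the two shifted squares
hd = hausdorff_2d(pr, gt)  # contour distance of the one-voxel shift
```

A higher DSC and a lower Hausdorff distance both indicate a more accurate prediction, which matches the optimization target of step D3.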
Preferably, the mixed loss function is obtained by:
differentiable processing is carried out on the overlapping degree, and differentiable processing is carried out on the Hausdorff distance;
obtaining a loss function for each segmentation task of the aorta, the true lumen and the false lumen according to the overlapping degree after the differentiable processing and the Hausdorff distance after the differentiable processing;
and acquiring a final mixed loss function according to the loss function of each segmentation task.
From the above, since the original definitions of the DSC and the Hausdorff distance are both non-differentiable, they cannot be used directly as loss functions. Therefore, a differentiable approximation is first applied to the two indexes, and the final mixed loss function is then obtained from them.
Preferably, the differentiable expression of the degree of overlap is a soft Dice coefficient (the original formula appears only as an image; the form below is reconstructed from the variable definitions):

DSC = Σ_{c ∈ {0,1}} ( 2 Σ_i ρ_{c,i} · g_{c,i} ) / ( Σ_i ρ_{c,i} + Σ_i g_{c,i} )

wherein g is the label of each segmentation task input to the model; c is the category, comprising background and foreground, represented by 0 and 1 respectively; g_c denotes the label of the background or the foreground, and g_{c,i} denotes the ith voxel of g_c; ρ is the label of each segmentation task output by the model; ρ_c denotes the label of the background or the foreground, and ρ_{c,i} denotes the ith voxel of ρ_c; if the size of the label is assumed to be a × b × c, the number of voxels is a × b × c; ρ_{c,i} · g_{c,i} denotes the multiplication of the two voxel values.

The differentiable expression of the Hausdorff distance is:

[differentiable Hausdorff distance formula; present only as an image in the original]

wherein Γ_i^1 and Γ_i^2 are the contours of the ith slice image of the gold standard Γ^1 and of the model prediction result Γ^2, respectively; D denotes the binary image in which the contour lies; f(x) denotes an arbitrary continuous, strictly monotonic function; the integral of f over D is an area integral; and M(D) is a constant calculated from the binary image D.
Preferably, the loss function of each segmentation task has the form (reconstructed; the original expression appears only as an image, and α is the weighting coefficient named in claim 4):

l_i = (1 − DSC_i) + α · HD_i

wherein the expression of the mixed loss function is:

l_total = Σ_i l_i

where i ranges over the aorta, the true lumen and the false lumen, and l_i denotes the aorta segmentation task loss function, the true lumen segmentation task loss function and the false lumen segmentation task loss function, respectively.
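Numerically, the per-task and total losses can be sketched as below (plain NumPy, no autograd; the differentiable Hausdorff surrogate is abstracted as a supplied value `hd`, and the coefficient `alpha` together with the form `l_i = (1 - DSC_i) + alpha * HD_i` is an assumption based on claim 4, since the original expression is an image):

```python
import numpy as np

def soft_dice(prob, label, eps=1e-6):
    """Soft Dice: 2 * sum(p * g) / (sum(p) + sum(g))."""
    return 2.0 * np.sum(prob * label) / (np.sum(prob) + np.sum(label) + eps)

def task_loss(prob, label, hd, alpha=0.1):
    """Hypothetical per-task loss l_i = (1 - DSC_i) + alpha * HD_i."""
    return (1.0 - soft_dice(prob, label)) + alpha * hd

rng = np.random.default_rng(0)
label = (rng.random((4, 4, 4)) > 0.5).astype(float)   # binary ground truth
prob = np.clip(label * 0.9 + 0.05, 0.0, 1.0)          # near-perfect prediction
# one loss per segmentation task: aorta, true lumen, false lumen
per_task = [task_loss(prob, label, hd=1.0) for _ in range(3)]
l_total = sum(per_task)                               # l_total = sum_i l_i
```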
Preferably, the preprocessing of step B comprises:
normalizing the image resolution so that the voxel spacing along the x, y and z axes is 1 mm;
converting the image pixel values into HU values, clipping the HU values to the range (0, 600), and normalizing them to obtain image values with a mean of 0 and a variance of 1; and
randomly rotating the image by an angle between -10 and 10 degrees for data augmentation.
The normalization thus standardizes and unifies the data and facilitates subsequent data processing.
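A minimal sketch of the three preprocessing steps above (SciPy-based; the use of `scipy.ndimage.zoom` for resampling and `scipy.ndimage.rotate` for augmentation is an implementation assumption, as is the linear HU rescaling with the common CT slope/intercept of 1 and -1024):

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def preprocess(volume, spacing, slope=1.0, intercept=-1024.0, rng=None):
    """spacing: (z, y, x) voxel size in mm; the volume is resampled to 1 mm."""
    # 1) normalize resolution: zoom factor = current spacing / target (1 mm)
    vol = zoom(volume.astype(np.float32), spacing, order=1)
    # 2) pixel values -> HU, clip to (0, 600), normalize to mean 0 / variance 1
    hu = np.clip(vol * slope + intercept, 0.0, 600.0)
    hu = (hu - hu.mean()) / (hu.std() + 1e-8)
    # 3) random in-plane rotation in [-10, 10] degrees for data augmentation
    rng = rng or np.random.default_rng()
    return rotate(hu, rng.uniform(-10.0, 10.0), axes=(1, 2),
                  reshape=False, order=1)
```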
The application also provides an aortic dissection segmentation method based on the aortic dissection model, which comprises the following steps:
A', inputting a CTA image of the patient's aortic region into the aortic dissection model;
B', outputting the segmentation prediction results of the aorta, the true lumen and the false lumen at the aortic dissection site.
Therefore, the aortic dissection prediction model obtained through the steps can quickly and effectively obtain the aortic dissection prediction result, so that the diagnosis time of a doctor is greatly reduced, and effective support is provided for the formulation of an operation scheme.
Preferably, after step B' the method further includes post-processing the segmentation prediction result, specifically:
selecting the largest connected region of the aorta part in the segmentation prediction result and removing other mis-segmentations; multiplying the true lumen and the false lumen by the largest binarized connected region to ensure that the true and false lumens lie entirely within the aorta region; and/or
smoothing each slice of the segmented aorta, true lumen and false lumen along the z-axis of the three-dimensional volume using cv2.GaussianBlur; and/or
resolving regions where the true lumen and false lumen segmentations overlap in the prediction result:
let the probabilities of predicting a voxel V of the image as foreground and background in the true lumen segmentation be P1 and P2 respectively, with P1 > P2; let the probabilities of predicting the voxel V as foreground and background in the false lumen segmentation be P3 and P4 respectively, with P3 > P4; calculate Δ1 = P1 - P2 and Δ2 = P3 - P4; if Δ1 > Δ2, voxel V is classified as true lumen, and vice versa.
Through this processing, the true and false lumens are guaranteed to lie entirely within the aorta region, the segmentation is smooth, and overlap between the true and false lumen segmentations in the prediction result is avoided.
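The largest-connected-region step and the true/false lumen overlap rule can be sketched as follows (`scipy.ndimage.label` for connected components is an implementation assumption; the cv2.GaussianBlur smoothing step is omitted here):

```python
import numpy as np
from scipy import ndimage

def largest_component(mask):
    """Keep only the largest binarized connected region of the aorta mask."""
    labeled, n = ndimage.label(mask)
    if n == 0:
        return mask.astype(np.uint8)
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    return (labeled == np.argmax(sizes) + 1).astype(np.uint8)

def postprocess(aorta, true_fg, true_bg, false_fg, false_bg):
    """aorta: binary mask; *_fg / *_bg: per-voxel foreground/background probabilities."""
    aorta = largest_component(aorta)             # drop stray mis-segmentations
    inside = aorta.astype(bool)                  # lumens must lie inside the aorta
    true_mask = (true_fg > true_bg) & inside
    false_mask = (false_fg > false_bg) & inside
    # voxels claimed by both lumens: the larger margin delta = P_fg - P_bg wins
    both = true_mask & false_mask
    true_mask[both] = (true_fg - true_bg)[both] > (false_fg - false_bg)[both]
    false_mask[both] = ~true_mask[both]
    return aorta, true_mask.astype(np.uint8), false_mask.astype(np.uint8)
```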
In summary, the aortic dissection model of the application can quickly and effectively obtain an accurate segmentation prediction result of an aortic dissection, greatly reducing the doctor's diagnosis time and providing effective support for planning a surgical scheme. For example, parameters such as the maximum diameter of the aorta can be measured from the predicted segmentation result, and an aortic stent of appropriate size can be selected accordingly. As another example, parameters such as the diameter of the true lumen can be calculated from the automatic segmentation of the true lumen to evaluate the severity of the dissection and choose among treatment strategies accordingly. The effect of the operation can be evaluated by calculating false lumen parameters before and after the operation.
Drawings
Fig. 1 is a schematic flowchart of a method for constructing an aortic dissection model according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of an aortic dissection segmentation method based on the aortic dissection model according to an embodiment of the present disclosure.
Detailed Description
The present application will be described below with reference to the drawings in the embodiments of the present application.
Example one
As shown in fig. 1, a method for constructing an aortic dissection model provided by the present application includes:
s101, CTA images of the aortic region of a specified number of aortic dissection patients are acquired. The CTA image may use a large number of already available CTA images of the aortic region of an aortic dissection patient.
S102, preprocessing the CTA images, and extracting image features of the aorta, true lumen and false lumen of the aortic dissection from the preprocessed CTA images through a convolutional neural network; and acquiring position marking information of the aorta, the true lumen and the false lumen segmented according to the gold standard.
Wherein the preprocessing comprises:
normalizing the image resolution so that the voxel spacing along the x, y and z axes is 1 mm;
converting the image pixel values into HU values, clipping the HU values to the range (0, 600), and normalizing them to obtain image values with a mean of 0 and a variance of 1; and
randomly rotating the image by an angle between -10 and 10 degrees for data augmentation.
The normalization standardizes and unifies the data and facilitates subsequent data processing.
S103, training a multi-task network (Multi-task UNet) according to the image features and the position marking information to obtain a trained aortic dissection model.
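A minimal sketch of what such a multi-task UNet-style network could look like: a shared encoder/decoder trunk with one sigmoid output head per structure (aorta, true lumen, false lumen). This is a 2-D toy illustration; the depth, channel counts and the 2-D/3-D choice are assumptions, as the patent does not specify the architecture:

```python
import torch
import torch.nn as nn

class MultiTaskUNet(nn.Module):
    def __init__(self, in_ch=1, base=8):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec = nn.Sequential(nn.Conv2d(base * 3, base, 3, padding=1), nn.ReLU())
        # one 1-channel head per segmentation task
        self.heads = nn.ModuleList(nn.Conv2d(base, 1, 1) for _ in range(3))

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))   # skip connection
        return [torch.sigmoid(h(d)) for h in self.heads]    # aorta, true, false lumen

outs = MultiTaskUNet()(torch.randn(2, 1, 32, 32))
```

Each head is trained against its own gold-standard mask, and the three per-task losses are summed into the mixed loss.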
Wherein an adaptive learning-rate strategy is adopted in the training process: the initial learning rate is set to 0.001, and every 100 epochs the learning rate is reduced to 0.95 times its previous value.
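The stated schedule (initial rate 0.001, multiplied by 0.95 every 100 epochs) is a plain step decay; a sketch:

```python
def learning_rate(epoch, initial=0.001, gamma=0.95, step=100):
    """Step decay: the rate is multiplied by gamma once every `step` epochs."""
    return initial * gamma ** (epoch // step)
```

In PyTorch this corresponds to `torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.95)`.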
S104, further selecting a specified number of CTA images as a verification set, and verifying the aortic dissection model; the CTA image comprises extracted image features and position marking information.
Inputting the original unlabeled CTA image in the verification set into the aortic dissection model, and outputting the prediction results of the segmentation of the aorta, the true lumen and the false lumen of the aortic dissection part in the CTA image through the model.
And S105, comparing the overlapping degree of the prediction result with the position marking information of the gold standard segmentation of the CTA image corresponding to the prediction result, and acquiring the Hausdorff distance between the prediction result and the position marking information.
Wherein the larger the overlap between the model's prediction result and the vascular surgeon's gold standard segmentation, the more accurate the prediction result. Parameters such as the maximum diameter of the aorta can be measured from the segmentation prediction result, and an aortic stent of appropriate size can be selected accordingly. The overlap can be measured by the Dice Similarity Coefficient (DSC). The Hausdorff distance is the 'distance' between the model's automatic segmentation of the aorta, true lumen and false lumen and the gold standard segmentation; the smaller its value, the more accurate the prediction result. Dissection parameters calculated from the model's automatic segmentation are then reliable when applied to clinical decision-making.
And S106, judging whether the overlapping degree is maximum and the Hausdorff distance is minimum. If yes, executing S108 to complete model construction. If not, executing S107, and optimizing the aortic dissection model.
And S107, optimizing the aortic dissection model. Specifically, the method comprises the following steps:
and optimizing the aortic dissection model by adopting a hybrid loss function strategy which maximizes the overlapping degree and minimizes the Hausdorff distance, and continuing to train the aortic dissection model.
The obtaining method of the mixing loss function includes:
differentiable processing is carried out on the overlapping degree, and differentiable processing is carried out on the Hausdorff distance;
obtaining a loss function for each segmentation task of the aorta, the true lumen and the false lumen according to the overlapping degree after the differentiable processing and the Hausdorff distance after the differentiable processing;
and acquiring a final mixed loss function according to the loss function of each segmentation task.
Wherein the differentiable expression of the degree of overlap is a soft Dice coefficient (the original formula appears only as an image; the form below is reconstructed from the variable definitions):

DSC = Σ_{c ∈ {0,1}} ( 2 Σ_i ρ_{c,i} · g_{c,i} ) / ( Σ_i ρ_{c,i} + Σ_i g_{c,i} )

wherein g is the label of each segmentation task input to the model; c is the category, comprising background and foreground, represented by 0 and 1 respectively; g_c denotes the label of the background or the foreground, and g_{c,i} denotes the ith voxel of g_c; ρ is the label of each segmentation task output by the model; ρ_c denotes the label of the background or the foreground, and ρ_{c,i} denotes the ith voxel of ρ_c; assuming the label is one-hot encoded as a nominal a × b × c volume, the number of voxels is a × b × c; ρ_{c,i} · g_{c,i} denotes the multiplication of the two voxel values.

The differentiable expression of the Hausdorff distance is:

[differentiable Hausdorff distance formula; present only as an image in the original]

wherein Γ_i^1 and Γ_i^2 are the contours of the ith slice image of the gold standard Γ^1 and of the model prediction result Γ^2, respectively; D denotes the binary image in which the contour lies; f(x) denotes an arbitrary continuous, strictly monotonic function; the integral of f over D is an area integral; and M(D) is a constant calculated from the binary image D.

The loss function of each segmentation task has the form (reconstructed; the original expression appears only as an image, and α is the weighting coefficient named in claim 4):

l_i = (1 − DSC_i) + α · HD_i

wherein the expression of the mixed loss function is:

l_total = Σ_i l_i

where i ranges over the aorta, the true lumen and the false lumen, and l_i denotes the aorta segmentation task loss function, the true lumen segmentation task loss function and the false lumen segmentation task loss function, respectively.
Through the optimization processing, the prediction of the aortic dissection model can be more accurate.
In summary, by constructing the aortic dissection model, the application helps obtain the segmentation prediction result of an aortic dissection quickly and effectively, greatly reducing the doctor's diagnosis time and providing effective support for planning a surgical scheme. For example, parameters such as the maximum diameter of the aorta can be measured from the predicted segmentation result, and an aortic stent of appropriate size can be selected accordingly. As another example, parameters such as the diameter of the true lumen can be calculated from the automatic segmentation of the true lumen to evaluate the severity of the dissection and choose among treatment strategies accordingly. The effect of the operation can be evaluated by calculating false lumen parameters before and after the operation.
Example two
The application also provides an aortic dissection segmentation method based on the aortic dissection model, which comprises the following steps:
s201, inputting a CTA image of an aorta region of a patient into the aortic dissection model;
s202, outputting the prediction results of the segmentation of the aorta, the true lumen and the false lumen at the aortic dissection position.
After step S202, post-processing of the segmented prediction result is further included, specifically:
s203, selecting the largest connected region for the aorta part in the prediction result of the segmentation, and removing other error segmentation; multiplying the true lumen and the false lumen by the maximum binarization communication area to ensure that the true lumen and the false lumen are all in the aorta area; and/or
S204, smoothing each layer of the segmented aorta, the true lumen and the false lumen in the z-axis by adopting cv2.Gaussian Blur; and/or
S205, processing the split and overlapped parts of the real cavity and the false cavity in the prediction result:
the probabilities of predicting a voxel V of an image as foreground and background when true lumen segmentation are set to be P and P respectively1And P2,P1>P2(ii) a The probabilities of predicting a voxel V in an image as a foreground and a background in the false cavity segmentation are respectively P3And P4,P3>P4(ii) a Calculating Delta1=P1-P2And Δ2=P3-P4If Δ1>Δ2Voxel V is classified as a true lumen and vice versa.
In summary, the method quickly and effectively obtains the segmentation prediction result of an aortic dissection and post-processes the prediction result to increase its accuracy, greatly shortening the doctor's diagnosis time and providing effective support for planning a surgical scheme. For example, parameters such as the maximum diameter of the aorta can be measured from the predicted segmentation result, and an aortic stent of appropriate size can be selected accordingly. As another example, parameters such as the diameter of the true lumen can be calculated from the automatic segmentation of the true lumen to evaluate the severity of the dissection and choose among treatment strategies accordingly. The effect of the operation can be evaluated by calculating false lumen parameters before and after the operation.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A method for constructing an aortic dissection model is characterized by comprising the following steps:
A. acquiring CTA images of aortic regions of a specified number of aortic dissection patients;
B. preprocessing the CTA image, and extracting image features of the aorta, true lumen and false lumen of the aortic dissection from the preprocessed CTA image through a convolutional neural network; and acquiring position marking information of the aorta, the true lumen and the false lumen segmented according to the gold standard;
C. training a multi-task network (Multi-task UNet) according to the image features and the position marking information to obtain a trained aortic dissection model;
D. selecting a specified number of CTA images as a verification set, and verifying the aortic dissection model; wherein, the CTA image comprises the extracted image characteristics and position marking information; the method comprises the following steps:
d1, inputting the original unlabeled CTA image in the verification set into the aortic dissection model, and outputting the prediction results of the segmentation of the aorta, the true lumen and the false lumen of the aortic dissection part in the CTA image through the model;
d2, performing overlapping degree comparison on the prediction result and the position marking information of the gold standard segmentation of the CTA image corresponding to the prediction result, and acquiring a Hausdorff distance between the prediction result and the position marking information;
d3, optimizing the aortic dissection model by adopting a mixed loss function strategy which maximizes the overlapping degree and minimizes the Hausdorff distance, and continuing training the aortic dissection model until the overlapping degree is maximum and the Hausdorff distance is minimum.
2. The method according to claim 1, wherein the mixed loss function of step D3 is obtained by:
differentiable processing is carried out on the overlapping degree, and differentiable processing is carried out on the Hausdorff distance;
obtaining a loss function for each segmentation task of the aorta, the true lumen and the false lumen according to the overlapping degree after the differentiable processing and the Hausdorff distance after the differentiable processing;
and acquiring a final mixed loss function according to the loss function of each segmentation task.
3. The method of claim 2, wherein the differentiable expression of the degree of overlap is a soft Dice coefficient (the original formula appears only as an image; the form below is reconstructed from the variable definitions):

DSC = Σ_{c ∈ {0,1}} ( 2 Σ_i ρ_{c,i} · g_{c,i} ) / ( Σ_i ρ_{c,i} + Σ_i g_{c,i} )

wherein g is the label of each segmentation task input to the model; c is the category, comprising background and foreground, represented by 0 and 1 respectively; g_c denotes the label of the background or the foreground, and g_{c,i} denotes the ith voxel of g_c; ρ is the label of each segmentation task output by the model; ρ_c denotes the label of the background or the foreground, and ρ_{c,i} denotes the ith voxel of ρ_c; if the size of the label is assumed to be a × b × c, the number of voxels is a × b × c; ρ_{c,i} · g_{c,i} denotes the multiplication of the two voxel values;

the differentiable expression of the Hausdorff distance is:

[differentiable Hausdorff distance formula; present only as an image in the original]

wherein Γ_i^1 and Γ_i^2 are the contours of the ith slice image of the gold standard Γ^1 and of the model prediction result Γ^2, respectively; D denotes the binary image in which the contour lies; f(x) denotes an arbitrary continuous, strictly monotonic function; the integral of f over D is an area integral; and M(D) is a constant calculated from the binary image D.
4. The method of claim 3, wherein the loss function of each segmentation task is expressed as (reconstructed; the original expression appears only as an image):

l_i = (1 − DSC_i) + α · HD_i

wherein the expression of the mixed loss function is:

l_total = Σ_i l_i

where i ranges over the aorta, the true lumen and the false lumen; l_i denotes the aorta segmentation task loss function, the true lumen segmentation task loss function and the false lumen segmentation task loss function, respectively; and α is a coefficient.
5. The method of claim 1, wherein the preprocessing of step B comprises:
normalizing the image resolution so that the voxel spacing along the x, y and z axes is 1 mm;
converting the image pixel values into HU values, clipping the HU values to the range (0, 600), and normalizing them to obtain image values with a mean of 0 and a variance of 1; and
randomly rotating the image by an angle between -10 and 10 degrees for data augmentation.
6. An aortic dissection segmentation method based on the aortic dissection model constructed by the method of any one of claims 1 to 5, comprising the following steps:
A', inputting a CTA image of the patient's aortic region into the aortic dissection model;
B', outputting the segmentation prediction results of the aorta, the true lumen and the false lumen at the aortic dissection site.
7. The method according to claim 6, further comprising, after step B', post-processing the segmentation predictions, specifically:
C'1, keeping the largest connected region of the predicted aorta and discarding all other, erroneous segmentations; and multiplying the true-lumen and false-lumen predictions by the binarized largest connected region, so that both lie entirely within the aortic region; and/or
C'2, smoothing each slice of the segmented aorta, true lumen, and false lumen along the z-axis using cv2.GaussianBlur; and/or
C'3, resolving voxels where the predicted true lumen and false lumen overlap:
let P1 and P2 be the probabilities of predicting a voxel V as foreground and background in the true-lumen segmentation, with P1 > P2; let P3 and P4 be the probabilities of predicting voxel V as foreground and background in the false-lumen segmentation, with P3 > P4; compute Δ1 = P1 - P2 and Δ2 = P3 - P4; if Δ1 > Δ2, assign voxel V to the true lumen, and otherwise to the false lumen.
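The overlap rule of step C'3 compares the two foreground-minus-background probability margins voxel by voxel. A minimal NumPy sketch (the function name is hypothetical; inputs are per-voxel probability maps):

```python
import numpy as np

def resolve_overlap(p1, p2, p3, p4):
    # p1/p2: per-voxel foreground/background probabilities from the
    # true-lumen segmentation; p3/p4: the same for the false lumen.
    true_fg = p1 > p2
    false_fg = p3 > p4
    overlap = true_fg & false_fg
    d1 = p1 - p2  # Delta_1: true-lumen margin
    d2 = p3 - p4  # Delta_2: false-lumen margin
    # In the overlap, keep the voxel in whichever mask has the larger margin.
    true_mask = true_fg & (~overlap | (d1 > d2))
    false_mask = false_fg & (~overlap | (d1 <= d2))
    return true_mask, false_mask
```

After this step every contested voxel belongs to exactly one of the two lumens, which keeps the true- and false-lumen masks disjoint as the claim requires.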
CN201810664754.5A 2018-06-25 2018-06-25 Construction method and application of aortic dissection model Active CN108805134B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810664754.5A CN108805134B (en) 2018-06-25 2018-06-25 Construction method and application of aortic dissection model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810664754.5A CN108805134B (en) 2018-06-25 2018-06-25 Construction method and application of aortic dissection model

Publications (2)

Publication Number Publication Date
CN108805134A CN108805134A (en) 2018-11-13
CN108805134B true CN108805134B (en) 2021-09-10

Family

ID=64071329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810664754.5A Active CN108805134B (en) 2018-06-25 2018-06-25 Construction method and application of aortic dissection model

Country Status (1)

Country Link
CN (1) CN108805134B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110580702B (en) * 2019-07-16 2023-03-24 慧影医疗科技(北京)股份有限公司 Method for abdominal aortic aneurysm boundary segmentation
CN110652312B (en) * 2019-07-19 2023-03-14 慧影医疗科技(北京)股份有限公司 Blood vessel CTA intelligent analysis system and application
CN110675375A (en) * 2019-09-18 2020-01-10 天津工业大学 Method for automatically distinguishing thoracic and abdominal aorta image interlayers
CN110742633B (en) * 2019-10-29 2023-04-18 慧影医疗科技(北京)股份有限公司 Method and device for predicting risk after B-type aortic dissection operation and electronic equipment
CN110796670B (en) * 2019-10-30 2022-07-26 北京理工大学 Dissection method and device for dissecting interbed artery
CN110826908A (en) * 2019-11-05 2020-02-21 北京推想科技有限公司 Evaluation method and device for artificial intelligent prediction, storage medium and electronic equipment
CN112837322A (en) * 2019-11-22 2021-05-25 北京深睿博联科技有限责任公司 Image segmentation method and device, equipment and storage medium
CN111260134A (en) * 2020-01-17 2020-06-09 南京星火技术有限公司 Debugging assistance apparatus, product debugging apparatus, computer readable medium
CN111724374B (en) * 2020-06-22 2024-03-01 智眸医疗(深圳)有限公司 Evaluation method and terminal of analysis result
CN112330708B (en) * 2020-11-24 2024-04-23 沈阳东软智能医疗科技研究院有限公司 Image processing method, device, storage medium and electronic equipment
CN112561871B (en) * 2020-12-08 2021-09-03 中国医学科学院北京协和医院 Aortic dissection method and device based on flat scanning CT image
CN113763337B (en) * 2021-08-24 2024-05-03 慧影医疗科技(北京)股份有限公司 Method and system for detecting blood supply of aortic dissection false cavity
CN114663354B (en) * 2022-02-24 2023-04-07 中国人民解放军陆军军医大学 Intelligent segmentation method and device for arterial dissections and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1690230A1 (en) * 2003-11-13 2006-08-16 Centre Hospitalier de l'Université de Montréal Automatic multi-dimensional intravascular ultrasound image segmentation method
CN1924926A (en) * 2006-09-21 2007-03-07 复旦大学 Two-dimensional blur polymer based ultrasonic image division method
CN105719295A (en) * 2016-01-21 2016-06-29 浙江大学 Intracranial hemorrhage area segmentation method based on three-dimensional super voxel and system thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080033302A1 (en) * 2006-04-21 2008-02-07 Siemens Corporate Research, Inc. System and method for semi-automatic aortic aneurysm analysis


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Consensus and controversy on endovascular treatment of type B aortic dissection; Guo Wei et al; Chinese Journal of Practical Surgery; 2017-12-31; pp. 1339-1345 *
Hybrid Loss Guided Convolutional; Xin Yang et al; STACOM 2017: Statistical Atlases and Computational Models of the Heart. ACDC and MMWHS Challenges; 2018-03-15; pp. 215-223 *
Semi-automatic segmentation and detection of aorta dissection wall in MDCT angiography; Karl Krissian et al; Medical Image Analysis; 2014-01-31; vol. 18, no. 1, pp. 83-102 *
Train a 3D U-Net to Segment Cranial Vasculature in CTA Volume without Manual Annotation; Xuhui Chen et al; 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018); 2018-05-24; Section 2 *
U-Net: Convolutional Networks for Biomedical Image Segmentation; Olaf Ronneberger et al; Medical Image Computing and Computer-Assisted Intervention MICCAI 2015; 2015-11-18; pp. 234-241 *
Research on CT image segmentation of aortic aneurysm based on deep learning algorithms; Sui Xiaodan; China Master's Theses Full-text Database, Information Science and Technology; 2018-01-15; pp. I138-1370 *

Also Published As

Publication number Publication date
CN108805134A (en) 2018-11-13

Similar Documents

Publication Publication Date Title
CN108805134B (en) Construction method and application of aortic dissection model
US11580646B2 (en) Medical image segmentation method based on U-Net
US10867384B2 (en) System and method for automatically detecting a target object from a 3D image
CN110706246B (en) Blood vessel image segmentation method and device, electronic equipment and storage medium
CN107506761B (en) Brain image segmentation method and system based on significance learning convolutional neural network
CN111488914B (en) Alzheimer disease classification and prediction system based on multitask learning
CN107316294B (en) Lung nodule feature extraction method based on improved depth Boltzmann machine
CN112529839B (en) Method and system for extracting carotid vessel centerline in nuclear magnetic resonance image
CN107451615A (en) Thyroid papillary carcinoma Ultrasound Image Recognition Method and system based on Faster RCNN
US11748902B2 (en) Method, device and system for generating a centerline for an object in an image
CN111369528B (en) Coronary artery angiography image stenosis region marking method based on deep convolutional network
CN111476757A (en) Coronary artery patch data detection method, system, storage medium and terminal
CN104217418A (en) Segmentation of a calcified blood vessel
CN111145173A (en) Plaque identification method, device, equipment and medium for coronary angiography image
US20220198226A1 (en) Method and system for generating a centerline for an object, and computer readable medium
CN113610859B (en) Automatic thyroid nodule segmentation method based on ultrasonic image
CN113012086B (en) Cross-modal image synthesis method
CN114494215A (en) Transformer-based thyroid nodule detection method
CN110738702B (en) Three-dimensional ultrasonic image processing method, device, equipment and storage medium
CN114758137A (en) Ultrasonic image segmentation method and device and computer readable storage medium
CN111127400A (en) Method and device for detecting breast lesions
US20150278976A1 (en) Systems and methods for using geometry sensitivity information for guiding workflow
CN111210398A (en) White blood cell recognition system based on multi-scale pooling
CN111461065B (en) Tubular structure identification method, tubular structure identification device, computer equipment and readable storage medium
US20230115927A1 (en) Systems and methods for plaque identification, plaque composition analysis, and plaque stability detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 100192 A206, floor B-2, Dongsheng Science Park, Zhongguancun, No. 66, xixiaokou Road, Haidian District, Beijing

Patentee after: HUIYING MEDICAL TECHNOLOGY (BEIJING) Co.,Ltd.

Address before: 100192 room 206, 2f, building C-2, Dongsheng Science Park, Zhongguancun, No. 66, xixiaokou Road, Haidian District, Beijing

Patentee before: HUIYING MEDICAL TECHNOLOGY (BEIJING) Co.,Ltd.

CP02 Change in the address of a patent holder
CP03 Change of name, title or address

Address after: 100000 Zhongguancun Dongsheng Science Park, 66 xixiaokou Road, Haidian District, Beijing A206, 2f, building B-2, Northern Territory

Patentee after: Huiying medical technology (Beijing) Co.,Ltd.

Address before: 100192 Northern Territory of Zhongguancun Dongsheng science and Technology Park, 66 xixiaokou Road, Haidian District, Beijing B-2nd floor A206

Patentee before: HUIYING MEDICAL TECHNOLOGY (BEIJING) Co.,Ltd.

CP03 Change of name, title or address
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Construction method and application of a segmentation model for aortic dissection

Granted publication date: 20210910

Pledgee: Bank of Shanghai Co.,Ltd. Beijing Branch

Pledgor: Huiying medical technology (Beijing) Co.,Ltd.

Registration number: Y2024990000074

PE01 Entry into force of the registration of the contract for pledge of patent right