CN115810018A - Method and system for optimizing segmentation results of blood vessel tree and coronary artery tree of CT image - Google Patents


Info

Publication number
CN115810018A
CN115810018A (application number CN202211529439.4A)
Authority
CN
China
Prior art keywords
image
image blocks
segmentation result
network model
mask
Prior art date
Legal status
Pending
Application number
CN202211529439.4A
Other languages
Chinese (zh)
Inventor
马春晓
叶宏伟
王瑶法
Current Assignee
Zhejiang Mingfeng Nuclear Medical Imaging System Research Institute
Minfound Medical Systems Co Ltd
Original Assignee
Zhejiang Mingfeng Nuclear Medical Imaging System Research Institute
Minfound Medical Systems Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Mingfeng Nuclear Medical Imaging System Research Institute and Minfound Medical Systems Co Ltd
Priority to CN202211529439.4A
Publication of CN115810018A
Legal status: Pending


Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method and a system for optimizing the segmentation results of a blood vessel tree and a coronary artery tree in a CT image. In the method, a segmentation result image of the blood vessel tree is obtained from an original CT image; a region-of-interest obtaining unit multiplies the segmentation image with the original CT image to obtain a CT image containing only the blood vessel tree, and an image dividing unit divides this image into N non-overlapping image blocks. The mask proportion of the random mask unit is adaptive, and a normalization unit applies a normalized linear transformation to the CT values of all non-mask image blocks to obtain a preprocessed segmentation result image. A network model training module trains an optimization network model with a loss function combining MSE and clDice and obtains the weight information of the trained model. A network model inference module takes the preprocessed segmentation result image as input to the trained optimization network model and, combined with the weight information, produces an optimized segmentation result image.

Description

Method and system for optimizing segmentation results of blood vessel tree and coronary artery tree of CT image
Technical Field
The invention relates to the field of medical imaging equipment, in particular to a method and a system for optimizing segmentation results of a blood vessel tree and a coronary artery tree in a CT (computed tomography) image.
Background
With the improvement of the imaging speed and scanning accuracy of CT (computed tomography) devices, three-dimensional coronary artery reconstruction based on CTA (CT angiography) has been widely applied to cardiac examination and disease diagnosis. Visualization and analysis of accurate coronary vessel conditions can provide a coronary artery contour for the doctor and facilitate analysis of vessel stenosis, calcification, plaque, and the like; it is an important clinical means for early screening of coronary atherosclerotic heart disease, and an important prerequisite is the segmentation and extraction of the coronary vessels.
At present, most coronary artery segmentation methods can identify the bulk of the coronary tree, but segmenting and extracting the coronary arteries of the heart remains difficult owing to objective factors such as the complexity and strong randomness of CT contrast images. The main reasons are as follows: the coronary arteries have a complex structure with many small branches; their gray scale is uneven, with fine peripheral vessels and fuzzy boundaries; they may contain multiple lesions; and motion artifacts of the heart cannot be avoided. All of these affect coronary imaging and often lead to problems such as an incomplete segmented coronary tree and over-segmentation caused by vein adhesion.
Existing coronary artery segmentation methods can be broadly classified into fully automatic, semi-automatic, and interactive segmentation. Semi-automatic and interactive segmentation both require manual intervention, are relatively complex, and are strongly subjective; moreover, whichever method is used, over-segmentation and under-segmentation easily occur, and the accuracy and continuity of fine-branch segmentation are low.
Disclosure of Invention
To overcome these technical defects, the invention aims to provide a deep-learning-based method and system for optimizing the segmentation result of a CT image vessel tree, and a method for optimizing the segmentation result of a CT image coronary artery tree, which can improve the continuity and accuracy of the segmentation result.
The invention discloses a deep-learning-based method for optimizing the segmentation result of a CT image vessel tree, comprising: obtaining a segmentation result image of a blood vessel tree from an original CT image, multiplying the segmentation image with the original CT image to obtain a CT image containing only the blood vessel tree, and dividing this image into N non-overlapping image blocks; presetting a mask proportion a, randomly sampling the N image blocks with a uniformly distributed strategy, retaining N×(1−a) image blocks as non-mask image blocks, and marking the remaining N×a image blocks as mask image blocks; calculating the average value m of the non-mask image blocks and, when m = 0, randomly selecting n image blocks whose average value is not zero from the mask image blocks and exchanging them with n of the non-mask image blocks to obtain a preprocessed segmentation result image; establishing an optimization network model and training it with a loss function until it converges; obtaining the weight information of the trained optimization network model; and inputting the preprocessed segmentation result image into the trained optimization network model and, combined with the weight information, obtaining an optimized segmentation result image.
Preferably, obtaining the preprocessed segmentation result image further includes performing a normalized linear transformation on the CT values of all non-mask image blocks:

v′ = (v − v_mean) / v_std

where v_mean is the mean and v_std is the standard deviation of the CT values v of the non-mask image blocks.
Preferably, when m = 0, randomly selecting n image blocks whose average value is not zero from the mask image blocks and exchanging them with the non-mask image blocks further includes: a is 50%–75%.
Preferably, when m = 0, randomly selecting n image blocks whose average value is not zero from the mask image blocks and exchanging them with the non-mask image blocks further includes: n is at most 25% of the total number of mask image blocks whose average value is not zero.
Preferably, the loss function comprises an MSE loss function for evaluating the mean of the sum of squared errors between the preprocessed segmentation result image output by the optimization network model and the physician-annotated segmentation result image:

MSE = (1/n) Σᵢ₌₁ⁿ (xᵢ − yᵢ)²

where x is the physician-annotated segmentation result image and y is the preprocessed segmentation result image output by the optimization network model.
Preferably, the MSE loss function acts only on the mask image blocks.
Preferably, the loss function further comprises a clDice loss function:

clDice(V_P, V_L) = 2 · T_prec(S_P, V_L) · T_sens(S_L, V_P) / (T_prec(S_P, V_L) + T_sens(S_L, V_P))

where V_L denotes the physician-annotated segmentation result image and V_P the preprocessed segmentation result image output by the optimization network model; S_P and S_L denote the skeleton structures extracted from V_P and V_L, respectively; T_prec(S_P, V_L) is the topological precision, the proportion of S_P lying inside V_L; and T_sens(S_L, V_P) is the topological sensitivity, the proportion of S_L lying inside V_P.
Preferably, the clDice loss function is used for both masked image blocks and non-masked image blocks.
The invention also discloses a deep-learning-based method for optimizing the segmentation result of a CT image coronary artery tree, which adopts the above CT image vessel tree segmentation result optimization method.
The invention also discloses a deep-learning-based system for optimizing the segmentation result of a CT image vessel tree, comprising an image processing module, a network model training module, and a network model inference module. The image processing module comprises a region-of-interest obtaining unit, an image dividing unit, a random mask unit, and a normalization unit. A segmentation result image of the blood vessel tree is obtained from an original CT image; the region-of-interest obtaining unit multiplies the segmentation image with the original CT image to obtain a CT image containing only the blood vessel tree, and the image dividing unit divides this image into N non-overlapping image blocks. The random mask unit presets a mask proportion a, randomly samples the N image blocks with a uniformly distributed strategy, retains N×(1−a) image blocks as non-mask image blocks, and marks the remaining N×a image blocks as mask image blocks. The random mask unit calculates the average value m of the non-mask image blocks; when m = 0, it randomly selects n image blocks whose average value is not zero from the mask image blocks and exchanges them with n of the non-mask image blocks, and the normalization unit performs a normalized linear transformation on the CT values of all non-mask image blocks to obtain a preprocessed segmentation result image. An optimization network model is established and trained in the network model training module with a loss function until it converges, and the weight information of the trained model is obtained. In the network model inference module, the preprocessed segmentation result image is input into the trained optimization network model and, combined with the weight information, an optimized segmentation result image is obtained.
After adopting the above technical scheme, the invention has the following beneficial effects compared with the prior art:
1. The invention optimizes the coronary artery segmentation result through a masked autoencoder (the random mask unit), automatically compensating for the segmentation discontinuities caused by over-segmentation and under-segmentation and ensuring the completeness and accuracy of the extraction of the main and small branches of the coronary tree;
2. The mask probability of the masked autoencoder is chosen adaptively, in order to ensure that the coronary tree is present in both the masked and the non-masked data;
3. The deep-learning-based optimization network model adopts a self-supervised learning method and a loss function combining MSE and clDice, where MSE acts only on the mask image blocks, clDice acts on all image blocks, and clDice further constrains the connectivity of the network output.
Drawings
FIG. 1 is a flowchart of a segmentation result optimization method for a blood vessel tree of a CT image according to the present invention;
FIG. 2 is a block diagram of a segmentation result optimization system of a CT image vessel tree according to the present invention;
FIG. 3 is a schematic structural diagram of the image processing module according to the present invention;
FIG. 4 is a schematic diagram of the work flow of the random mask unit provided by the present invention;
FIG. 5 is a schematic structural diagram of a network model training module provided in the present invention;
FIG. 6 is a schematic structural diagram of a network model inference module provided by the present invention;
FIG. 7 is a diagram illustrating the optimization results of the preferred embodiment of the present invention;
FIG. 8 is a schematic diagram of an optimization result of the preferred embodiment provided by the present invention.
Wherein: 100-image processing module, 101-region-of-interest obtaining unit, 102-image dividing unit, 103-random mask unit, 104-normalization unit, 200-network model training module, 201-encoder, 202-decoder, 300-network model inference module.
Detailed Description
The advantages of the invention are further illustrated in the following description of specific embodiments in conjunction with the accompanying drawings.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if," as used herein, may be interpreted as "when," "upon," or "in response to determining," depending on the context.
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience of description and for simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.
In the description of the present invention, unless otherwise specified and limited, it is to be noted that the terms "mounted," "connected," and "connected" are to be interpreted broadly, and may be, for example, a mechanical connection or an electrical connection, a communication between two elements, a direct connection, or an indirect connection via an intermediate medium, and specific meanings of the terms may be understood by those skilled in the art according to specific situations.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
Referring to the attached figure 1, the invention discloses a method for optimizing a segmentation result of a CT image vessel tree based on deep learning, which comprises the following steps:
S1, acquiring a three-dimensional medical image of the blood vessel tree obtained by CT scanning; all data involved are original data stored in the DICOM format;
S2, acquiring a segmentation annotation image of the blood vessel tree, obtained by combining the comprehensive annotation results of a plurality of professional physicians;
S3, acquiring a segmentation result image of the blood vessel tree; the segmentation image can be a fully automatic, semi-automatic, or interactive segmentation result, and the segmentation method is not limited;
S4, multiplying the segmentation result image of the blood vessel tree with the original CT image to obtain a CT image containing only the blood vessel tree, and dividing the image into N non-overlapping image blocks;
S5, presetting a mask proportion a, randomly sampling the N image blocks with a uniformly distributed strategy, retaining N×(1−a) image blocks as non-mask image blocks, and marking the remaining N×a image blocks as mask image blocks;
S6, calculating the average value m of the non-mask image blocks; when m = 0, randomly selecting n image blocks whose average value is not zero from the mask image blocks and exchanging them with n of the non-mask image blocks to obtain a preprocessed segmentation result image;
S7, performing a normalized linear transformation on the CT values of all the non-mask image blocks;
S8, establishing an optimization network model for the CT image vessel tree segmentation result;
S9, training the optimization network model with the selected loss function until it converges, then obtaining and saving the weights of the whole network;
S10, loading the trained weight file into the trained optimization network model and inputting the preprocessed segmentation result image to complete the optimization of the vessel tree segmentation result.
The loss functions for training the optimized network model include MSE loss functions and clDice loss functions.
The MSE loss function evaluates the mean of the sum of squared errors between the preprocessed segmentation result image output by the optimization network model and the physician-annotated segmentation result image:
MSE = (1/n) Σᵢ₌₁ⁿ (xᵢ − yᵢ)²

where x is the physician-annotated segmentation result image and y is the preprocessed segmentation result image output by the optimization network model. MSE takes values in [0, +∞): it is 0 when the predicted value (the preprocessed segmentation result image output by the optimization network model) exactly matches the true value (the physician-annotated segmentation result image), and it grows as the error between the predicted value and the true value increases.
In the present invention, the MSE loss function acts only on the mask image blocks. The mask tokens are taken out of all tokens generated by the decoder 202 and input into a fully-connected layer whose output channel is mapped to the pixel count (P×C) of one image block; the result, of size (N′, P×C), serves as the predicted value, the mask image blocks serve as the true value, and the MSE loss is computed.
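As a concrete illustration, the following is a minimal numpy sketch of an MSE restricted to the masked blocks, assuming the predictions and targets are already arranged as (N, P·C) patch arrays with a boolean vector marking the mask image blocks (the array names and shapes are our assumption, not taken from the patent):

```python
import numpy as np

def masked_mse(pred, target, mask):
    # pred, target: (N, P*C) flattened image blocks (assumed layout);
    # mask: boolean vector of length N, True for mask image blocks.
    diff = (pred - target)[mask]      # restrict the loss to masked blocks
    return float((diff ** 2).mean())
```

Unmasked blocks contribute nothing to the gradient under this loss, which matches the statement that MSE acts only on the mask image blocks.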
The clDice loss function is a metric that evaluates the connectivity of two samples based on the intersection of skeletons extracted from tubular structures.
clDice(V_P, V_L) = 2 · T_prec(S_P, V_L) · T_sens(S_L, V_P) / (T_prec(S_P, V_L) + T_sens(S_L, V_P))

where V_L denotes the physician-annotated segmentation result image and V_P the preprocessed segmentation result image output by the optimization network model; S_P and S_L denote the skeleton structures extracted from V_P and V_L, respectively; T_prec(S_P, V_L) is the topological precision, the proportion of S_P lying inside V_L; and T_sens(S_L, V_P) is the topological sensitivity, the proportion of S_L lying inside V_P.
In the present invention, the clDice loss function acts on all image blocks (i.e., both the mask image blocks and the non-mask image blocks). All tokens generated by the decoder 202 are input into the fully-connected layer, whose output channel is mapped to the pixel count (P×C) of one image block; the result, of size (N, P×C), serves as the predicted value, and all image blocks serve as the true value.
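The clDice metric itself can be sketched directly from the definition, assuming binary numpy volumes and precomputed skeletons (in a differentiable loss the skeletons would come from a soft-skeletonization step; the function and argument names here are illustrative):

```python
import numpy as np

def cl_dice(v_p, v_l, s_p, s_l, eps=1e-8):
    # v_p, v_l: predicted / labeled binary volumes; s_p, s_l: their skeletons.
    # Tprec(S_P, V_L): fraction of the predicted skeleton lying inside the label.
    tprec = (s_p & v_l).sum() / (s_p.sum() + eps)
    # Tsens(S_L, V_P): fraction of the label skeleton lying inside the prediction.
    tsens = (s_l & v_p).sum() / (s_l.sum() + eps)
    # Harmonic mean of topological precision and sensitivity.
    return 2 * tprec * tsens / (tprec + tsens + eps)
```

A perfect prediction (skeleton fully contained in the label and vice versa) gives a value near 1; disconnected or spurious branches lower it.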
Referring to fig. 2, the present invention further provides a deep-learning-based system for optimizing the segmentation result of a CT image vessel tree that implements the above method; it includes an image processing module 100, a network model training module 200, and a network model inference module 300.
Referring to fig. 3, the image processing module 100 includes a region-of-interest obtaining unit 101, an image dividing unit 102, a random mask unit 103, and a normalization unit 104. The choice of the masking probability in the random mask unit 103 is adaptive, in order to ensure that the vessel tree is present in both the masked and the non-masked data.
The region-of-interest obtaining unit 101 multiplies the segmentation result of the blood vessel tree (for the training data, the segmentation result is obtained by combining the comprehensive annotation results of a plurality of professional physicians) with the original CT image to obtain a CT image x ∈ ℝ^(H×W×C) containing only the blood vessel tree, where the spatial resolution of the image is H×W and C is the number of channels.
the image division unit 102 mainly divides the image x into flat image blocks
Figure BDA0003973998360000071
Where P is the size of each image block, N is the number of image blocks,
Figure BDA0003973998360000072
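For a 2D image this division can be sketched in plain numpy as N = HW/P² flattened, non-overlapping blocks (the actual unit may well operate on 3D CT volumes; the names here are illustrative):

```python
import numpy as np

def patchify(x, p):
    # x: image of shape (H, W, C); p: block side length P.
    # Returns N = HW / P**2 flattened blocks, each of length P*P*C.
    h, w, c = x.shape
    assert h % p == 0 and w % p == 0, "H and W must be divisible by P"
    blocks = x.reshape(h // p, p, w // p, p, c)
    blocks = blocks.transpose(0, 2, 1, 3, 4)   # (H/P, W/P, P, P, C)
    return blocks.reshape(-1, p * p * c)       # (N, P*P*C)
```

The transpose groups the two patch-grid axes before the within-patch axes so that each output row is one contiguous spatial block.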
the random mask unit 103 generally refers to randomly sampling N image blocks by using a uniformly distributed policy according to a preset mask ratio a, reserving N × a (1-a) image blocks as non-mask image blocks, and marking the remaining N × a image blocks as mask image blocks.
Referring to fig. 4, since the input of the random mask unit 103 has already been processed by the region-of-interest obtaining unit 101, the average value of the non-mask image blocks, denoted m, must be calculated to ensure that the non-mask image blocks contain valid information. When m = 0, n image blocks with non-zero average values are randomly picked from the mask image blocks and exchanged with n of the non-mask image blocks. Through repeated trials, a is generally chosen between 50% and 75%, and n is at most 25% of the total number of mask image blocks with non-zero average values.
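A minimal sketch of this sampling-and-swap rule, assuming flattened blocks in an (N, P·C) numpy array; tie-breaking details, such as which retained block is handed back during a swap, are our assumption since the patent does not specify them:

```python
import numpy as np

def random_mask(patches, a, rng):
    # Uniformly sample: keep N*(1-a) blocks unmasked, mask the remaining N*a.
    n_total = len(patches)
    n_keep = int(round(n_total * (1 - a)))
    order = rng.permutation(n_total)
    keep, masked = list(order[:n_keep]), list(order[n_keep:])
    # Fallback when the kept blocks carry no information (m == 0):
    if patches[keep].mean() == 0:
        nonzero = [i for i in masked if patches[i].mean() != 0]
        n_swap = max(1, len(nonzero) // 4)   # n <= 25% of non-empty masked blocks
        for i in nonzero[:n_swap]:
            j = keep.pop()                   # hand an empty kept block back
            masked.remove(i)
            keep.append(i)
            masked.append(j)
    return keep, masked
```

After the fallback, the non-mask set is guaranteed to contain at least one block with vessel content whenever any such block exists.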
The normalization unit 104 performs a linear transformation on the CT values v of the non-mask image blocks to facilitate convergence during subsequent network training:

v′ = (v − v_mean) / v_std

where v_mean is the mean and v_std is the standard deviation of the CT values v of the non-mask image blocks.
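This is a standard per-set standardization; a one-line numpy sketch (the epsilon guard against a zero standard deviation is our addition, not specified in the patent):

```python
import numpy as np

def normalize_blocks(v):
    # v: CT values of the non-mask image blocks (any shape).
    # v' = (v - v_mean) / v_std
    return (v - v.mean()) / (v.std() + 1e-8)
```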
Referring to fig. 5, the network model training module 200 includes an encoder 201 and a decoder 202. The input image refers to the image blocks processed by the random mask unit 103, and the encoder 201 processes only the non-mask image blocks. The encoder 201 may be ViT, ResNet, or another backbone; taking ViT as an example, a trainable linear projection maps the vectorized x_p into a latent D-dimensional embedding space, and the mapped result, called a token, has size (N, D). To encode the spatial information of the image blocks, a position embedding is added to each token to retain its position information:

z_0 = [x_p¹E; x_p²E; …; x_pᴺE] + E_pos

where E ∈ ℝ^((P²·C)×D) is the mapping projection and E_pos ∈ ℝ^(N×D) is the position embedding. The position embedding is learnable and shared across all images. The encoder 201 consists mainly of L layers of Multi-head Self-Attention (MSA) and Multi-Layer Perceptron (MLP) modules, so the output of layer l is:

z′_l = MSA(LN(z_{l−1})) + z_{l−1}
z_l = MLP(LN(z′_l)) + z′_l

where LN(·) denotes layer normalization and z_l represents the encoded image.
The decoder 202 must handle not only the non-mask tokens encoded by the encoder 201 but also the mask tokens. The mask token is a learnable vector shared by all mask image blocks. As in the encoder 201, a position embedding is added to each mask token, one per mask image block, giving size (N′, D), where N′ is the number of mask image blocks; the mask token is copied N′ times so that each mask image block corresponds to one mask token. The non-mask tokens encoded by the encoder 201 and the mask tokens with position information added are then spliced together in the order originally corresponding to the image blocks and used as the input of the decoder 202. When the dimension of the encoded non-mask tokens does not match the input dimension of the decoder 202, a linear mapping projects them to the required dimension.
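The splicing step can be sketched as an index scatter, assuming keep/mask index lists from the random masking step and a learned mask token (all names and shapes are illustrative):

```python
import numpy as np

def assemble_decoder_input(enc_tokens, keep_idx, mask_idx, mask_token, pos_emb):
    # enc_tokens: encoded non-mask tokens, shape (N - N', D);
    # mask_token: the shared learnable vector, shape (D,), copied N' times;
    # pos_emb: position embeddings for all N blocks, shape (N, D).
    n, d = pos_emb.shape
    tokens = np.empty((n, d))
    tokens[keep_idx] = enc_tokens    # restore original block order
    tokens[mask_idx] = mask_token    # broadcast N' copies of the mask token
    return tokens + pos_emb
```

Scattering by the saved indices restores the original block ordering without an explicit sort, after which the decoder sees one token per image block.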
Referring to fig. 6, the input of the network model inference module 300 consists of the segmentation result of the vessel tree (i.e., the preprocessed segmentation result image) and the weight information. The module loads the trained weight information into the deep-learning network model and optimizes the input vessel tree segmentation result: the segmentation result is passed through the image processing module 100 and into the optimization network model loaded with the weight file for inference, which compensates well for discontinuities in the initial segmentation result and suppresses false positives caused by vein adhesion and other factors, completing the optimization of the initial segmentation result. The whole process is simple, quick, and fully automatic, requires no additional manual operation, and can effectively improve the completeness and accuracy of the vessel tree segmentation result.
Referring to figs. 7-8, optimization examples of a preferred embodiment of the deep-learning-based method for optimizing the segmentation result of a CT image vessel tree are shown. The vessel tree segmentation labels used by the optimization network model are obtained by combining the comprehensive annotation results of a plurality of professional physicians, and the final predicted segmentation result output by the trained model is likewise compared against these combined annotations.
The invention also discloses a deep-learning-based method for optimizing the segmentation result of a CT image coronary artery tree, in which the coronary artery tree is optimized with the above CT image vessel tree segmentation result optimization method.
Based on deep learning, the invention trains a model for optimizing the segmentation result of a CT image vessel tree (such as a coronary artery tree) with a loss function combining the MSE and clDice loss functions, automatically compensating for the segmentation discontinuities caused by over-segmentation and under-segmentation and ensuring the completeness and accuracy of the extraction of the main and tiny branches of the coronary tree.
It should be noted that the invention has been described with reference to preferred embodiments but is not limited to them; those skilled in the art may modify the embodiments disclosed above into equivalent embodiments without departing from the scope of the present invention.

Claims (10)

1. A CT image vessel tree segmentation result optimization method based on deep learning is characterized by comprising the following steps:
obtaining a segmentation result image of a blood vessel tree from an original CT image, multiplying the segmentation image with the original CT image to obtain a CT image containing only the blood vessel tree, and dividing this image into N non-overlapping image blocks;
presetting a mask proportion a, randomly sampling the N image blocks with a uniformly distributed strategy, retaining N×(1−a) image blocks as non-mask image blocks, and marking the remaining N×a image blocks as mask image blocks;
calculating the average value m of the non-mask image blocks; when m = 0, randomly selecting n image blocks whose average value is not zero from the mask image blocks and exchanging them with n of the non-mask image blocks to obtain a preprocessed segmentation result image;
establishing an optimization network model, and training the optimization network model by adopting a loss function until the optimization network model converges; acquiring weight information of the trained optimized network model;
and inputting the preprocessed segmentation result image serving as an input image into the trained optimization network model, and obtaining an optimized segmentation result image by combining the weight information.
2. The method for optimizing the segmentation result of the CT image vessel tree as set forth in claim 1, wherein the obtaining the preprocessed segmentation result image further comprises:
performing a normalized linear transformation on the CT values of all non-mask image blocks:

v' = (v − v_mean) / v_std

wherein v_mean is the mean and v_std is the standard deviation of the CT values v of the non-mask image blocks.
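The linear normalization of claim 2 is the standard zero-mean, unit-variance transform over the CT values of the non-mask blocks; a minimal sketch:

```python
import numpy as np

def normalize_blocks(v):
    """Normalized linear transform of non-mask block CT values,
    as in claim 2: v' = (v - v_mean) / v_std."""
    v = np.asarray(v, dtype=float)
    v_mean, v_std = v.mean(), v.std()
    return (v - v_mean) / v_std
```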
3. The method as claimed in claim 1, wherein the mask proportion a is 50% to 75%.
4. The method as claimed in claim 3, wherein n is less than or equal to 25% of the total number of mask image blocks whose average value is not zero.
5. The method for optimizing the segmentation result of the CT image vessel tree according to claim 1, wherein the loss function comprises an MSE loss function for evaluating the mean of the sum of squared errors between the segmentation result image output by the optimized network model and the physician-annotated segmentation result image:

MSE = (1/N) · Σ_i (x_i − y_i)²

wherein x is the physician-annotated segmentation result image and y is the preprocessed segmentation result image output by the optimized network model.
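Restated numerically, the MSE term averages the squared voxel-wise errors between the annotation x and the network output y; a minimal NumPy sketch (the restriction to mask image blocks stated in claim 6 is omitted here):

```python
import numpy as np

def mse_loss(x, y):
    """Mean of the sum of squared errors between annotation x and output y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.mean((x - y) ** 2)
```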
6. The method as claimed in claim 5, wherein the MSE loss function is applied only to the mask image blocks.
7. The method for optimizing the segmentation result of the CT image vessel tree according to claim 1, wherein the loss function further comprises a clDice loss function:

clDice(V_P, V_L) = 2 × T_prec(S_P, V_L) × T_sens(S_L, V_P) / (T_prec(S_P, V_L) + T_sens(S_L, V_P))

wherein V_L represents the physician-annotated segmentation result image and V_P represents the preprocessed segmentation result image output by the optimized network model; S_P and S_L represent the skeleton structures extracted from V_P and V_L, respectively; T_prec(S_P, V_L) is the topological precision, representing the proportion of S_P contained in V_L; T_sens(S_L, V_P) is the topological sensitivity, representing the proportion of S_L contained in V_P.
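A hedged illustration of the clDice terms on binary masks. The skeletons are passed in explicitly here because the claims do not fix a particular skeletonization method (in training, a differentiable soft-skeleton operator would typically stand in for hard skeleton extraction):

```python
import numpy as np

def cl_dice(v_p, v_l, s_p, s_l, eps=1e-8):
    """clDice from masks V_P, V_L and their skeletons S_P, S_L (claim 7):
    T_prec = |S_P ∩ V_L| / |S_P|,  T_sens = |S_L ∩ V_P| / |S_L|,
    clDice = 2 * T_prec * T_sens / (T_prec + T_sens)."""
    v_p, v_l, s_p, s_l = (np.asarray(m, bool) for m in (v_p, v_l, s_p, s_l))
    t_prec = (s_p & v_l).sum() / (s_p.sum() + eps)  # fraction of S_P inside V_L
    t_sens = (s_l & v_p).sum() / (s_l.sum() + eps)  # fraction of S_L inside V_P
    return 2 * t_prec * t_sens / (t_prec + t_sens + eps)
```

A perfect prediction whose skeleton lies entirely inside the label yields clDice ≈ 1; disjoint masks yield 0, penalizing broken vessel topology even when voxel overlap metrics look acceptable.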
8. The method for optimizing the segmentation result of a vessel tree in a CT image according to claim 7, wherein the clDice loss function is applied to both the mask image blocks and the non-mask image blocks.
9. A method for optimizing the segmentation result of a coronary artery tree in a CT image based on deep learning, characterized by using the method for optimizing the segmentation result of a vessel tree in a CT image according to any one of claims 1 to 8.
10. A CT image vessel tree segmentation result optimization system based on deep learning is characterized by comprising an image processing module, a network model training module and a network model reasoning module;
the image processing module comprises an interested region acquisition unit, an image dividing unit, a random mask unit and a normalization unit;
acquiring a segmentation result image of a blood vessel tree from an original CT image, multiplying the segmentation result image by the original CT image in the region-of-interest acquisition unit to obtain a CT image containing only the blood vessel tree, and dividing the image into N non-overlapping image blocks in the image dividing unit;
presetting a mask proportion a in the random mask unit, randomly sampling the N image blocks with a strategy obeying a uniform distribution, retaining N×(1−a) image blocks as non-mask image blocks, and marking the remaining N×a image blocks as mask image blocks;
the random mask unit calculates the average value of the non-mask image blocks, denoted m; when m = 0, n image blocks whose average value is not zero are randomly selected from the mask image blocks and exchanged with n non-mask image blocks; the normalization unit then performs a normalized linear transformation on the CT values of all non-mask image blocks to obtain a preprocessed segmentation result image;
establishing an optimization network model and, in the network model training module, training it with a loss function until it converges; acquiring the weight information of the trained optimization network model;
and in the network model reasoning module, inputting the preprocessed segmentation result image into the trained optimization network model and, in combination with the weight information, obtaining an optimized segmentation result image.
CN202211529439.4A 2022-11-30 2022-11-30 Method and system for optimizing segmentation results of blood vessel tree and coronary artery tree of CT image Pending CN115810018A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211529439.4A CN115810018A (en) 2022-11-30 2022-11-30 Method and system for optimizing segmentation results of blood vessel tree and coronary artery tree of CT image

Publications (1)

Publication Number Publication Date
CN115810018A true CN115810018A (en) 2023-03-17

Family

ID=85484619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211529439.4A Pending CN115810018A (en) 2022-11-30 2022-11-30 Method and system for optimizing segmentation results of blood vessel tree and coronary artery tree of CT image

Country Status (1)

Country Link
CN (1) CN115810018A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117079080A (en) * 2023-10-11 2023-11-17 青岛美迪康数字工程有限公司 Training optimization method, device and equipment for coronary artery CTA intelligent segmentation model
CN117079080B (en) * 2023-10-11 2024-01-30 青岛美迪康数字工程有限公司 Training optimization method, device and equipment for coronary artery CTA intelligent segmentation model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination