CN113077441A - Coronary artery calcified plaque segmentation method and method for calculating coronary artery calcified score - Google Patents


Info

Publication number
CN113077441A
CN113077441A (application CN202110350214.1A)
Authority
CN
China
Prior art keywords
voxel
classification
medical image
result
prediction
Prior art date
Legal status
Pending
Application number
CN202110350214.1A
Other languages
Chinese (zh)
Inventor
王佳宇
吴迪嘉
丁小柳
Current Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202110350214.1A
Publication of CN113077441A
Legal status: Pending

Classifications

    • G06T — Image data processing or generation, in general (G — Physics; G06 — Computing; calculating or counting)
        • G06T 7/0012 — Biomedical image inspection
        • G06T 7/11 — Region-based segmentation
        • G06T 7/136 — Segmentation; edge detection involving thresholding
        • G06T 2207/10081 — Computed x-ray tomography [CT]
        • G06T 2207/20081 — Training; learning
        • G06T 2207/20084 — Artificial neural networks [ANN]
        • G06T 2207/30048 — Heart; cardiac
    • G06F — Electric digital data processing
        • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
        • G06F 18/2415 — Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus false rejection rate
    • G06N — Computing arrangements based on specific computational models
        • G06N 3/045 — Combinations of networks
        • G06N 3/047 — Probabilistic or stochastic networks
        • G06N 3/08 — Learning methods
    • G06V — Image or video recognition or understanding
        • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
        • G06V 10/28 — Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present application relates to a coronary calcified plaque segmentation method, a method of calculating a coronary calcification score, and a computer-readable storage medium. The segmentation method comprises: acquiring a first medical image of a cardiac region; classifying each voxel of the first medical image using a preset segmentation model to obtain a first classification result, in which each voxel is classified by parent category, and a second classification result, in which each voxel is classified by sub-category; correcting the sub-category of each voxel in the first medical image according to its parent category to obtain a third classification result; and obtaining a coronary calcified plaque segmentation result of the first medical image from the third classification result. The method solves the problem of low robustness of coronary calcified plaque segmentation methods in the related art and improves the robustness of coronary calcified plaque segmentation.

Description

Coronary artery calcified plaque segmentation method and method for calculating coronary artery calcified score
Technical Field
The present application relates to the field of image segmentation, and in particular, to a coronary calcified plaque segmentation method, a method of calculating a coronary calcified score, and a computer-readable storage medium.
Background
Cardiovascular disease (CVD) is the most common cause of death worldwide. The degree of calcification of plaques in the coronary arteries is an important indicator for monitoring cardiovascular disease and an important means of predicting it. The calcification score computed from a cardiac plain-scan (scout) image is an evaluation index reflecting the degree of coronary calcification. Because the cardiac scout image is acquired without contrast enhancement, coronary structures are difficult to discern in it; moreover, the absence of electrocardiogram (ECG) gating allows cardiac motion artifacts, and the low scan dose introduces image noise. Traditionally, the calcification score is computed after manually delineating the calcified regions, or after locating coronary calcified plaque with hand-designed decision rules. Manual delineation is time-consuming and labor-intensive, and its accuracy, generality, and degree of automation leave room for improvement; dividing the calcified regions with decision rules suffers from low robustness.
No effective solution has yet been proposed for the problem of low robustness of coronary calcified plaque segmentation methods in the related art.
Disclosure of Invention
In this embodiment, a coronary artery calcified plaque segmentation method, a method for calculating a coronary artery calcification score, and a computer-readable storage medium are provided to solve the problem of low robustness of coronary artery calcified plaque segmentation methods in the related art.
In a first aspect, the present embodiment provides a coronary calcified plaque segmentation method, including:
acquiring a first medical image of a cardiac region;
classifying each voxel of the first medical image by using a preset segmentation model to obtain a first classification result of each voxel classified according to a parent class and a second classification result of each voxel classified according to a sub-class;
correcting the sub-category of each voxel in the first medical image according to the parent category of each voxel in the first medical image to obtain a third classification result;
and acquiring a coronary calcified plaque segmentation result of the first medical image according to the third classification result.
In some of these embodiments, classifying each voxel of the first medical image according to a parent category and a sub-category, respectively, using the preset segmentation model further comprises:
using a distance field map as reference information to assist the preset segmentation model in classifying each voxel of the first medical image according to the parent category and the sub-category, wherein the distance field map represents the distance from each voxel in the first medical image to the surface of the heart.
In some embodiments, correcting the sub-category of each voxel in the first medical image according to the parent category of each voxel in the first medical image to obtain a third classification result comprises:
determining, as first voxels, the voxels in the first medical image whose first classification result is the non-interest parent category and whose second classification result is an interest sub-category; and
setting the second classification result of each first voxel to the non-interest sub-category.
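The two correction steps above can be sketched in a few lines of vectorized code. This is an illustrative sketch, not the patented implementation; it assumes integer label volumes and the convention that label 0 is the non-interest category in both the parent and sub-category schemes.

```python
import numpy as np

NON_INTEREST = 0  # assumed label for the non-interest parent and sub-category

def correct_sub_categories(parent: np.ndarray, sub: np.ndarray) -> np.ndarray:
    """Produce the third classification result: reset the sub-category of
    every voxel whose parent category is non-interest but whose
    sub-category is an interest sub-category (the 'first voxels')."""
    third = sub.copy()
    first_voxels = (parent == NON_INTEREST) & (sub != NON_INTEREST)
    third[first_voxels] = NON_INTEREST
    return third
```

Applied element-wise, a voxel keeps its interest sub-category only when its parent classification agrees that it lies in the region of interest.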
In some of these embodiments, before correcting the sub-category of each voxel in the first medical image according to the parent category of each voxel in the first medical image to obtain a third classification result, the method further comprises:
acquiring a region of interest of the first medical image;
determining voxels in the first medical image that are outside the region of interest as second voxels;
setting the first classification result of each second voxel to the non-interest parent category, and setting the second classification result of each second voxel to the non-interest sub-category.
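A minimal sketch of this region-of-interest masking, under the same assumed convention that label 0 is the non-interest category in both schemes:

```python
import numpy as np

NON_INTEREST = 0  # assumed non-interest label in both classification schemes

def apply_roi(parent: np.ndarray, sub: np.ndarray,
              roi_mask: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Force every voxel outside the region of interest (the 'second
    voxels') to the non-interest parent category and sub-category."""
    parent = np.where(roi_mask, parent, NON_INTEREST)
    sub = np.where(roi_mask, sub, NON_INTEREST)
    return parent, sub
```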
In some of these embodiments, the preset segmentation model comprises a feature extraction module, a first classification module, and a second classification module; the preset segmentation model is trained by the following steps:
acquiring a training sample, wherein the training sample comprises a second medical image and a classification label of each voxel in the second medical image, and the classification label of each voxel in the second medical image comprises a first classification label of each voxel classified according to a parent class and a second classification label of each voxel classified according to a sub-class;
and taking the second medical image as input data of the feature extraction module, taking the first classification label as a gold standard for classifying each voxel by the first classification module, taking the second classification label as a gold standard for classifying each voxel by the second classification module, and training the feature extraction module, the first classification module and the second classification module.
In some of these embodiments, training the feature extraction module, the first classification module, and the second classification module comprises:
acquiring a region of interest of the second medical image, and determining a voxel in the region of interest in the second medical image as a third voxel;
obtaining a first prediction result of each third voxel classification by the first classification module; determining a first prediction loss of the third voxel classification by the first classification module according to the first prediction result and the first classification label;
obtaining a second prediction result of each third voxel classification by the second classification module; determining a second prediction loss of the third voxel classification by the second classification module according to the second prediction result and the second classification label;
fusing the first prediction loss and the second prediction loss to obtain a third prediction loss;
updating the feature extraction module, the first classification module, and the second classification module based on the third predicted loss.
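The loss computation and fusion in the steps above can be sketched framework-agnostically. Softmax cross-entropy is one common per-voxel loss, and a weighted sum with a hyperparameter `alpha` is one common fusion; both are assumptions, not details fixed by the method.

```python
import numpy as np

def softmax_ce(logits: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Per-voxel softmax cross-entropy. logits: (n_voxels, n_classes)."""
    z = logits - logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels]

def third_prediction_loss(parent_logits, parent_labels,
                          sub_logits, sub_labels, alpha=1.0):
    """Fuse the first (parent-category) and second (sub-category)
    prediction losses into the third prediction loss."""
    first_loss = softmax_ce(parent_logits, parent_labels).mean()
    second_loss = softmax_ce(sub_logits, sub_labels).mean()
    return first_loss + alpha * second_loss
```

The third prediction loss would then drive a standard gradient update of the feature extraction module and both classification heads.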
In some of these embodiments, the first prediction result comprises a first probability value that each third voxel belongs to the interest parent category, and the second prediction result comprises a second probability value that each third voxel belongs to each sub-category; determining, from the second prediction result and the second classification label, a second prediction loss of the third voxel classification by the second classification module comprises:
normalizing the first probability value to obtain a normalized numerical value corresponding to each third voxel;
determining a fourth predicted loss of each third voxel classification by the second classification module according to the second probability value and the second classification label;
and determining a weighted sum of the fourth prediction loss by using the normalized numerical value corresponding to each third voxel as a weight value of each third voxel, and using the weighted sum as the second prediction loss.
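The three steps above can be sketched as follows. Sum-to-one normalization of the parent-interest probabilities is one possible choice of normalization (the claims do not fix it), and the per-voxel sub-category losses stand in for the "fourth prediction loss":

```python
import numpy as np

def weighted_sub_loss(p_interest: np.ndarray,
                      per_voxel_loss: np.ndarray) -> float:
    """Second prediction loss: per-voxel sub-category losses weighted by
    the normalized probability that each voxel belongs to the interest
    parent category.

    p_interest     -- first probability value per third voxel
    per_voxel_loss -- fourth prediction loss per third voxel
    """
    weights = p_interest / p_interest.sum()   # one possible normalization
    return float(np.sum(weights * per_voxel_loss))
```

Voxels the parent head considers likely to lie in the region of interest thus contribute more to the sub-category loss.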
In some of these embodiments,
determining, from the first prediction result and the first classification label, a first prediction loss of the third voxel classification by the first classification module comprises: determining a parent-category prediction loss for each of the third voxels according to the first prediction result and the first classification label; determining a first weight value for each of the third voxels according to the number of third voxels of each category in the first classification label; determining a weighted sum of the parent-category prediction losses for the third voxels of the same category; and fusing the weighted sums of the parent-category prediction losses corresponding to the respective categories to obtain the first prediction loss;
determining, from the second prediction result and the second classification label, a second prediction loss of the third voxel classification by the second classification module comprises: determining a sub-category prediction loss for each voxel in the third voxel according to the second prediction result and the second classification label; determining a second weight value of each voxel in the third voxels according to the number of the third voxels in each category in the second classification label; determining a weighted sum of the sub-category prediction losses for the third voxel of the same category; and fusing the weighted sum of the sub-category prediction losses corresponding to each category to obtain the second prediction loss.
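The per-category weighting described above can be sketched with inverse-frequency weights (one common choice for the weight value, which the claims leave open) and averaging as the fusion of the per-category weighted sums:

```python
import numpy as np

def class_balanced_loss(labels: np.ndarray,
                        per_voxel_loss: np.ndarray) -> float:
    """Weight each voxel inversely to the count of its category, take the
    weighted sum of losses per category, then fuse the per-category sums
    by averaging. Applies equally to the parent-category (first) and
    sub-category (second) prediction losses."""
    classes, counts = np.unique(labels, return_counts=True)
    per_class_sums = []
    for c, n in zip(classes, counts):
        weight = 1.0 / n                     # assumed inverse-frequency weight
        per_class_sums.append(np.sum(weight * per_voxel_loss[labels == c]))
    return float(np.mean(per_class_sums))    # assumed fusion: plain average
```

Such weighting prevents the abundant non-interest voxels from dominating the loss over the scarce calcified-plaque voxels.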
In a second aspect, the present embodiment also provides a method for calculating a coronary calcification score, the method including:
acquiring a first medical image of a cardiac region;
classifying each voxel of the first medical image by using a preset segmentation model to obtain a first classification result of each voxel classified according to a parent class and a second classification result of each voxel classified according to a sub-class;
correcting the sub-category of each voxel in the first medical image according to the parent category of each voxel in the first medical image to obtain a third classification result;
obtaining a coronary calcified plaque segmentation result of the first medical image according to the third classification result;
and determining the coronary artery calcification score corresponding to the first medical image according to the coronary artery calcification plaque segmentation result.
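The claims do not fix the scoring formula used in this last step; the widely used Agatston method is one plausible realisation, sketched below. It assumes a Hounsfield-unit image, a binary plaque mask from the segmentation result, and a known in-plane voxel area; each connected lesion contributes its area times a density weight derived from its peak HU.

```python
import numpy as np
from scipy import ndimage

def agatston_score(hu: np.ndarray, plaque_mask: np.ndarray,
                   voxel_area_mm2: float) -> float:
    """Agatston-style calcium score over a segmented plaque mask."""
    labeled, n_lesions = ndimage.label(plaque_mask)
    score = 0.0
    for lesion in range(1, n_lesions + 1):
        voxels = labeled == lesion
        peak = hu[voxels].max()
        # Standard Agatston density weight by peak attenuation
        w = 4 if peak >= 400 else 3 if peak >= 300 else \
            2 if peak >= 200 else 1 if peak >= 130 else 0
        score += voxels.sum() * voxel_area_mm2 * w
    return score
```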
In a third aspect, the present embodiment also provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the coronary calcified plaque segmentation method according to the first aspect and/or the steps of the method of calculating a coronary calcific score according to the second aspect.
Compared with the related art, the coronary calcified plaque segmentation method, the method for calculating a coronary calcification score, and the computer-readable storage medium provided in this embodiment acquire a first medical image of a cardiac region; classify each voxel of the first medical image using a preset segmentation model to obtain a first classification result, in which each voxel is classified by parent category, and a second classification result, in which each voxel is classified by sub-category; correct the sub-category of each voxel in the first medical image according to its parent category to obtain a third classification result; and obtain a coronary calcified plaque segmentation result of the first medical image from the third classification result. This solves the problem of low robustness of coronary calcified plaque segmentation methods in the related art and improves the robustness of coronary calcified plaque segmentation.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of the coronary calcified plaque segmentation method according to the embodiment.
Fig. 2 is a flow chart of a method of calculating coronary calcium scores according to a preferred embodiment of the present application.
FIG. 3 is a flow chart of the preferred embodiment of the present application for generating a distance field map.
FIG. 4 is a flowchart of the preferred embodiment of the present application for generating a mask for a region of interest.
FIG. 5 is a flow chart of the training of the segmentation model of the preferred embodiment of the present application.
FIG. 6 is a schematic diagram of the network architecture of the segmentation model and its total prediction loss generation according to the preferred embodiment of the present application.
Fig. 7 is a flow chart of coronary artery calcium score segmentation based on a segmentation model according to the preferred embodiment of the present application.
Fig. 8 is a flowchart of a coronary artery calcium score calculation method according to a preferred embodiment of the present application.
Fig. 9 is a block diagram illustrating a structure of a coronary calcified plaque segmentation apparatus according to an embodiment of the present application.
Fig. 10 is a block diagram illustrating a structure of an apparatus for calculating coronary artery calcium scores according to an embodiment of the present application.
Detailed Description
For a clearer understanding of the objects, aspects and advantages of the present application, reference is made to the following description and accompanying drawings.
Unless defined otherwise, technical or scientific terms used herein shall have the same general meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The use of the terms "a" and "an" and "the" and similar referents in the context of this application do not denote a limitation of quantity, either in the singular or the plural. The terms "comprises," "comprising," "has," "having," and any variations thereof, as referred to in this application, are intended to cover non-exclusive inclusions; for example, a process, method, and system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or modules, but may include other steps or modules (elements) not listed or inherent to such process, method, article, or apparatus. Reference throughout this application to "connected," "coupled," and the like is not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference to "a plurality" in this application means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. In general, the character "/" indicates a relationship in which the objects associated before and after are an "or". The terms "first," "second," "third," and the like in this application are used for distinguishing between similar items and not necessarily for describing a particular sequential or chronological order.
In the present embodiment, a coronary calcified plaque segmentation method is provided, fig. 1 is a flowchart of the coronary calcified plaque segmentation method of the present embodiment, as shown in fig. 1, the flowchart includes the following steps:
step S101, a first medical image of a cardiac region is acquired.
Step S102, classifying each voxel of the first medical image by using a preset segmentation model to obtain a first classification result of each voxel classified according to a parent class and a second classification result of each voxel classified according to a sub-class.
Step S103, correcting the sub-category of each voxel in the first medical image according to the parent category of each voxel in the first medical image to obtain a third classification result.
And step S104, acquiring a coronary calcified plaque segmentation result of the first medical image according to the third classification result.
Through the above steps S101 to S104, the first medical image of the cardiac region is processed by the preset segmentation model to obtain two classification results, one by parent category and one by sub-category. The preset segmentation model includes, but is not limited to, a shallow model based on traditional machine learning or a deep learning model based on an artificial neural network. The number of parent categories is smaller than the number of sub-categories. For example, the parent categories may be two classes distinguishing a region of interest from a region of non-interest, while the sub-categories may be five classes distinguishing the region of non-interest and the calcified plaque of each of the four main coronary branches; the four plaque sub-categories all belong to the parent category corresponding to the region of interest.
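One concrete (assumed, not mandated) encoding of such a two-level category scheme, with a consistency check that mirrors the correction logic of step S103:

```python
# Assumed labels: parent 0 = non-interest, 1 = interest;
# sub-categories 1-4 = calcified plaque of the four main coronary
# branches (e.g. LM, LAD, LCX, RCA), 0 = non-interest.
SUB_TO_PARENT = {0: 0, 1: 1, 2: 1, 3: 1, 4: 1}

def consistent(parent_label: int, sub_label: int) -> bool:
    """A voxel's two labels agree when its sub-category maps to its
    parent category; inconsistent voxels are the ones step S103 corrects."""
    return SUB_TO_PARENT[sub_label] == parent_label
```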
In this embodiment, the corrected classification result (i.e., the third classification result) is obtained by combining the parent-category and sub-category classification results of the medical image, and the segmentation result is then derived from the corrected classification result. Calcified plaque regions are thus segmented automatically, avoiding the time-consuming, labor-intensive manual delineation of calcified regions. The preset segmentation model can be trained with shallow-learning or deep-learning methods, so no decision rules need to be designed by hand, which improves the robustness of calcified-region division.
In this embodiment, the first medical image may be a CT scout image. A CT scout scan, also called a plain (non-enhanced) scan, is acquired without intravenous injection of an iodinated contrast agent and is usually the first CT examination performed; the slice thickness and slice spacing are chosen according to the body part, organ, or region of interest being imaged. Compared with an enhanced scan, a CT scout image can therefore be acquired more quickly and without the burden that contrast administration places on the patient.
Taking an artificial-neural-network-based segmentation model as an example, a sample CT scout image is used as the input of the segmentation model, and the manually annotated class label of each voxel in that image serves as the gold standard for training. In the segmentation model for calcified plaque segmentation in this embodiment, features of each voxel in the CT scout image are extracted by the convolution layers of the artificial neural network, and the weight values applied to the extracted features are updated according to the prediction loss (also called the error) between the prediction result output by the model and the gold standard. Updating the weight values until this prediction loss falls below a set threshold is called parameter convergence; the resulting segmentation model then predicts the classification of each voxel of an input CT scout image using the trained weight values (also called network parameters).
In this embodiment, when the segmentation model is trained, feature information may be extracted in advance and used as reference information to assist the preset segmentation model in classifying each voxel of the first medical image by parent category and sub-category. For example, during training or inference, the feature information is fed into the segmentation model together with the CT scout image. The pre-extracted feature information may be a distance field map representing the distance of each voxel of the CT scout image from the surface of the heart: each voxel of the distance field map corresponds one-to-one to a voxel of the CT scout image, and its value is that voxel's distance from the heart surface. Because calcified plaque lies near the surface of the heart, extracting this distance in advance to assist training can accelerate parameter convergence and improve the training result. Consistent with the training procedure, when calcified plaque segmentation is performed with the converged segmentation model, the same feature information is extracted from the CT scout image in advance and input into the model together with the image to obtain the segmentation result.
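Such a distance field map can be computed with a Euclidean distance transform. A sketch under the assumption that a binary heart mask is available (e.g. from a prior heart segmentation step); boundary voxels just inside the mask receive the distance in voxels to the nearest background voxel:

```python
import numpy as np
from scipy import ndimage

def distance_field_map(heart_mask: np.ndarray) -> np.ndarray:
    """Unsigned distance of every voxel to the heart surface, defined here
    as the boundary of the binary heart mask."""
    mask = heart_mask.astype(bool)
    # For voxels inside the heart: distance to the nearest background voxel
    inside = ndimage.distance_transform_edt(mask)
    # For voxels outside the heart: distance to the nearest heart voxel
    outside = ndimage.distance_transform_edt(~mask)
    return np.where(mask, inside, outside)
```

The resulting map has the same shape as the scout image and could be concatenated with it as an extra input channel.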
In the above step S103, the first classification result, obtained by classifying the voxels by parent category, is used as the basis for correcting the second classification result, obtained by classifying the voxels by sub-category. The parent categories are divided into an interest parent category and a non-interest parent category; likewise, the sub-categories are divided into interest sub-categories and a non-interest sub-category. Preferably, there is exactly one non-interest parent category, one interest parent category, and one non-interest sub-category.
Ideally, a voxel classified by the segmentation model into the non-interest parent category should also be classified into the non-interest sub-category; conversely, a voxel classified into the non-interest sub-category should be classified into the non-interest parent category.
In practice, however, after the voxels are classified by the segmentation model, some voxels may be assigned the interest parent category together with the non-interest sub-category, or the non-interest parent category together with an interest sub-category. To correct such departures from the ideal case and obtain a better segmentation result, in this embodiment the voxels in the first medical image whose first classification result is the non-interest parent category and whose second classification result is an interest sub-category are first determined to be the first voxels, and the second classification result of these first voxels is then set to the non-interest sub-category, thereby correcting the second classification result. The corrected classification result is the third classification result.
In step S104, after the third classification result is obtained for each voxel in the medical image, the connected domains of the voxels assigned to each sub-category can be found according to the third classification result, and the boundary of each connected domain can be determined to obtain the image segmentation result of each sub-category.
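The connected-domain search per sub-category can be sketched as below, reusing the assumed convention that sub-category 0 is non-interest; each connected component of a sub-category's voxels is one candidate calcified plaque:

```python
import numpy as np
from scipy import ndimage

def segment_by_subcategory(third: np.ndarray, n_sub: int) -> dict:
    """For each interest sub-category 1..n_sub, label the connected
    domains of its voxels in the third classification result.

    Returns {sub_category: (labeled_volume, component_count)}."""
    plaques = {}
    for c in range(1, n_sub + 1):
        labeled, count = ndimage.label(third == c)
        plaques[c] = (labeled, count)
    return plaques
```

Boundaries of each labeled component can then be traced to produce the per-branch segmentation result.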
In some embodiments, before the sub-category of each voxel in the first medical image is corrected according to the parent category of each voxel in the first medical image and the third classification result is obtained, the region of interest of the first medical image may be further acquired; determining voxels in the first medical image that are outside the region of interest as second voxels; the first classification result of the second voxel is set as a parent class of non-interest and the second classification result of the second voxel is set as a child class of non-interest. In the above manner, correction of the segmentation result of the medical image can be further achieved in combination with the region of interest extracted in other manners. For example, the region of interest including the suspected calcified plaque region may be obtained by a conventional region growing method, a threshold segmentation method, or an image segmentation method based on an artificial neural network.
The preset segmentation model of the present embodiment adopts a supervised learning method, and specifically can be trained through the following steps:
step 1, obtaining a training sample, wherein the training sample comprises a second medical image and a classification label of each voxel in the second medical image, and the classification label of each voxel in the second medical image comprises a first classification label of each voxel classified according to a parent class and a second classification label of each voxel classified according to a sub-class.
Step 2, take the second medical image as input data of the feature extraction module, take the first classification label as the gold standard for the first classification module's classification of each voxel, take the second classification label as the gold standard for the second classification module's classification of each voxel, and train the feature extraction module, the first classification module, and the second classification module.
The preset segmentation model adopted in the embodiment comprises a feature extraction module, a first classification module and a second classification module; the input data of the feature extraction module comprises medical images, and feature graphs obtained by processing of the feature extraction module are respectively provided for the first classification module and the second classification module for classification processing. The first classification module and the second classification module classify each voxel in the medical image according to a parent class and a sub-class respectively. The input data of the first classification module and the second classification module are from the feature extraction module, and the input data of the first classification module and the second classification module share the network parameters of the feature extraction module, so the network parameters of the feature extraction module can be updated based on the total prediction loss of the output results of the first classification module and the second classification module.
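The shared-trunk, two-head structure can be sketched as follows. This is an illustrative numpy stand-in, not the actual network: the convolutional feature extractor is replaced by a single per-voxel linear layer, and all weight names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# shared feature extractor (stand-in for the U-Net/V-Net trunk)
W_feat = rng.normal(size=(1, 8))          # maps raw intensity -> 8 features
# two classification heads reading the SAME feature map
W_parent = rng.normal(size=(8, 2))        # 2 parent categories
W_sub    = rng.normal(size=(8, 5))        # 5 sub-categories

voxels   = rng.normal(size=(100, 1))      # 100 voxels, 1 channel each
features = np.tanh(voxels @ W_feat)       # shared representation
p_parent = softmax(features @ W_parent)   # first classification result
p_sub    = softmax(features @ W_sub)      # second classification result
print(p_parent.shape, p_sub.shape)        # (100, 2) (100, 5)
```

Because both heads read `features`, gradients from both heads' losses flow back into `W_feat`, which is the sense in which the feature extraction module is updated by the total prediction loss of the two classification modules.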
In the present embodiment, the image segmentation processing of the medical image may be performed based on a deep learning model constructed by a full convolution network. For example, the segmentation model may employ a network framework of a full convolution network such as U-Net, V-Net, or a variant of this network.
Taking a V-Net network as an example, the V-Net network uses convolution operations to extract features from the data, while reducing the data resolution with an appropriate stride at the end of each "stage". The left side of the overall structure is a progressively compressing (contracting) path, and the right side is a progressively decompressing (expanding) path. The final output is the same size as the original image. In a V-Net network, the result of each stage in the contracting path is added as part of the input to the corresponding stage of the expanding path on the right. In this way, part of the information lost through compression can be preserved, the accuracy of the final boundary segmentation is improved, and the convergence rate of the model is also improved.
Also to improve training efficiency, when the segmentation model is trained, a suspected calcified region in the sample CT scout image can be extracted as a region of interest. Methods for extracting the region of interest include, but are not limited to, a region growing method, a threshold segmentation method, or an image segmentation method based on an artificial neural network, which performs a preliminary image segmentation on the CT scout image to obtain its region of interest. In the case where the sample CT scout image and the sample distance field map are used as input data of the segmentation model, after obtaining the region of interest of the sample CT scout image, a mask may be generated from the region of interest. In this embodiment, considering the generality of the segmentation model's application phase, the mask is not used to process the sample CT scout image input into the segmentation model; instead, the mask is used to process the classification probability map output by the segmentation model, so that the prediction loss is calculated from the classification probability map of the region of interest, and the segmentation model is trained according to this prediction loss.
For example, when the feature extraction module, the first classification module and the second classification module are trained, a region of interest of the second medical image may be acquired, and a voxel in the region of interest in the second medical image is determined as a third voxel; acquiring a first prediction result of each third voxel classification by the first classification module; determining a first prediction loss of the first classification module to the third voxel classification according to the first prediction result and the first classification label; acquiring a second prediction result of the second classification module for classifying each third voxel; determining a second prediction loss of the second classification module for the third voxel classification according to the second prediction result and the second classification label; fusing the first prediction loss and the second prediction loss to obtain a third prediction loss; and updating the feature extraction module, the first classification module and the second classification module according to the third prediction loss.
In this embodiment, the image segmentation result is obtained based on a classification task at a voxel level, and both training and prediction are based on classification at the voxel level. In training the segmentation model, the training sample actually used for training the segmentation model is each voxel on the medical image, and the gold standard is the first classification label and the second classification label of the voxel respectively. In this way, the training of the segmentation model does not require a large number of medical images as input data for the segmentation model, but the segmentation model can be trained on a smaller number of medical images.
In the above manner, the pre-extracted region of interest can exclude most voxels belonging to parent and child categories of non-interest. In the training image, voxels belonging to the parent category or the sub-category of non-interest are referred to as negative samples of the segmentation model in this embodiment, and voxels belonging to the parent category or the sub-category of interest are referred to as positive samples of the segmentation model in this embodiment. The total prediction loss of the segmentation model is determined according to the third voxels in the region of interest, so that the loss of positive and negative samples can be balanced, the task of multi-classification according to sub-categories is simpler, and only the categories of the voxels in the region of interest need to be correctly segmented.
In some of these embodiments, the first prediction result includes a first probability value that each third voxel belongs to the parent category of interest, and the second prediction result includes second probability values that each third voxel belongs to the sub-categories. The second prediction loss may also be determined as a weighted sum whose weight values are derived from the first probability values produced by the first classification module for the region of interest. For example, determining the second prediction loss of the second classification module for the third voxel classification according to the second prediction result and the second classification label includes:
and step 1, normalizing the first probability value to obtain a normalized numerical value corresponding to each third voxel.
And 2, determining a fourth prediction loss of each third voxel classification by the second classification module according to the second probability value and the second classification label.
And 3, taking the normalized numerical value corresponding to each third voxel as the weight value of each third voxel, determining the weighted sum of the fourth prediction loss, and taking the weighted sum as the second prediction loss.
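The three steps above can be sketched as follows, assuming the per-voxel loss is a cross-entropy on the probability the second classification module assigns to each voxel's true sub-category (function names are illustrative):

```python
import numpy as np

def softmax_norm(p):
    """Normalize the binary head's foreground probabilities over all voxels."""
    e = np.exp(p - p.max())
    return e / e.sum()

def weighted_sub_loss(p_fg, p_true_sub):
    """p_fg: foreground probability of each voxel from the first (binary) module.
    p_true_sub: probability the second module assigns to each voxel's true
    sub-category. Returns the weighted sum used as the second prediction loss."""
    w = softmax_norm(p_fg)                    # step 1: normalized values
    per_voxel = -np.log(p_true_sub + 1e-12)   # step 2: fourth prediction loss
    return float(np.sum(w * per_voxel))       # step 3: weighted sum

loss = weighted_sub_loss(np.array([0.9, 0.1, 0.8]),
                         np.array([0.7, 0.6, 0.5]))
```

Voxels the binary head confidently calls background receive small weights, so their sub-category losses contribute little to the total.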
In this embodiment, the first prediction result and the second prediction result are both probability maps having the same resolution as the medical image input to the segmentation model. Each voxel on the probability map corresponds to a voxel at a corresponding location of the medical image input to the segmentation model. The value of each voxel on the probability map represents the probability value of a voxel on the medical image corresponding to the same location of the voxel belonging to a certain class. Typically, one probability map representation is used for each category. In this embodiment, the probability map for representing the category of interest is referred to as a foreground probability map, and the probability map for representing the non-category of interest is referred to as a background probability map.
Since the number of negative samples (non-calcified regions, i.e. voxels belonging to the parent category or sub-category of non-interest) and the number of positive samples (calcified regions, i.e. voxels belonging to the parent category or sub-categories of interest) in a medical image differ greatly, the samples are severely unbalanced and the model is difficult to train. In this embodiment, the binary classification network used for parent-category classification can remove most negative samples, and the foreground probability map output by the first classification module is used as a weight to guide the training of the second classification module, so that the segmentation model mainly learns how to distinguish the positive samples of the multiple sub-categories of interest and the few negative samples missed by the first classification module. Because the weight values corresponding to negative samples correctly identified by the first classification module are very low, their weighted prediction losses are also very low; that is, the segmentation model does not focus on learning those samples, and the losses of the multiple sub-categories to be classified are then of similar magnitude, which balances the training samples and improves the training effect of the segmentation model.
For the first prediction loss and the second prediction loss, in this embodiment, the prediction losses of each parent category or each child category are calculated, and then the prediction losses of each category are fused.
When the prediction losses corresponding to the parent categories are fused or the prediction losses corresponding to the sub-categories are fused, the prediction losses of the categories can be directly added, or the number of voxels included in each category can be used as a weight value to calculate a weighted sum, and the weighted sum is used as the final prediction loss of each classification module.
For example, determining a first predicted loss of the third voxel classification by the first classification module based on the first prediction result and the first classification label comprises: determining the parent class prediction loss of each voxel in the third voxel according to the first prediction result and the first classification label; determining a first weight value of each voxel in the third voxels according to the number of the third voxels in each category in the first classification label; determining a weighted sum of parent class prediction losses for a third voxel of the same class; and fusing the weighted sum of the parent class prediction losses corresponding to the classes to obtain a first prediction loss.
For another example: determining a second prediction loss of the third voxel classification by the second classification module according to the second prediction result and the second classification label comprises: determining the sub-category prediction loss of each voxel in the third voxel according to the second prediction result and the second classification label; determining a second weight value of each voxel in the third voxels according to the number of the third voxels in each category in the second classification label; determining a weighted sum of sub-category prediction losses for a third voxel of the same category; and fusing the weighted sum of the prediction losses of the sub-categories corresponding to the categories to obtain a second prediction loss.
In the above step, the weight value of each third voxel may be determined according to the number of voxels included in the category to which each third voxel belongs. For example, when the number of the third voxels classified into a parent category or a child category is N, the weight value of each third voxel in the parent category or the child category may be set to a value related to the number of the third voxels, for example, 1/N, so as to adaptively generate the weight value of each third voxel without manually setting parameters.
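The adaptive 1/N weighting can be sketched as:

```python
import numpy as np

def class_balanced_weights(labels):
    """Weight each voxel by 1/N_c, where N_c is the voxel count of its class,
    so that every class contributes equally to the fused loss."""
    classes, counts = np.unique(labels, return_counts=True)
    count_of = dict(zip(classes.tolist(), counts.tolist()))
    return np.array([1.0 / count_of[c] for c in labels])

labels = np.array([0, 0, 0, 1])      # 3 background voxels, 1 plaque voxel
w = class_balanced_weights(labels)
print(w.tolist())                    # three values of 1/3, then 1.0
```

Note that the weights of each class sum to 1, so the weighted sum of per-voxel losses equals the per-class average, with no manually set parameters.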
The present application is described and illustrated below by means of preferred embodiments.
The preferred embodiment provides a coronary artery calcified plaque segmentation method and a method for calculating a coronary artery calcium score based on it. In the preferred embodiment, a segmentation model based on an artificial neural network is used, with the original CT scout image and a distance field map derived from heart segmentation as inputs, so that the segmentation model can learn not only the gray-value information of calcified plaque in the CT scout image, but also the distance information between the calcified plaque and the heart surface.
In the preferred embodiment, the first classification module is a two-classification network for predicting whether each voxel belongs to a calcified plaque. Wherein, calcified plaque is interested father category, and non-calcified plaque is not interested father category.
In the preferred embodiment, the second classification module is a five-classification network for predicting whether each voxel belongs to a calcified plaque and, if so, determining the main branch category to which the calcified plaque belongs. The main branch categories in the preferred embodiment are four: Left Main branch (LM), Left Anterior Descending branch (LAD), Left Circumflex (LCX), and Right Coronary Artery (RCA).
Fig. 2 is a flow chart of a method of calculating coronary calcium scores according to a preferred embodiment of the present application. As shown in fig. 2, the method includes: generating a distance field map, generating a mask of a region of interest, training a segmentation model, segmenting coronary artery calcified plaque based on the segmentation model, and calculating the coronary artery calcium score.
Fig. 3 is a flow chart of generating a distance field map according to the preferred embodiment of the present application, and as shown in fig. 3, the flow chart includes the following steps:
step S301, an original cardiac CT scout image is acquired.
Step S302, cardiac segmentation is performed on the original cardiac CT scan image.
In the above steps, data of an original cardiac CT scout image is obtained, and a segmentation result of a cardiac region is obtained based on the CT scout image, and in this step, a conventional algorithm such as a region growing algorithm, an adaptive threshold algorithm, and the like may be adopted, or a fully automatic segmentation of the cardiac region may be performed based on a CNN-trained deep learning model.
In step S303, a cardiac distance field map is generated based on the cardiac segmentation result.
In the above steps S301 to S302, most of the voxels belonging to the parent category or sub-category of non-interest can be removed by performing cardiac segmentation on the original cardiac CT scout image. Step S303 may determine a heart surface region based on the heart segmentation result, and then determine the distance of each voxel from the heart surface region, thereby obtaining the cardiac distance field map. This provides one way of extracting a cardiac distance field map.
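A brute-force sketch of such a distance-field computation on a toy 2-D grid follows; a practical implementation would apply a Euclidean distance transform to the full 3-D volume instead of the pairwise loop below:

```python
import numpy as np

def distance_field(mask):
    """Unsigned distance of every voxel to the nearest surface voxel of a
    binary mask; brute force, fine for a toy grid."""
    # surface = mask voxels with at least one background 4-neighbour
    pad = np.pad(mask, 1)
    nb = pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:]
    surface = np.argwhere(mask & ~nb.astype(bool))
    coords = np.indices(mask.shape).reshape(2, -1).T
    d = np.sqrt(((coords[:, None, :] - surface[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(mask.shape)

heart = np.zeros((5, 5), dtype=int)
heart[1:4, 1:4] = 1                 # toy "heart" region
D = distance_field(heart)
print(D[2, 2])  # 1.0 - the centre is one voxel from the surface ring
```

A signed variant (negative inside, positive outside) conveys the same information and is equally usable as a second input channel.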
Fig. 4 is a flowchart of the mask for generating the region of interest according to the preferred embodiment of the present application, and as shown in fig. 4, the flowchart includes the following steps:
step S401, an original cardiac CT scout image is acquired.
In step S402, HU threshold segmentation is performed on the original cardiac CT scan image.
Since the calcium score is calculated in voxels where the image HU values are larger than a preset threshold, in the preferred embodiment the HU threshold can be set to remove voxels in the original cardiac CT scan image that are smaller than the HU threshold. In this embodiment, the HU threshold may be set to 125, 130, 135 or other values as needed.
In step S403, a cardiac ring band is generated from the cardiac distance field map.
After the processing of step S402, many voxels larger than the HU threshold remain, including voxels that are not calcified plaques on the four main branch categories (LM, LAD, LCX, RCA).
In order to remove voxels not belonging to the above four main branches, the distance distribution range of calcified plaque from the heart surface may be counted in advance based on heart segmentation results, and distance thresholds toward the inside and outside of the heart surface may be set according to this range, thereby obtaining an annular band region surrounding the heart surface. Voxels outside the annular band belong to the parent category or sub-category of non-interest. Therefore, removing the voxel connected regions that do not belong to the annular band region can further reduce the number of negative samples in the heart segmentation image.
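The annular-band test can be sketched as follows, with per-voxel distances to the heart surface and an inside/outside flag assumed precomputed; the threshold values are illustrative:

```python
import numpy as np

def annular_band(distance_map, inside_mask, t_in, t_out):
    """Band around the heart surface: keep voxels within t_in of the surface
    on the inside and within t_out on the outside (thresholds taken from
    statistics of plaque-to-surface distances)."""
    inside = inside_mask.astype(bool)
    keep_in  = inside  & (distance_map <= t_in)
    keep_out = ~inside & (distance_map <= t_out)
    return keep_in | keep_out

d = np.array([2.0, 1.0, 0.0, 1.0, 3.0])   # distance to the heart surface
inside = np.array([1, 1, 1, 0, 0])        # 1 = inside the heart
band = annular_band(d, inside, t_in=1.5, t_out=1.0)
print(band.tolist())  # [False, True, True, True, False]
```

Voxels (and later, whole connected regions) falling outside this band are discarded as negative samples.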
Step S404, filter the voxel connected domains in the cardiac annular band according to a calcified plaque volume threshold.
In order to reduce interference to subsequent steps, in step S404, volume distribution ranges of the four main branch calcified plaques may be obtained based on statistics of the volume distribution ranges of the calcified plaques in advance, and a volume threshold is set based on the volume distribution ranges, so as to filter out voxel connected domains larger than the volume threshold.
Step S405, filter the voxel connected domains in the cardiac annular band according to an area threshold.
In order to remove too small isolated voxels that may be noise points, in step S405, for each slice (slice) image of the medical image along the Z-axis direction (long axis direction of the heart), voxel connected domains smaller than the area threshold may be removed.
It should be noted that, in some other preferred embodiments, steps S404 and S405 may be executed sequentially: for example, the cardiac annular band filtered according to the area threshold in step S405 may be the one obtained after the processing of step S404. The execution order of steps S404 and S405 may also be reversed, i.e., step S405 is executed before step S404, and the cardiac annular band filtered according to the volume threshold in step S404 is then the one obtained after the processing of step S405. Steps S404 and S405 may also be executed in parallel, in which case the cardiac annular band processed in both steps is the one generated in step S403.
In step S406, a mask of the region of interest is obtained.
The "filtering" implemented in each of the above-described steps S402 to S405 may be to generate a corresponding mask image according to a filtering condition. For example, the HU thresholding mask image obtained in step S402 is an image having exactly the same resolution as the original cardiac CT scout image, and the voxels of the image are binarized, for example, the value of the voxel having a HU value lower than the HU threshold is set to 0, and the value of the voxel having a HU value not lower than the HU threshold is set to 1, so that the HU thresholding mask image is obtained. Similarly, corresponding mask images are obtained in step S403, step S404, and step S405.
In the above step S406, a final mask image of the region of interest may be obtained based on these mask images. For example, a union set is taken for the regions set to 0 in each mask image, the voxels of the regions outside the union set have a value of 1, and the voxel regions of the original cardiac CT scout image corresponding to the voxel regions with a value of 1 are regions of interest.
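Taking the union of the 0-regions and keeping everything else is equivalent to a voxelwise AND of the individual mask images, e.g.:

```python
import numpy as np

def combine_masks(*masks):
    """A voxel survives only if no filter zeroed it out: removing the union of
    the 0-regions is a voxelwise AND of all binary masks."""
    out = masks[0].astype(bool)
    for m in masks[1:]:
        out &= m.astype(bool)
    return out.astype(np.uint8)

hu_mask   = np.array([1, 1, 0, 1])   # HU threshold filter (step S402)
band_mask = np.array([1, 0, 1, 1])   # annular band filter (step S403)
vol_mask  = np.array([1, 1, 1, 0])   # volume filter (step S404)
print(combine_masks(hu_mask, band_mask, vol_mask).tolist())  # [1, 0, 0, 0]
```

The voxels left at 1 in the combined mask mark the region of interest in the original cardiac CT scout image.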
After the processing from step S401 to step S405, a suspected calcified plaque region, that is, a region of interest, is obtained by segmentation from the original cardiac CT scan image.
The distance, volume, and area thresholds in steps S403 to S405 may be determined from statistics of known calcified plaques; the larger the amount of calcified plaque data, the more generally applicable the resulting thresholds.
Fig. 5 is a flowchart of training of a segmentation model according to a preferred embodiment of the present application, and as shown in fig. 5, the flowchart includes the following steps:
and step S501, marking data.
The medical images used to train the segmentation model also employ cardiac CT scout images, referred to in the preferred embodiment as cardiac CT scout image samples. In order to reduce the labeling workload, in the preferred embodiment, the region of interest of the cardiac CT scout image sample may be obtained with reference to the above-mentioned region of interest acquisition method, and only the voxels in the region of interest obtained in step S405 may be subjected to the class labeling.
The labeling of categories includes five sub-categories, namely calcified plaque on the four main branches (RCA, LM, LAD, LCX) and non-calcified plaque. The four calcified plaques are sub-categories of interest in the preferred embodiment, and the four calcified plaques belong to a parent category of interest. Voxels of non-calcified plaque within the region of interest are labeled as a sub-category of non-interest, while the sub-category of non-interest is also a parent category of non-interest. Voxels that do not belong to the region of interest can then be labeled as background classes.
In the present embodiment, the data labeling result is represented by a labeling image having the same resolution as the cardiac CT scan image sample. The value of each voxel on the labeled image represents the labeled class of the corresponding voxel on the original cardiac CT scout image.
Step S502, image cropping.
The purpose of image cropping is to reduce the amount of image data input into the segmentation model. In step S502, a bounding box may be generated based on the mask of the region of interest of the cardiac CT scout image sample. The bounding box encloses the region of interest in the cardiac CT scout image sample. After the bounding box is obtained, the cardiac CT scout image sample, the distance field map, and the annotation image are cropped based on the bounding box, removing the image regions outside the bounding box, i.e., the regions where no calcified plaque of the four main branches can occur.
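A minimal sketch of the bounding-box cropping (the function name and `margin` parameter are illustrative):

```python
import numpy as np

def crop_to_roi(volume, roi_mask, margin=0):
    """Tight bounding box around the ROI mask, optionally padded by `margin`;
    the returned slices are applied identically to image, distance field map
    and annotation volumes."""
    coords = np.argwhere(roi_mask)
    lo = np.maximum(coords.min(0) - margin, 0)
    hi = np.minimum(coords.max(0) + 1 + margin, volume.shape)
    sl = tuple(slice(a, b) for a, b in zip(lo, hi))
    return volume[sl], sl

img = np.arange(36).reshape(6, 6)
roi = np.zeros((6, 6), dtype=int)
roi[2:4, 1:5] = 1
patch, sl = crop_to_roi(img, roi)
print(patch.shape)  # (2, 4)
```

Reusing the returned slice tuple `sl` on the distance field map and annotation image keeps the three cropped volumes aligned.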
Step S503, train the binary classification network.
In the original cardiac CT scout image, the proportion of voxels classified as non-calcified plaque is much larger than that of voxels classified as calcified plaque. The resulting sample imbalance makes the five-classification network difficult to train successfully.
In order to reduce the training difficulty of the five-classification network, in the training of the binary classification network, voxels belonging to calcified plaques on the four main branches are counted as the parent category of interest, and voxels in the region of interest not belonging to the four main branches are counted as the parent category of non-interest. During training, to increase training speed, voxels outside the region of interest do not participate in training.
In step S503, the binary classification network is trained with the cropped cardiac CT scout image sample and the distance field map as input and the annotation result represented by the annotation image as the gold standard, so as to determine whether each voxel in the cardiac CT scout image sample is a calcified plaque on the four main branches.
The artificial neural network adopted by the segmentation model may be an image segmentation network such as U-Net or V-Net; the classification network of this task branch has 2 output channels, and the loss function may be a focal loss.
Step S504, train the five-classification network.
In the preferred embodiment, the five-classification network shares its input data and the parameters of the feature extraction network with the binary classification network. The five-classification network has 5 output channels, and, as in the training of the binary classification network, voxels outside the region of interest do not participate in training.
In the preferred embodiment, the prediction results of the binary classification network are used to guide the training of the five-classification network. For example, the probability values of the voxels participating in training in the foreground probability map predicted by the binary classification network (the probability map corresponding to the parent category of interest) are normalized with the softmax function over all voxels participating in training. When the prediction loss of the five-classification network is calculated, the normalized probability value from the binary classification network is used as the weight value of each voxel participating in the loss calculation, so that the five-classification network focuses on learning the features of voxels belonging to the parent category of interest and learns the features of voxels belonging to the parent category of non-interest only secondarily.
The total prediction loss when the binary classification network and the five-classification network are trained together is shown in fig. 6. In the above steps S503 and S504, the segmentation model performs end-to-end supervised learning through multi-task learning: the binary classification network and the five-classification network learn simultaneously, and the total prediction loss is obtained by combining the prediction losses of the two classification networks with a weighting factor.
In the above training of the segmentation model, each input image is used as a batch, each voxel on each input image is used as a sample, and the parameters of the segmentation model are updated.
In the above step S503, the focal loss FL_a of each batch can be calculated as follows:
FL_a = Σ_{i=1}^{k} (1/n_i) Σ_{j=1}^{n_i} [ -α (1 - p_ij)^γ log(p_ij) ]
In the above step S504, the focal loss FL_b of each batch can be calculated as follows:
FL_b = Σ_{i=1}^{k} Σ_{j=1}^{n_i} [ -w_ij α (1 - p_ij)^γ log(p_ij) ]
The total loss per batch can be calculated as:
FL = w FL_a + (1 - w) FL_b
where k denotes the number of categories for which loss is calculated; n_i denotes the number of voxels of the i-th class; α denotes a balance factor; γ denotes an adjustment factor; p_ij denotes the prediction probability of the j-th sample of the i-th class; w_ij denotes the weight value of the j-th sample of the i-th class; and w denotes a weighting factor.
With this loss calculation, the prediction loss is first averaged within each category and the per-category losses are then fused, which improves the balance of the network's learning across categories with unbalanced sample (voxel) counts. In the above loss calculation, the weight of the prediction loss of an i-th class voxel in FL_a is determined adaptively from the voxel count as 1/n_i, so the voxel weight values are set adaptively and no manual parameter setting is required.
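The loss formulas above can be sketched numerically as follows; alpha, gamma, w and the example values are illustrative, and the per-voxel term is the focal term -α (1 - p_ij)^γ log(p_ij):

```python
import numpy as np

def focal_terms(p, alpha=0.25, gamma=2.0):
    """-alpha * (1 - p_ij)^gamma * log(p_ij), with p the probability assigned
    to each voxel's true class."""
    return -alpha * (1.0 - p) ** gamma * np.log(p + 1e-12)

def fl_a(p, labels, alpha=0.25, gamma=2.0):
    """FL_a: class-wise mean (the adaptive 1/n_i weight), summed over classes."""
    return sum(focal_terms(p[labels == c], alpha, gamma).mean()
               for c in np.unique(labels))

def fl_b(p, labels, w, alpha=0.25, gamma=2.0):
    """FL_b: same per-class fusion, but each voxel additionally carries the
    weight w_ij from the binary head's normalized foreground probability."""
    return sum((w[labels == c] * focal_terms(p[labels == c], alpha, gamma)).sum()
               for c in np.unique(labels))

p = np.array([0.9, 0.8, 0.6, 0.7])        # probability of each voxel's label
labels = np.array([0, 0, 1, 1])
w = np.array([0.1, 0.1, 0.4, 0.4])        # normalized binary-head weights
total = 0.5 * fl_a(p, labels) + (1 - 0.5) * fl_b(p, labels, w)
```

The weighting factor 0.5 plays the role of w in FL = w FL_a + (1 - w) FL_b.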
Fig. 7 is a flowchart of coronary calcification segmentation based on a segmentation model according to a preferred embodiment of the present application, and as shown in fig. 7, the flowchart includes the following steps:
step S701, an original cardiac CT scout image is acquired.
Step S702, a cardiac distance field map of the original cardiac CT swept-area image and a mask of the region of interest are generated.
Step S703, image cropping.
Step S704, input the cropped original cardiac CT scout image and distance field map into the trained segmentation model to obtain a two-classification result and a five-classification result.
Step S705, correcting the five-classification result according to the two-classification result to obtain a final five-classification result.
The preferred embodiment solves the problem that manual delineation of calcified regions is time-consuming and labor-intensive. The preset classification modules can be trained based on shallow learning or deep learning methods, decision rules need not be designed by hand, and the robustness of calcified region division is improved. Moreover, by correcting the five-classification result using the two-classification result, a more reliable segmentation result can be obtained.
Fig. 8 is a flowchart of a coronary artery calcium score calculation method according to a preferred embodiment of the present application, and as shown in fig. 8, the flowchart includes the following steps:
and step S801, acquiring a final five-classification result.
Step S802, an original cardiac CT scout image is acquired.
Step S803, calculate the Agatston score, volume score, and mass score of the four main branches respectively according to the final five-classification result and the original cardiac CT scout image.
The Agatston score is calculated by multiplying the calcification area by a density weight assigned according to the CT value of the calcified plaque, for example: 130-199 HU scores 1, 200-299 HU scores 2, 300-399 HU scores 3, and 400 HU and above scores 4. The volume score can be obtained by directly multiplying the calcification area by the slice thickness, reflecting the total volume of the calcification. The mass score may be obtained by dividing the Agatston score by the calcification area, reflecting the average degree of calcification.
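A sketch of the score computation, assuming per-lesion areas and peak HU values have already been measured from the segmentation result (function names are illustrative; the density weights follow the standard Agatston bands):

```python
def density_weight(max_hu):
    """Standard Agatston density factor from the plaque's peak CT value."""
    if max_hu >= 400:
        return 4
    if max_hu >= 300:
        return 3
    if max_hu >= 200:
        return 2
    if max_hu >= 130:
        return 1
    return 0

def agatston(lesion_areas_mm2, lesion_max_hu):
    """Sum over lesions of (area on the slice) x (density weight)."""
    return sum(a * density_weight(h)
               for a, h in zip(lesion_areas_mm2, lesion_max_hu))

def volume_score(lesion_areas_mm2, slice_thickness_mm):
    """Total calcification area times slice thickness."""
    return sum(lesion_areas_mm2) * slice_thickness_mm

print(agatston([10.0, 4.0], [450, 150]))  # 10*4 + 4*1 = 44.0
```

Computing these sums separately over the lesions of each main branch category yields the per-branch scores of step S803.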
The coronary calcified plaque segmentation method and the coronary calcification score calculation method provided by the preferred embodiment have the following advantages:
(1) The coronary branch to which each plaque belongs is identified directly, without depending on coronary artery segmentation, so the calcified plaque region and its location are obtained quickly and with simple operation.
(2) Based on big-data statistics, the volume distribution of calcified plaques on the four main branches is analyzed and a volume threshold is set, so that large non-coronary calcifications such as ribs can be filtered out, reducing interference with the subsequent classification model and improving the final identification accuracy.
(3) Through single-model multi-task learning, the model learns a more general feature representation, which improves its generalization capability; end-to-end learning also saves training time.
(4) The loss of the five-classification task is weighted by the probability map of the two-classification task, so the model focuses on learning the calcified plaques on the four main branches, improving learning efficiency.
(5) Based on the voxel distribution obtained after thresholding, a bounding box containing all remaining voxels is generated from their maximum and minimum coordinates, the original image is cropped accordingly, and the cropped image is fed into the network; reducing the input image size speeds up plaque identification.
(6) By averaging the loss of each class separately and then summing the per-class averages in the total loss, the network learns the sample-imbalanced classes in a more balanced way, without manually set parameters.
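Advantage (2) may be illustrated as follows (NumPy assumed; the threshold value and all names are hypothetical, since the patent derives the actual threshold from big-data statistics):

```python
import numpy as np

def filter_large_components(labels_cc, voxel_volume_mm3, max_volume_mm3):
    """Remove connected components whose volume exceeds the threshold,
    e.g. thresholded bone regions such as ribs."""
    out = labels_cc.copy()
    for c in np.unique(labels_cc):
        if c == 0:  # background label
            continue
        volume = (labels_cc == c).sum() * voxel_volume_mm3
        if volume > max_volume_mm3:
            out[labels_cc == c] = 0
    return out

# Component 1 (3 voxels, 30 mm^3) exceeds the 25 mm^3 threshold and is
# removed; component 2 (10 mm^3) is kept as a plaque candidate.
cc = np.array([0, 1, 1, 1, 2])
print(filter_large_components(cc, voxel_volume_mm3=10.0, max_volume_mm3=25.0))  # [0 0 0 0 2]
```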
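A minimal sketch of the weighting in advantage (4), with hypothetical array names (the per-voxel five-classification cross-entropy is assumed to have been computed already):

```python
import numpy as np

def weighted_five_class_loss(binary_prob, per_voxel_ce):
    """Weight the per-voxel five-classification cross-entropy by the
    two-classification probability map, normalized so the weights sum to 1."""
    weights = binary_prob / binary_prob.sum()
    return float((weights * per_voxel_ce).sum())

# The voxel the binary head is confident about (0.9) dominates the loss,
# so the model concentrates on plaques rather than background.
binary_prob = np.array([0.9, 0.1])
per_voxel_ce = np.array([1.0, 3.0])
print(weighted_five_class_loss(binary_prob, per_voxel_ce))
```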
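The bounding-box cropping of advantage (5) can be sketched as follows (NumPy assumed; names and the margin value are hypothetical):

```python
import numpy as np

def crop_to_bbox(image, mask, margin=2):
    """Crop `image` to the bounding box of the nonzero voxels in `mask`,
    padded by `margin` voxels and clipped to the image bounds."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, mask.shape)
    slices = tuple(slice(a, b) for a, b in zip(lo, hi))
    return image[slices]

img = np.arange(100).reshape(10, 10)
m = np.zeros((10, 10), dtype=bool)
m[4:6, 3:5] = True
print(crop_to_bbox(img, m, margin=1).shape)  # (4, 4)
```

The same slicing would be applied to the distance field map so that both network inputs stay aligned.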
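Advantage (6), averaging per class before summing, may be sketched as follows (NumPy assumed; names are hypothetical):

```python
import numpy as np

def class_balanced_loss(per_voxel_loss, labels, num_classes):
    """Average the loss within each class first, then sum the per-class
    averages, so large classes (e.g. background) cannot dominate."""
    total = 0.0
    for c in range(num_classes):
        selected = labels == c
        if selected.any():
            total += per_voxel_loss[selected].mean()
    return float(total)

# Three background voxels vs. one plaque voxel: each class contributes
# its mean loss once, regardless of how many voxels it has.
per_voxel_loss = np.array([0.2, 0.2, 0.2, 1.0])
labels = np.array([0, 0, 0, 1])
print(class_balanced_loss(per_voxel_loss, labels, num_classes=2))
```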
In this embodiment, a coronary calcified plaque segmentation apparatus is further provided. The apparatus is used to implement the above embodiments and preferred embodiments; what has already been described is not repeated. As used below, the terms "module", "unit", "subunit" and the like may refer to software and/or hardware that realizes a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 9 is a block diagram illustrating a structure of a coronary calcified plaque segmentation apparatus according to an embodiment of the present application, and as shown in fig. 9, the apparatus includes:
a first acquisition module 91 for acquiring a first medical image of a cardiac region.
The processing module 92 is coupled to the first obtaining module 91, and configured to classify each voxel of the first medical image using a preset segmentation model, so as to obtain a first classification result that each voxel is classified according to a parent class and a second classification result that each voxel is classified according to a sub-class.
And a correcting module 93, coupled to the processing module 92, for correcting the sub-category of each voxel in the first medical image according to the parent category of each voxel in the first medical image, to obtain a third classification result.
And a second obtaining module 94, coupled to the correcting module 93, for obtaining a coronary calcified plaque segmentation result of the first medical image according to the third classification result.
In some embodiments, the processing module 92 is further configured to classify voxels of the first medical image according to a parent category and a sub-category respectively by using the distance field map as reference information, wherein the distance field map represents a distance between each voxel in the first medical image and a surface of the heart.
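For illustration, the distance field map can be sketched with a brute-force computation (NumPy assumed; a real pipeline would use a fast Euclidean distance transform such as `scipy.ndimage.distance_transform_edt`, and the function name here is hypothetical):

```python
import numpy as np

def distance_field(surface_mask):
    """Euclidean distance from every voxel to the nearest heart-surface
    voxel, computed by brute force (fine only for small illustrative
    volumes)."""
    surface = np.argwhere(surface_mask)
    coords = np.argwhere(np.ones_like(surface_mask))
    d = np.linalg.norm(coords[:, None, :] - surface[None, :, :], axis=-1)
    return d.min(axis=1).reshape(surface_mask.shape)

# A 1x3 toy volume whose leftmost voxel lies on the heart surface.
m = np.array([[True, False, False]])
print(distance_field(m))  # [[0. 1. 2.]]
```

The resulting map is fed to the segmentation model alongside the CT image, giving each voxel explicit spatial context relative to the heart.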
In some of these embodiments, the correction module 93 includes: a first determining unit, configured to determine, as a first voxel, a voxel in the first medical image whose first classification result is a parent category of non-interest and whose second classification result is a sub-category of interest; and a classification unit, configured to set the second classification result of the first voxel to a sub-category of non-interest.
In some of these embodiments, the apparatus further comprises: a third acquisition module, coupled to the first acquisition module 91, is configured to acquire a region of interest of the first medical image. And a determining module, coupled to the third acquiring module, for determining a voxel in the first medical image that is outside the region of interest as a second voxel. And the classification module is coupled with the determination module and used for setting the first classification result of the second voxel as a parent class which is not interested and setting the second classification result of the second voxel as a sub-class which is not interested.
In some embodiments, the preset segmentation model comprises a feature extraction module, a first classification module and a second classification module; the above apparatus further comprises a training module, the training module comprising:
and the training sample acquisition unit is used for acquiring a training sample, wherein the training sample comprises a second medical image and a classification label of each voxel in the second medical image, and the classification label of each voxel in the second medical image comprises a first classification label of each voxel classified according to a parent class and a second classification label of each voxel classified according to a sub-class.
And the model training unit is used for taking the second medical image as input data of the feature extraction module, taking the first classification label as a gold standard for classifying each voxel by the first classification module, taking the second classification label as a gold standard for classifying each voxel by the second classification module, and training the feature extraction module, the first classification module and the second classification module.
In some of these embodiments, the model training unit comprises: and the region-of-interest acquisition subunit is used for acquiring a region of interest of the second medical image and determining a voxel in the region of interest in the second medical image as a third voxel. The first prediction loss generation subunit is used for acquiring a first prediction result of each third voxel classified by the first classification module; and determining a first prediction loss of the third voxel classification by the first classification module according to the first prediction result and the first classification label. The second prediction loss generation subunit is used for acquiring a second prediction result of each third voxel classified by the second classification module; and determining a second prediction loss of the third voxel classification by the second classification module according to the second prediction result and the second classification label. And the third prediction loss generation subunit is used for fusing the first prediction loss and the second prediction loss to obtain a third prediction loss. And the updating subunit is used for updating the feature extraction module, the first classification module and the second classification module according to the third prediction loss.
In some of these embodiments, the first prediction result includes a first probability value that each third voxel belongs to the parent category of interest, and the second prediction result includes a second probability value that each third voxel belongs to each sub-category. The second prediction loss generation subunit is further configured to: normalize the first probability values to obtain a normalized value corresponding to each third voxel; determine a fourth prediction loss of each third voxel classified by the second classification module according to the second probability value and the second classification label; and determine a weighted sum of the fourth prediction losses by taking the normalized value corresponding to each third voxel as that voxel's weight, and take the weighted sum as the second prediction loss.
In some of these embodiments, the first prediction loss generation subunit is further configured to: determine the parent-category prediction loss of each of the third voxels according to the first prediction result and the first classification label; determine a first weight value of each of the third voxels according to the number of third voxels of each category in the first classification label; determine a weighted sum of the parent-category prediction losses for third voxels of the same category; and fuse the weighted sums of the parent-category prediction losses corresponding to the categories to obtain the first prediction loss.
In some of these embodiments, the second prediction loss generation subunit is further configured to: determine the sub-category prediction loss of each of the third voxels according to the second prediction result and the second classification label; determine a second weight value of each of the third voxels according to the number of third voxels of each category in the second classification label; determine a weighted sum of the sub-category prediction losses for third voxels of the same category; and fuse the weighted sums of the sub-category prediction losses corresponding to the categories to obtain the second prediction loss.
In this embodiment, an apparatus for calculating a coronary calcification score is further provided. The apparatus is used to implement the above embodiments and preferred embodiments; what has already been described is not repeated. As used below, the terms "module", "unit", "subunit" and the like may refer to software and/or hardware that realizes a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 10 is a block diagram illustrating a structure of an apparatus for calculating coronary artery calcium score according to an embodiment of the present application, and as shown in fig. 10, the apparatus includes:
a medical image acquisition module 101 for acquiring a first medical image of a cardiac region.
The medical image processing module 102 is coupled to the medical image obtaining module 101, and configured to classify each voxel of the first medical image using a preset segmentation model, so as to obtain a first classification result that each voxel is classified according to a parent class and a second classification result that each voxel is classified according to a sub-class.
And a sub-category correcting module 103, coupled to the medical image processing module 102, for correcting the sub-category of each voxel in the first medical image according to the parent category of each voxel in the first medical image, to obtain a third classification result.
And the segmentation result acquisition module 104 is coupled with the subcategory correction module 103 and configured to acquire a coronary calcified plaque segmentation result of the first medical image according to the third classification result.
And a calcium score determining module 105, coupled to the segmentation result obtaining module 104, configured to determine a coronary calcium score corresponding to the first medical image according to the coronary calcified plaque segmentation result.
In addition, in combination with the methods provided in the above embodiments, a storage medium may also be provided in this embodiment. The storage medium stores a computer program; when executed by a processor, the computer program implements any of the coronary calcified plaque segmentation methods and/or coronary calcification score calculation methods of the above embodiments.
It should be understood that the specific embodiments described herein are merely illustrative of this application and are not intended to be limiting. All other embodiments, which can be derived by a person skilled in the art from the examples provided herein without any inventive step, shall fall within the scope of protection of the present application.
It is obvious that the drawings are only examples or embodiments of the present application, and it is obvious to those skilled in the art that the present application can be applied to other similar cases according to the drawings without creative efforts. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
The term "embodiment" is used herein to mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly or implicitly understood by one of ordinary skill in the art that the embodiments described in this application may be combined with other embodiments without conflict.
The above-mentioned embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A coronary calcified plaque segmentation method, comprising:
acquiring a first medical image of a cardiac region;
classifying each voxel of the first medical image by using a preset segmentation model to obtain a first classification result of each voxel classified according to a parent class and a second classification result of each voxel classified according to a sub-class;
correcting the sub-category of each voxel in the first medical image according to the parent category of each voxel in the first medical image to obtain a third classification result;
and acquiring a coronary calcified plaque segmentation result of the first medical image according to the third classification result.
2. The method of claim 1, wherein classifying each voxel of the first medical image according to a parent class and a child class, respectively, using a preset segmentation model, further comprises:
taking the distance field map as reference information to assist the preset segmentation model in classifying each voxel of the first medical image according to a parent class and a sub-class, wherein the distance field map represents the distance between each voxel in the first medical image and the surface of the heart.
3. The method of claim 1, wherein correcting the sub-category of each voxel in the first medical image according to the parent category of each voxel in the first medical image to obtain a third classification result comprises:
determining, as first voxels, voxels in the first medical image whose first classification result is a parent class of non-interest and whose second classification result is a sub-class of interest;
and setting the second classification result of the first voxels to a sub-class of non-interest.
4. The method of claim 1, further comprising, before correcting the sub-category of each voxel in the first medical image according to the parent category of each voxel in the first medical image to obtain a third classification result:
acquiring a region of interest of the first medical image;
determining voxels in the first medical image that are outside the region of interest as second voxels;
setting the first classification result of the second voxel as a parent class of non-interest, and setting the second classification result of the second voxel as a child class of non-interest.
5. The method of claim 1, wherein the preset segmentation model comprises a feature extraction module, a first classification module and a second classification module; the preset segmentation model is trained by the following steps:
acquiring a training sample, wherein the training sample comprises a second medical image and a classification label of each voxel in the second medical image, and the classification label of each voxel in the second medical image comprises a first classification label of each voxel classified according to a parent class and a second classification label of each voxel classified according to a sub-class;
and taking the second medical image as input data of the feature extraction module, taking the first classification label as a gold standard for classifying each voxel by the first classification module, taking the second classification label as a gold standard for classifying each voxel by the second classification module, and training the feature extraction module, the first classification module and the second classification module.
6. The method of claim 5, wherein training the feature extraction module, the first classification module, and the second classification module comprises:
acquiring a region of interest of the second medical image, and determining a voxel in the region of interest in the second medical image as a third voxel;
obtaining a first prediction result of each third voxel classification by the first classification module; determining a first prediction loss of the third voxel classification by the first classification module according to the first prediction result and the first classification label;
obtaining a second prediction result of each third voxel classification by the second classification module; determining a second prediction loss of the third voxel classification by the second classification module according to the second prediction result and the second classification label;
fusing the first prediction loss and the second prediction loss to obtain a third prediction loss;
updating the feature extraction module, the first classification module, and the second classification module based on the third predicted loss.
7. The method of claim 6, wherein the first predictor comprises a first probability value that each of the third voxels belongs to a parent category of interest, and wherein the second predictor comprises a second probability value that each of the third voxels belongs to a sub-category; determining, from the second prediction result and the second classification label, a second prediction loss of the third voxel classification by the second classification module comprises:
normalizing the first probability value to obtain a normalized numerical value corresponding to each third voxel;
determining a fourth predicted loss of each third voxel classification by the second classification module according to the second probability value and the second classification label;
and determining a weighted sum of the fourth prediction loss by using the normalized numerical value corresponding to each third voxel as a weight value of each third voxel, and using the weighted sum as the second prediction loss.
8. The method of claim 6,
determining, from the first prediction result and the first classification label, a first prediction loss of the third voxel classification by the first classification module comprises: determining a parent class prediction loss of each voxel in the third voxel according to the first prediction result and the first classification label; determining a first weight value of each voxel in the third voxels according to the number of the third voxels of each category in the first classification label; determining a weighted sum of the parent class prediction losses for the third voxel of the same class; fusing the weighted sum of the prediction losses of the parent classes corresponding to the classes to obtain the first prediction loss;
determining, from the second prediction result and the second classification label, a second prediction loss of the third voxel classification by the second classification module comprises: determining a sub-category prediction loss for each voxel in the third voxel according to the second prediction result and the second classification label; determining a second weight value of each voxel in the third voxels according to the number of the third voxels in each category in the second classification label; determining a weighted sum of the sub-category prediction losses for the third voxel of the same category; and fusing the weighted sum of the prediction losses of the sub-categories corresponding to the categories to obtain the second prediction loss.
9. A method of calculating a coronary calcification score, the method comprising:
acquiring a first medical image of a cardiac region;
classifying each voxel of the first medical image by using a preset segmentation model to obtain a first classification result of each voxel classified according to a parent class and a second classification result of each voxel classified according to a sub-class;
correcting the sub-category of each voxel in the first medical image according to the parent category of each voxel in the first medical image to obtain a third classification result;
obtaining a coronary calcified plaque segmentation result of the first medical image according to the third classification result;
and determining the coronary artery calcification score corresponding to the first medical image according to the coronary artery calcification plaque segmentation result.
10. A computer readable storage medium, having stored thereon a computer program, wherein the computer program, when being executed by a processor, is adapted to carry out the steps of the method for coronary calcified plaque segmentation according to any one of the claims 1 to 8 and/or the steps of the method for calculating a coronary calcification score according to claim 9.
CN202110350214.1A 2021-03-31 2021-03-31 Coronary artery calcified plaque segmentation method and method for calculating coronary artery calcified score Pending CN113077441A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110350214.1A CN113077441A (en) 2021-03-31 2021-03-31 Coronary artery calcified plaque segmentation method and method for calculating coronary artery calcified score

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110350214.1A CN113077441A (en) 2021-03-31 2021-03-31 Coronary artery calcified plaque segmentation method and method for calculating coronary artery calcified score

Publications (1)

Publication Number Publication Date
CN113077441A true CN113077441A (en) 2021-07-06

Family

ID=76614179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110350214.1A Pending CN113077441A (en) 2021-03-31 2021-03-31 Coronary artery calcified plaque segmentation method and method for calculating coronary artery calcified score

Country Status (1)

Country Link
CN (1) CN113077441A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114529724A (en) * 2022-02-15 2022-05-24 推想医疗科技股份有限公司 Image target identification method and device, electronic equipment and storage medium
CN114943699A (en) * 2022-05-16 2022-08-26 北京医准智能科技有限公司 Segmentation model training method, coronary calcified plaque segmentation method and related device
CN114972376A (en) * 2022-05-16 2022-08-30 北京医准智能科技有限公司 Coronary calcified plaque segmentation method, segmentation model training method and related device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160117587A1 (en) * 2014-10-27 2016-04-28 Zhicheng Yan Hierarchical deep convolutional neural network for image classification
CN108776807A (en) * 2018-05-18 2018-11-09 复旦大学 It is a kind of based on can the double branch neural networks of skip floor image thickness grain-size classification method
CN108932714A (en) * 2018-07-23 2018-12-04 苏州润心医疗器械有限公司 The patch classification method of coronary artery CT image
CN109288536A (en) * 2018-09-30 2019-02-01 数坤(北京)网络科技有限公司 Obtain the method, apparatus and system of Coronary Calcification territorial classification
CN109389592A (en) * 2018-09-30 2019-02-26 数坤(北京)网络科技有限公司 Calculate the method, apparatus and system of coronary artery damage
CN111312374A (en) * 2020-01-21 2020-06-19 上海联影智能医疗科技有限公司 Medical image processing method, device, storage medium and computer equipment
CN111445449A (en) * 2020-03-19 2020-07-24 上海联影智能医疗科技有限公司 Region-of-interest classification method and device, computer equipment and storage medium
CN111598174A (en) * 2020-05-19 2020-08-28 中国科学院空天信息创新研究院 Training method of image ground feature element classification model, image analysis method and system
CN111598160A (en) * 2020-05-14 2020-08-28 腾讯科技(深圳)有限公司 Training method and device of image classification model, computer equipment and storage medium
CN111951285A (en) * 2020-08-12 2020-11-17 湖南神帆科技有限公司 Optical remote sensing image woodland classification method based on cascade deep convolutional neural network


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BIN KONG et al.: "Learning tree-structured representation for 3D coronary artery segmentation", Computerized Medical Imaging and Graphics, vol. 80, 31 March 2020 (2020-03-31), pages 1-9 *
DIJIA WU et al.: "Multi-Task Convolutional Neural Network for Joint Bone Age Assessment and Ossification Center Detection from Hand Radiograph", Lecture Notes in Artificial Intelligence, 28 July 2020 (2020-07-28), pages 681-689 *
WU Qiuwen; ZHOU Shuyi; GENG Chen; LI Yuxin; CAO Xin; GENG Daoying; YANG Liqin: "Preliminary study on deep-learning-based segmentation of carotid plaque in computed tomography angiography", Shanghai Medicine, no. 05, 25 May 2020 (2020-05-25), pages 32-35 *
ZHANG Weiwei: "Research on deep-neural-network-based image segmentation algorithms and their application to the ventricles and coronary calcification", China Master's Theses Full-text Database (Information Science and Technology), 15 July 2019 (2019-07-15), pages 138-1203 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114529724A (en) * 2022-02-15 2022-05-24 推想医疗科技股份有限公司 Image target identification method and device, electronic equipment and storage medium
CN114943699A (en) * 2022-05-16 2022-08-26 北京医准智能科技有限公司 Segmentation model training method, coronary calcified plaque segmentation method and related device
CN114972376A (en) * 2022-05-16 2022-08-30 北京医准智能科技有限公司 Coronary calcified plaque segmentation method, segmentation model training method and related device
CN114943699B (en) * 2022-05-16 2023-01-17 北京医准智能科技有限公司 Segmentation model training method, coronary calcified plaque segmentation method and related device
CN114972376B (en) * 2022-05-16 2023-08-25 北京医准智能科技有限公司 Coronary calcified plaque segmentation method, segmentation model training method and related device

Similar Documents

Publication Publication Date Title
CN111899245B (en) Image segmentation method, image segmentation device, model training method, model training device, electronic equipment and storage medium
CN113077441A (en) Coronary artery calcified plaque segmentation method and method for calculating coronary artery calcified score
CN107230204B (en) A kind of method and device for extracting the lobe of the lung from chest CT image
US7756316B2 (en) Method and system for automatic lung segmentation
CN111696089B (en) Arteriovenous determination method, device, equipment and storage medium
US11972571B2 (en) Method for image segmentation, method for training image segmentation model
CN108765392B (en) Digestive tract endoscope lesion detection and identification method based on sliding window
US20080071711A1 (en) Method and System for Object Detection Using Probabilistic Boosting Cascade Tree
CN111899244B (en) Image segmentation method, network model training method, device and electronic equipment
CN112820399A (en) Method and device for automatically diagnosing benign and malignant thyroid nodules
Chen et al. A lung dense deep convolution neural network for robust lung parenchyma segmentation
CN111932495B (en) Medical image detection method, device and storage medium
Dong et al. Simultaneous segmentation of multiple organs using random walks
Wu et al. Cascaded fully convolutional DenseNet for automatic kidney segmentation in ultrasound images
WO2022105735A1 (en) Coronary artery segmentation method and apparatus, electronic device, and computer-readable storage medium
CN111814832A (en) Target detection method, device and storage medium
CN112288718B (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN114822823A (en) Tumor fine classification system based on cloud computing and artificial intelligence fusion multi-dimensional medical data
CN112529918B (en) Method, device and equipment for segmenting brain room area in brain CT image
CN113744215B (en) Extraction method and device for central line of tree-shaped lumen structure in three-dimensional tomographic image
CN113222989A (en) Image grading method and device, storage medium and electronic equipment
Hiraman et al. Efficient region of interest detection for liver segmentation using 3D CT scans
Wu et al. A multi-stage DCNN method for liver tumor segmentation
CN111583288B (en) Video multi-target association and segmentation method and system
CN109859214B (en) Automatic retina layer segmentation method and device with CSC lesion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination