CN113012173A - Heart segmentation model and pathology classification model training, heart segmentation and pathology classification method and device based on cardiac MRI - Google Patents

Heart segmentation model and pathology classification model training, heart segmentation and pathology classification method and device based on cardiac MRI

Info

Publication number
CN113012173A
Authority
CN
China
Prior art keywords
layer
cardiac
heart
magnetic resonance
resonance imaging
Prior art date
Legal status
Pending
Application number
CN202110391121.3A
Other languages
Chinese (zh)
Inventor
王怡宁
李书芳
马啸天
Current Assignee
Peking Union Medical College Hospital Chinese Academy of Medical Sciences
Original Assignee
Peking Union Medical College Hospital Chinese Academy of Medical Sciences
Priority date
Filing date
Publication date
Application filed by Peking Union Medical College Hospital Chinese Academy of Medical Sciences
Priority to CN202110391121.3A
Publication of CN113012173A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/13 Edge detection
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/24323 Tree-organised classifiers
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30048 Heart; Cardiac

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention provides methods and devices for heart segmentation model training, pathology classification model training, heart segmentation and pathology classification based on cardiac MRI. In the heart segmentation model training method, a standard deviation filter suppresses the residual background, whose pixel gray levels change little over the cardiac cycle, and highlights the left ventricle, right ventricle and myocardium; the center position of the left ventricular myocardial wall is then obtained through Canny edge detection and circular Hough transform, a rectangular mask is drawn, and the two-dimensional image is cropped based on the rectangular mask and used as input for training a preset neural network model. This greatly suppresses background interference and promotes rapid convergence of the neural network training. In the pathology classification model training method, the two-dimensional images obtained by short-axis slicing of each frame of cardiac magnetic resonance imaging in a cardiac cycle are segmented by the heart segmentation model, classification feature values are calculated, and a random forest is constructed from the classification feature values and pathology classes of a plurality of samples to obtain a heart pathology classification model, thereby achieving automatic pathology classification.

Description

Heart segmentation model and pathology classification model training, heart segmentation and pathology classification method and device based on cardiac MRI
Technical Field
The invention relates to the technical field of image processing, in particular to methods and devices for training a heart segmentation model and a pathology classification model based on cardiac Magnetic Resonance Imaging (MRI).
Background
The heart is one of the most important organs of the human body; it is the main organ of the circulatory system and provides the power for blood flow. Cardiovascular disease (CVD), also known as circulatory disease, is a systemic vasculopathy, or a manifestation of a systemic vasculopathy in the heart. It is characterized by high morbidity, high disability rate and high mortality, and has become the most common cause of death in humans. CVD is usually diagnosed late, after symptoms have appeared; late interventions are costly and their therapeutic efficacy is greatly reduced, so quantitative assessment and diagnosis of cardiac function at an early stage is of paramount importance. Analysis of cardiac function plays an important role in clinical cardiology for patient management, disease diagnosis, risk assessment and treatment decisions. Both global and local cardiac function can be quantitatively analyzed by deriving clinical parameters such as ventricular volume, stroke volume, ejection fraction and myocardial mass. The calculation of these parameters is premised on the accurate delineation of the endocardial and epicardial contours of the left ventricle (LV) and right ventricle (RV).
Currently, imaging techniques for clinically evaluating cardiac function mainly include echocardiography, cardiac nuclide imaging, multislice computed tomography (MSCT), cardiac magnetic resonance imaging (CMRI), and the like. Generally, the work of delineating vital organs and structures from medical images is done manually by radiologists; however, manual analysis by radiologists requires a significant amount of effort and time, and the results may vary from person to person. For example, different radiologists are influenced by subjective experience, environment, working conditions and the like, and the delineation of the same area cannot be reproduced with full consistency. The classification of pathologies performed on this basis therefore also deviates.
Cardiac MRI is typically three-dimensional data requiring slice-by-slice analysis; the amount of image data generated during a cardiac cycle is large, and there is a large amount of background in each image. Existing segmentation algorithms do not converge easily when trained on such a large amount of data, and because the background occupies a high proportion of each cardiac MRI image, the trained model weights are biased toward recognizing the background, so the recognition effect is not ideal.
Disclosure of Invention
The embodiments of the invention provide heart segmentation model and pathology classification model training, heart segmentation and pathology classification methods and devices based on cardiac MRI (magnetic resonance imaging), aiming to solve the problems of slow convergence and poor recognition performance of segmentation models during training.
The technical scheme of the invention is as follows:
in one aspect, the present invention provides a magnetic resonance imaging heart segmentation model training method, including:
obtaining a training sample set, wherein the training sample set at least comprises a plurality of two-dimensional images obtained by performing short-axis slicing on cardiac nuclear magnetic resonance imaging three-dimensional data of a plurality of samples in a cardiac cycle frame by frame, and segmentation labels related to a left ventricle, a right ventricle, a myocardium and a residual background on each two-dimensional image;
standardizing the two-dimensional images to obtain standard maps, filtering each frame of standard maps corresponding to slices at the same position in a cardiac cycle with a standard deviation filter, and setting to zero the gray levels of the pixels in each standard map corresponding to the slices at that position whose gray-level standard deviation over the cardiac cycle is lower than a set value, so as to obtain filter maps corresponding to the two-dimensional images;
canny edge detection is carried out on each filter image, and the central position of the myocardial wall of the left ventricle and the maximum likelihood circular ring of the myocardial wall of the left ventricle are obtained through circular Hough transformation;
drawing a rectangular mask for the maximum likelihood circular ring corresponding to each two-dimensional image and cutting the corresponding two-dimensional image into a set size by taking the rectangular mask as a center;
and taking each two-dimensional image which is cut in the training sample set as input, taking the segmentation labels of the left ventricle, the right ventricle, the myocardium and the residual background which correspond to each two-dimensional image as real values, and training a preset neural network model to obtain a heart segmentation model.
In some embodiments, before the normalizing the two-dimensional image to obtain the standard map, the method further includes:
performing size normalization processing on the two-dimensional image, and filling the two-dimensional image to 256 × 256 by using the minimum pixel value of the two-dimensional image when the size of the two-dimensional image is less than 256 × 256; in the case where the two-dimensional image size is larger than 256 × 256, the two-dimensional image is clipped to 256 × 256.
In some embodiments, the set size is 128 x 128.
In some embodiments, after drawing a rectangular mask on the maximum likelihood ring corresponding to each two-dimensional image and cropping the corresponding two-dimensional image to a set size centered on the rectangular mask, a data enhancement operation is performed on each cropped two-dimensional image, the data enhancement operation comprising random rotation by -5 to 5 degrees, random translation by -5 to 5 mm, random scaling by -0.2 to 0.2, addition of Gaussian noise with zero mean and a standard deviation of 0.01, and/or elastic deformation using B-spline interpolation.
In some embodiments, the Canny edge detection smooths the image with Gaussian filtering using a two-dimensional Gaussian distribution function with σ = 3.
In some embodiments, the preset neural network model comprises: an input layer, an Inception layer, a first channel cascade layer, a first dense connection block, a second channel cascade layer, a first down-sampling transition layer, a second dense connection block, a third channel cascade layer, a second down-sampling transition layer, a third dense connection block, a fourth channel cascade layer, a third down-sampling transition layer, a fourth dense connection block, a first element-level addition layer, a first up-sampling transition layer, a second element-level addition layer, a fifth dense connection block, a third element-level addition layer, a second up-sampling transition layer, a fourth element-level addition layer, a sixth dense connection block, a fifth element-level addition layer, a third up-sampling transition layer, a sixth element-level addition layer, a seventh dense connection block, a seventh element-level addition layer, a 1 × 1 convolution and softmax layer, an Argmax layer and an output layer which are connected in sequence;
the first channel cascade layer cascades the 3 feature maps output by the Inception layer;
the second channel cascade layer cascades the feature maps output by the first channel cascade layer and the first dense connection block;
the third channel cascade layer cascades the characteristic graphs output by the first down-sampling transition layer and the second dense connection block;
the fourth channel cascade layer cascades the feature maps output by the second downsampling transition layer and the third dense connection block;
the first element-level addition layer performs element-level addition on the feature map output by the third down-sampling transition layer through the first skip connection layer and the feature map output by the fourth dense connection block;
the second element-level addition layer performs element-level addition on the feature map output by the fourth channel cascade layer through the second jump connection layer and the feature map output by the first up-sampling transition layer;
the third element-level addition layer performs element-level addition on the feature map output by the second element-level addition layer through a third jump connection layer and the feature map output by the fifth dense connection block;
the fourth element-level addition layer performs element-level addition on the feature map output by the third channel cascade layer through a fourth jump connection layer and the feature map output by the second up-sampling transition layer;
the fifth element-level addition layer performs element-level addition on the feature map output by the fourth element-level addition layer through a fifth skip connection layer and the feature map output by the sixth dense connection block;
the sixth element-level addition layer performs element-level addition on the feature map output by the second channel cascade layer through a sixth jump connection layer and the feature map output by the third up-sampling transition layer;
the seventh element-level addition layer performs element-level addition on the feature map output by the sixth element-level addition layer through a seventh skip connection layer and the feature map output by the seventh dense connection block;
wherein the Inception layer performs convolution operations at three scales, namely 3 × 3, 5 × 5 and 7 × 7; the first dense connection block, the second dense connection block, the third dense connection block, the fifth dense connection block, the sixth dense connection block and the seventh dense connection block adopt a Dense Block network structure, and each densely connected layer is composed of batch normalization (BN), a ReLU activation function, a 3 × 3 convolution and Dropout;
the first down-sampling transition layer, the second down-sampling transition layer and the third down-sampling transition layer are composed of batch normalization (BN), a ReLU activation function, a 3 × 3 convolution, Dropout and 2 × 2 max pooling;
the first up-sampling transition layer, the second up-sampling transition layer and the third up-sampling transition layer use a 3 × 3 transposed convolution with a stride of 2;
the fourth layer of dense connecting blocks is a bottleneck layer;
the first, second, third, fourth, fifth, sixth and seventh hopping connection layers consist of BN, ReLU activation function, 1 × 1 convolution and Dropout;
the preset neural network model adopts a cross-entropy loss function to calculate the weighted cross-entropy loss $L_{WCE}$ over the voxels, calculated from the following formula:

$L_{WCE} = -\sum_{x_i \in X} w_{map}(x_i)\,\log p(t_i \mid x_i; W)$

wherein $W = (w_1, w_2, \ldots, w_l)$ is the set of learnable weights, $w_l$ is the weight matrix corresponding to layer $l$ of the deep learning network, $X$ is a training sample, $t_i$ is the target class label corresponding to voxel $x_i \in X$, $w_{map}(x_i)$ is the weight estimated at each voxel $x_i$, and $p(t_i \mid x_i; W)$ represents the prediction of the final output layer of the network for voxel $x_i$.
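For illustration, the following is a minimal NumPy sketch of the weighted cross-entropy loss defined above; the array shapes and the epsilon guard are assumptions, and the text does not specify how the per-voxel weight map is estimated.

```python
import numpy as np

def weighted_cross_entropy(prob, target, weight_map, eps=1e-8):
    """Weighted cross-entropy summed over voxels.

    prob:       (N, C) softmax probabilities from the final output layer
    target:     (N,)   integer class labels t_i for each voxel x_i
    weight_map: (N,)   per-voxel weights w_map(x_i)
    """
    p_true = prob[np.arange(len(target)), target]       # p(t_i | x_i; W)
    return -np.sum(weight_map * np.log(p_true + eps))   # L_WCE
```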
In another aspect, the present invention further provides a magnetic resonance imaging cardiac segmentation method, including:
acquiring a two-dimensional image of cardiac nuclear magnetic resonance imaging to be segmented;
and inputting the two-dimensional image into the heart segmentation model obtained by the magnetic resonance imaging heart segmentation model training method, and calculating to obtain a target heart segmentation image.
In another aspect, the present invention further provides a method for training a cardiac pathology classification model based on magnetic resonance imaging, including:
acquiring multi-frame cardiac magnetic resonance imaging three-dimensional data of a plurality of samples in a cardiac cycle and their corresponding pathology classifications, wherein the pathology classifications at least comprise: normal, old myocardial infarction, right ventricular abnormality, dilated cardiomyopathy and hypertrophic cardiomyopathy;
slicing the multi-frame cardiac magnetic resonance imaging three-dimensional data of each sample to obtain a plurality of two-dimensional images, inputting the two-dimensional images into a cardiac segmentation model obtained by the magnetic resonance imaging cardiac segmentation model training method, and obtaining a cardiac segmentation map of each slice corresponding to the cardiac magnetic resonance imaging three-dimensional data of each sample;
calculating a plurality of classification characteristic values corresponding to each sample based on all cardiac segmentation maps corresponding to each position slice of each sample in one cardiac cycle, wherein the classification characteristic values at least comprise: left ventricular end-diastolic volume, right ventricular end-diastolic volume, myocardial end-diastolic volume, left ventricular end-systolic volume, right ventricular end-systolic volume, myocardial end-systolic volume, left ventricular ejection fraction, right ventricular ejection fraction, myocardial ejection fraction, end-diastolic left ventricular posterior wall thickness, and end-systolic left ventricular posterior wall thickness;
obtaining the classification characteristic value and the pathological classification corresponding to each sample, and establishing a sample training set;
and constructing a random forest based on the sample training set to obtain a heart pathology classification model.
In some embodiments, the value of the left ventricular end-diastolic volume is the sum of left ventricle pixel points in all cardiac segmentation maps in the frame corresponding to the end diastole of the cardiac cycle of the sample; the value of the right ventricular end-diastolic volume is the sum of right ventricle pixel points in all cardiac segmentation maps in the frame corresponding to the end diastole of the cardiac cycle of the sample; the value of the myocardial end-diastolic volume is the sum of myocardium pixel points in all cardiac segmentation maps in the frame corresponding to the end diastole of the cardiac cycle of the sample; the value of the left ventricular end-systolic volume is the sum of left ventricle pixel points in all cardiac segmentation maps in the frame corresponding to the end systole of the cardiac cycle of the sample; the value of the right ventricular end-systolic volume is the sum of right ventricle pixel points in all cardiac segmentation maps in the frame corresponding to the end systole of the cardiac cycle of the sample; the value of the myocardial end-systolic volume is the sum of myocardium pixel points in all cardiac segmentation maps in the frame corresponding to the end systole of the cardiac cycle of the sample; the left ventricular ejection fraction is the ratio of the difference between the left ventricular end-diastolic volume and the left ventricular end-systolic volume to the left ventricular end-diastolic volume; the right ventricular ejection fraction is the ratio of the difference between the right ventricular end-diastolic volume and the right ventricular end-systolic volume to the right ventricular end-diastolic volume; the myocardial ejection fraction is the ratio of the difference between the myocardial end-diastolic volume and the myocardial end-systolic volume to the myocardial end-diastolic volume.
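As an illustration of this feature construction and of the random forest step, the following is a minimal scikit-learn sketch; the segmentation label values, the use of raw pixel counts as volumes, the omission of the posterior-wall-thickness features, and the forest hyperparameters are all assumptions not fixed by the text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

LV, RV, MYO = 1, 2, 3   # assumed label values in the cardiac segmentation maps

def classification_features(seg_ed, seg_es):
    """seg_ed / seg_es: lists of 2D label maps for all slices of the
    end-diastolic / end-systolic frame of one sample."""
    def vol(maps, label):
        return sum(int(np.sum(m == label)) for m in maps)    # pixel-count "volume"
    def ef(edv, esv):
        return (edv - esv) / edv if edv else 0.0              # ejection fraction
    lv_ed, rv_ed, myo_ed = (vol(seg_ed, c) for c in (LV, RV, MYO))
    lv_es, rv_es, myo_es = (vol(seg_es, c) for c in (LV, RV, MYO))
    return [lv_ed, rv_ed, myo_ed, lv_es, rv_es, myo_es,
            ef(lv_ed, lv_es), ef(rv_ed, rv_es), ef(myo_ed, myo_es)]

# X: one feature vector per sample, y: pathology class labels
clf = RandomForestClassifier(n_estimators=100, random_state=0)
# clf.fit(X_train, y_train); clf.predict(X_test)
```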
In another aspect, the present invention further provides a method for classifying cardiac pathologies based on magnetic resonance imaging, including:
acquiring multi-frame cardiac nuclear magnetic resonance imaging three-dimensional data of an object to be classified in a cardiac cycle and performing short-axis slicing on each frame of cardiac nuclear magnetic resonance imaging three-dimensional data to obtain a plurality of two-dimensional images;
inputting each two-dimensional image into a heart segmentation model obtained by the magnetic resonance imaging heart segmentation model training method for operation to obtain a heart segmentation picture corresponding to each two-dimensional image;
calculating a plurality of classification characteristic values corresponding to the object to be classified according to each heart segmentation picture, wherein the classification characteristic values at least comprise: left ventricular end-diastolic volume, right ventricular end-diastolic volume, myocardial end-diastolic volume, left ventricular end-systolic volume, right ventricular end-systolic volume, myocardial end-systolic volume, left ventricular ejection fraction, right ventricular ejection fraction, myocardial ejection fraction, end-diastolic left ventricular posterior wall thickness, and end-systolic left ventricular posterior wall thickness;
and inputting the classification characteristic value into the heart pathology classification model of the heart pathology classification model training method based on magnetic resonance imaging to obtain a corresponding pathology classification result of the heart nuclear magnetic resonance imaging three-dimensional data to be classified.
In another aspect, the present invention also provides a cardiac segmentation and pathology classification apparatus for magnetic resonance imaging, comprising:
the input module is used for acquiring multi-frame cardiac nuclear magnetic resonance imaging three-dimensional data of an object to be classified in one cardiac cycle;
a preprocessing module, used for performing short-axis slicing on each frame of cardiac magnetic resonance imaging three-dimensional data in a cardiac cycle to obtain a plurality of two-dimensional images;
the segmentation module is used for acquiring each standard image and inputting the standard image into the heart segmentation model obtained by the magnetic resonance imaging heart segmentation model training method for carrying out parallel operation to obtain a target heart segmentation image corresponding to each standard image;
the pathology classification module is used for acquiring each heart segmentation image corresponding to each frame of cardiac magnetic resonance imaging three-dimensional data in a cardiac cycle and calculating the classification characteristic values corresponding to the object to be classified, which at least comprise: left ventricular end-diastolic volume, right ventricular end-diastolic volume, myocardial end-diastolic volume, left ventricular end-systolic volume, right ventricular end-systolic volume, myocardial end-systolic volume, left ventricular ejection fraction, right ventricular ejection fraction, myocardial ejection fraction, end-diastolic left ventricular posterior wall thickness, and end-systolic left ventricular posterior wall thickness; and for inputting the classification characteristic values into the heart pathology classification model in the magnetic resonance imaging-based heart pathology classification model training method and calculating to obtain a heart pathology classification result.
In another aspect, the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the steps of the method are implemented.
The invention has the beneficial effects that:
According to the cardiac MRI-based heart segmentation model and pathology classification model training methods and devices, the residual background, whose pixel gray levels change little over the cardiac cycle, is suppressed by a standard deviation filter, highlighting the left ventricle, the right ventricle and the myocardium. The center position of the left ventricular myocardial wall is then obtained through Canny edge detection and circular Hough transform, a rectangular mask is drawn, and the two-dimensional image is cropped based on the rectangular mask and used as input for training the preset neural network model. This greatly suppresses background interference, promotes rapid convergence of the neural network training, improves training speed and efficiency, and improves the recognition effect.
Furthermore, after the two-dimensional images obtained by short-axis slicing of each frame of cardiac magnetic resonance imaging in the cardiac cycle are segmented by the heart segmentation model, the classification feature values are calculated, and a random forest is constructed from the classification feature values and pathology classes of a plurality of samples to obtain a heart pathology classification model, so that cardiac pathology classification based on cardiac MRI is automated and the efficiency and precision of medical diagnosis are improved.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present invention are not limited to the specific details set forth above, and that these and other objects that can be achieved with the present invention will be more clearly understood from the detailed description that follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
fig. 1 is a schematic flowchart of a method for training a magnetic resonance imaging heart segmentation model according to an embodiment of the present invention;
fig. 2 shows a cardiac MRI two-dimensional image before and after filtering with a standard deviation filter during image preprocessing in the magnetic resonance imaging cardiac segmentation model training method according to an embodiment of the present invention;
fig. 3 is a cardiac MRI two-dimensional image before and after canny edge detection in image preprocessing in the magnetic resonance imaging cardiac segmentation model training method according to an embodiment of the present invention;
fig. 4 is a cardiac MRI two-dimensional image obtained by drawing a rectangular mask during image preprocessing in the magnetic resonance imaging cardiac segmentation model training method according to an embodiment of the present invention;
fig. 5 is a cardiac MRI two-dimensional image obtained by clipping during image preprocessing in the magnetic resonance imaging cardiac segmentation model training method according to an embodiment of the present invention;
fig. 6 is a diagram of a preset neural network model structure in a magnetic resonance imaging heart segmentation model training method according to an embodiment of the present invention;
fig. 7 is a diagram of the Inception layer structure adopted by the preset neural network model in a magnetic resonance imaging heart segmentation model training method according to an embodiment of the present invention;
fig. 8 is a 5-fold cross-validation classification confusion matrix for 100 pathology samples according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following embodiments and accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
It should be noted that, in order to avoid obscuring the present invention with unnecessary details, only the structures and/or processing steps closely related to the scheme according to the present invention are shown in the drawings, and other details not so relevant to the present invention are omitted.
It should be emphasized that the term "comprises/comprising" when used herein, is taken to specify the presence of stated features, elements, steps or components, but does not preclude the presence or addition of one or more other features, elements, steps or components.
As medical digitization advances, the number of medical images acquired using various devices also grows rapidly, which further increases the workload of physicians. In computer-aided diagnosis systems, image segmentation techniques are used to segment organs, tissues, lesions in medical images, which are then used to quantitatively analyze relevant clinical parameters. This is a critical step in detecting diseased areas, performing pathological analysis, and is one of the steps that clinically require a great deal of physician effort. Cardiac MRI is typically three-dimensional data requiring slice-by-slice analysis, the amount of image data generated during a cardiac cycle is large, and there is a large amount of background in each image. The existing segmentation algorithm is not easy to converge in the face of a large amount of data in the training process, and meanwhile, as the background part in the cardiac MRI image is high in proportion, the trained model weight is more biased to the recognition of the background, so that the recognition effect is not ideal. Similarly, artificial intelligence models constructed for the classification of pathologies also have similar problems.
The invention provides a magnetic resonance imaging heart segmentation model training method, as shown in fig. 1, comprising the steps of S101-S105:
step S101: the method comprises the steps of obtaining a training sample set, wherein the training sample set at least comprises a plurality of two-dimensional images obtained by conducting short-axis slicing on cardiac nuclear magnetic resonance imaging three-dimensional data of a plurality of samples in a cardiac cycle frame by frame, and segmentation labels related to a left ventricle, a right ventricle, a myocardium and a residual background on each two-dimensional image.
Step S102: the two-dimensional images are standardized to obtain standard maps, each frame of standard maps corresponding to slices at the same position in a cardiac cycle is filtered with a standard deviation filter, and the gray levels of the pixels in each standard map corresponding to the slices at that position whose gray-level standard deviation over the cardiac cycle is lower than a set value are set to zero, so as to obtain filter maps corresponding to the two-dimensional images.
Step S103: canny edge detection is carried out on each filter image, and the central position of the myocardial wall of the left ventricle and the maximum likelihood circular ring of the myocardial wall of the left ventricle are obtained through circular Hough transformation.
Step S104: and drawing a rectangular mask for the maximum likelihood circular ring corresponding to each two-dimensional image and cutting the corresponding two-dimensional image into a set size by taking the rectangular mask as a center.
Step S105: and taking each two-dimensional image which is cut in the training sample set as input, taking the segmentation labels of the left ventricle, the right ventricle, the myocardium and the residual background corresponding to each two-dimensional image as real values, and training the preset neural network model to obtain the heart segmentation model.
In step S101, the data in the training sample set are used to train the preset neural network model. Cardiac magnetic resonance imaging three-dimensional data of a plurality of samples are acquired, and the data of each sample comprise a plurality of frames obtained by repeated detections over the whole cardiac cycle. Specifically, 30 frames of three-dimensional cardiac MRI data are continuously acquired during one cardiac cycle T, covering all states of the heart from end systole to end diastole. Each frame of cardiac magnetic resonance imaging three-dimensional data can be sliced into a plurality of layers of two-dimensional images along the short-axis direction. Further, for the data in the sample training set, the left ventricle, the right ventricle and the myocardial contour in each two-dimensional image may be segmented by a medical professional, with the rest labeled as background. Illustratively, the ACDC data set may be used. The data set consists of cardiac MRI of 150 different patients, including 100 training samples and 50 test samples, and each training sample contains expert manual segmentation labels of the left ventricle, right ventricle and myocardium (epicardial contour) at end diastole (ED) and end systole (ES). The MRI data of each patient contain a series of 28-40 short-axis image slices covering the entire cardiac cycle from the bottom to the top of the left ventricle. The spatial size of each slice is on average 235-263 pixels.
As one or more examples, the end-diastole and end-systole cardiac MRI are three-dimensional data in NIfTI format. In a Python 3.5 environment, the NiBabel package is imported to read the data, and the read data are sliced frame by frame so that the three-dimensional data are converted into two-dimensional data.
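A minimal sketch of this loading and slicing step, assuming NiBabel and a hypothetical ACDC-style 4D NIfTI file name:

```python
import nibabel as nib
import numpy as np

# Hypothetical file name; ACDC-style cine MRI is stored as a 4D NIfTI volume (x, y, z, t)
img = nib.load("patient001_4d.nii.gz")
data = img.get_fdata()                      # shape: (x, y, z, t)

# Slice frame by frame along the short axis: one 2D image per slice z and frame t
slices_2d = [data[:, :, z, t]
             for t in range(data.shape[3])
             for z in range(data.shape[2])]
```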
In step S102, different MRI acquisition apparatuses and different acquisition protocols may cause gray-scale imbalance in the MRI data, producing gray-scale outliers with extremely high or extremely low values. To address this problem, gray-scale normalization is performed on the two-dimensional image data; this works well for outlier data outside the normal value range. The normalization of the two-dimensional image can adopt min-max normalization or z-score normalization; this method adopts the z-score method to normalize the gray values of the image. Gray-value standardization reduces the amount of computation while preserving gray-level differences, further optimizing the training speed.
The calculation formula of the z-score normalization method in this example is:

$y = \frac{x - \mu}{\sigma}$    (1)

where y represents the gray value after gray normalization, x represents the original gray value, μ denotes the mean of all gray values in the two-dimensional image, and σ denotes the standard deviation of all gray values.
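A one-line sketch of this z-score normalization (equation 1), assuming NumPy arrays with non-constant gray values:

```python
import numpy as np

def z_score(img):
    """Z-score gray-value normalization of a 2D slice (equation 1)."""
    return (img - img.mean()) / img.std()
```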
Further, in order to suppress the influence of the background on the preset neural network, in this embodiment, it is preferred to acquire a region where the heart is located, and cut off the background data as much as possible. Firstly, an area with strong change of pixel intensity along with time in an image sequence of a cardiac cycle is obtained through a standard deviation filter, and due to continuous contraction and relaxation of a cardiac structure in the cardiac cycle, the gray level of a pixel point at a corresponding position in a cardiac MRI two-dimensional picture is continuously changed, so that higher standard deviation is presented. And the pixel point change of the background part is small, and the corresponding standard deviation is low.
Specifically, the four-dimensional cardiac MRI image data (x, y, z, t) are read, a slice z1 is designated, and the data are converted into an (x, y) image sequence of slice z1 over time t. For every pixel position in this image sequence, the standard deviation of its gray level $p_{(x,y,t)}$ over the cardiac cycle is calculated; taking the pixel point (x1, y1) as an example:

$s_{(x_1,y_1)} = \sqrt{\frac{1}{T}\sum_{t=1}^{T}\left(p_{(x_1,y_1,t)} - \bar{p}_{(x_1,y_1)}\right)^2}$    (2)

where $p_{(x_1,y_1,t)}$ is the gray level of the cardiac MRI two-dimensional image at pixel point (x1, y1) at time t, $\bar{p}_{(x_1,y_1)}$ is the mean gray level of pixel point (x1, y1) over all cardiac MRI two-dimensional images in the whole cardiac cycle, and $s_{(x_1,y_1)}$ is the gray-level standard deviation of pixel point (x1, y1).
When the gray-level standard deviation of a certain pixel over a cardiac cycle is smaller than the set value, the pixel can be regarded as background, and its gray level or pixel intensity can be set to zero. The set value can be determined according to the specific detection equipment, detection object or practical application environment, so as to better suppress the background while retaining the cardiac structure information.
As shown in fig. 2, a region with weak intensity varying with the cardiac cycle in the cardiac MRI two-dimensional image is suppressed by standard deviation filtering, and a region with strong intensity varying with the cardiac cycle, that is, a cardiac structural part is retained, so that a filtering map is obtained.
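A minimal NumPy sketch of this standard deviation filtering, assuming the standard maps of one slice position over the cycle are stacked along the first axis; the threshold is the set value discussed above and is not fixed here:

```python
import numpy as np

def std_filter(seq, threshold):
    """Suppress static background in a cine sequence.

    seq:       array of shape (T, H, W), the standard maps of one slice
               position over a full cardiac cycle.
    threshold: set value below which a pixel is treated as background.
    """
    s = seq.std(axis=0)                 # per-pixel std over the cycle (equation 2)
    filtered = seq.copy()
    filtered[:, s < threshold] = 0      # zero the gray levels of low-variation pixels
    return filtered
```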
In some embodiments, before step S102, before the normalizing the two-dimensional image to obtain the standard map, the method further includes: performing size normalization processing on the two-dimensional image, and filling the two-dimensional image to 256 × 256 by using the minimum pixel value of the two-dimensional image when the size of the two-dimensional image is less than 256 × 256; in the case where the two-dimensional image size is larger than 256 × 256, the two-dimensional image is clipped to 256 × 256.
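A sketch of this size normalization, under the assumption that smaller images are padded at the bottom/right with the minimum pixel value and larger ones are center-cropped (the exact placement is not specified in the text):

```python
import numpy as np

def resize_256(img):
    """Pad with the minimum pixel value, or crop, to 256 x 256."""
    target = 256
    h, w = img.shape
    out = np.full((max(h, target), max(w, target)), img.min(), dtype=img.dtype)
    out[:h, :w] = img                                    # pad smaller images
    top = (out.shape[0] - target) // 2
    left = (out.shape[1] - target) // 2
    return out[top:top + target, left:left + target]     # crop larger images
```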
In step S103, the Canny edge detection algorithm may mainly include (1) to (4):
(1) Image smoothing. Gaussian smoothing filtering is used to remove noise in the image. In the image edge detection process, image edges and noise are difficult to distinguish, and an edge detection algorithm alone cannot fully counteract the influence of noise on the detection process and result, so the original image needs to be preprocessed. Common filtering methods in image preprocessing are mean filtering, median filtering and Gaussian filtering. Compared with mean filtering and median filtering, Gaussian filtering can well preserve the gray-level distribution of the image while smoothing it. Let f(x, y) be the input data, $f_s(x, y)$ the image smoothed by the Gaussian convolution, and G(x, y) the Gaussian function; the expressions are as follows:

$G(x,y) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{x^2+y^2}{2\sigma^2}\right)$    (3)

$f_s(x,y) = f(x,y) * G(x,y)$    (4)

Preferably, σ in the Gaussian function is 3.
(2) Calculating the image gradient strength and direction. The basic idea of the Canny algorithm is to find the positions in the image where the gray-level intensity changes most strongly, i.e. the gradient direction. The gradient of each pixel in the smoothed image is calculated with the Sobel operator. First, the gradients Gx and Gy in the horizontal (x) and vertical (y) directions are obtained using the following convolution kernels Sx and Sy, respectively:

$S_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad S_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$

Then, the gradient amplitude of each pixel point is obtained using formula (5):

$G = \sqrt{G_x^2 + G_y^2}$    (5)
At positions with drastic changes, i.e. at boundaries, a large gradient magnitude G is obtained; however, these boundaries are usually very thick, making it difficult to locate their true positions. To locate the boundaries, the direction angle θ of the gradient is further calculated, as shown in formula (6):

$\theta = \arctan\left(\frac{G_y}{G_x}\right)$    (6)
(3) Non-maximum suppression is applied to eliminate false edge detections. For each pixel, the gradient direction is approximated to one of (0, 45, 90, 135, 180, 225, 270, 315) degrees, and the gradient strength of the pixel is compared with that of the two pixels along the positive and negative gradient directions; if the gradient strength of the pixel is the largest, it is kept and regarded as an edge, otherwise it is suppressed. After this processing, only the brightest thin line at the boundary remains and the image edges are obviously thinned. However, some of these thin-line portions may be local maxima caused by noise or other factors.
(4) A double threshold is applied to determine possible edges. In this embodiment, the Canny algorithm applies a dual-threshold technique, that is, an upper threshold T1 and a lower threshold T2 are set. If the gradient value of a pixel in the image exceeds T1 it is called a strong edge; if it is less than T1 but greater than T2 it is called a weak edge; if it is less than T2 it is not an edge. By setting T1 sufficiently high, the gradient of strong-edge pixels is large enough (the change is drastic enough) that strong edges are necessarily edge points. To judge whether a weak edge is an edge or noise, if a strong-edge pixel exists among the 8 neighboring pixels around a weak-edge pixel, the weak-edge point is retained as an edge pixel to supplement the strong edge; otherwise it is removed. Preferably, the ratio of the high to low thresholds T1:T2 is 2:1.
As shown in fig. 3, when canny edge detection is used, the σ value of the gaussian function can be set according to the requirements of the application scenario, and in the application scenario, the value of σ is preferably 3, so that more concise cardiac structure edge lines can be retained.
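An OpenCV-based sketch of this Canny step, keeping σ = 3 for the Gaussian smoothing and the 2:1 high/low threshold ratio; the absolute threshold value is an illustrative assumption:

```python
import cv2
import numpy as np

def canny_edges(filter_map, low=50):
    """Canny edge detection on a filtered slice (rescaled to 0-255)."""
    img = cv2.normalize(filter_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    smoothed = cv2.GaussianBlur(img, ksize=(0, 0), sigmaX=3)    # 2D Gaussian, sigma = 3
    return cv2.Canny(smoothed, threshold1=low, threshold2=2 * low)  # T1:T2 = 2:1
```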
Further, for hough transform, the principle is to apply the transform between two coordinate spaces, and by using the duality of points and lines, map a straight line or a curve with the same shape in one space to one point in the other coordinate space to form a peak value, thereby converting the problem of detecting an arbitrary shape into a statistical peak value problem. Taking the Circular Hough Transform (CHT) as an example, the Hough Transform first extracts the edge of the image, and then maps each point in the edge image into another three-dimensional space, which is called Hough space, by using the equation of a circle. The first two dimensions of the Hough space represent the coordinates of the center of a circle to be detected, and the third dimension represents the radius of the circle to be detected. Each point in the edge image is mapped into a conical surface in the Hough space, the mapping of the points on the same circle in the Hough space intersects with one point, and the corresponding coordinates of the points are the parameters of the circle to be detected.
The equation of a circle in two-dimensional space is:

$(x-a)^2 + (y-b)^2 = r^2$    (7)

Any point $(x_i, y_i)$ on the circle is mapped to a surface in the three-dimensional parameter space a-b-r, whose equation is:

$(x_i-a)^2 + (y_i-b)^2 = r^2$    (8)

Each point on the circle undergoes the above mapping. Provided that the conical surfaces corresponding to every point intersect at a point $(a_0, b_0, r_0)$, all the points lie on the circle determined by these three parameters, and this circle is the circle to be detected.
Since the left ventricle can be approximated to a circle, the detection of a circle using hough transform is used to detect the left ventricle area as a region of interest based on Canny edge detection, and its center position and its maximum likelihood circle are determined.
In step S104, as shown in fig. 4 and 5, a rectangular mask is drawn on the maximum likelihood circle corresponding to each two-dimensional image, and the corresponding two-dimensional image is clipped to a set size with the rectangular mask as the center, and when 256 × 256 is used for the size normalization processing, the set size may be 128 × 128. Most of the background image can be eliminated by the clipping in step S104.
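A sketch of the circle detection and cropping, using OpenCV's circular Hough transform (which runs its own Canny edge detection internally); all HoughCircles parameter values are illustrative assumptions:

```python
import cv2
import numpy as np

def crop_around_lv(image_256, size=128):
    """Detect the left-ventricle ring with a circular Hough transform and
    crop a size x size patch centered on it (assumes a circle is found)."""
    img = cv2.normalize(image_256, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=100, param2=20, minRadius=10, maxRadius=60)
    cx, cy, _ = np.round(circles[0, 0]).astype(int)      # maximum-likelihood circle
    half = size // 2
    h, w = image_256.shape
    y0 = min(max(cy - half, 0), h - size)
    x0 = min(max(cx - half, 0), w - size)
    return image_256[y0:y0 + size, x0:x0 + size]
```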
In some embodiments, after step S104, that is, after the rectangular mask is drawn on the maximum likelihood ring corresponding to each two-dimensional image and the corresponding two-dimensional image is cropped to a set size centered on the rectangular mask, a data enhancement operation is performed on each cropped two-dimensional image, the data enhancement operation comprising random rotation by -5 to 5 degrees, random translation by -5 to 5 mm, random scaling by -0.2 to 0.2, addition of Gaussian noise with zero mean and a standard deviation of 0.01, and/or elastic deformation using B-spline interpolation.
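A SciPy-based sketch of these data enhancement operations; the mm-to-pixel conversion for the translation is assumed to be roughly one pixel per millimetre, and the B-spline elastic deformation is omitted for brevity:

```python
import numpy as np
from scipy import ndimage

def augment(img, rng=np.random.default_rng()):
    """Random rotation (-5 to 5 deg), translation (-5 to 5 px), scaling
    (-0.2 to 0.2) and zero-mean Gaussian noise with standard deviation 0.01."""
    out = ndimage.rotate(img, rng.uniform(-5, 5), reshape=False, order=1)
    out = ndimage.shift(out, rng.uniform(-5, 5, size=2), order=1)
    zoom = 1.0 + rng.uniform(-0.2, 0.2)
    out = ndimage.zoom(out, zoom, order=1)
    # restore the original shape after zooming (pad or crop)
    h, w = img.shape
    pad_h, pad_w = max(h - out.shape[0], 0), max(w - out.shape[1], 0)
    out = np.pad(out, ((0, pad_h), (0, pad_w)), mode="edge")[:h, :w]
    return out + rng.normal(0.0, 0.01, size=(h, w))
```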
In step S105, the cropped two-dimensional image, with the background removed, is used as the input for training the preset neural network model. In clinical practice, different MRI acquisition equipment leads to different MRI data sizes, and neural network training requires a significant amount of time. To increase the training rate, this example feeds data into the neural network in batches, so the cardiac MRI data size needs to be standardized to a unified size.
The preset neural network model, as shown in fig. 6, includes: an input layer, an Inception layer, a first channel cascade layer, a first dense connection block, a second channel cascade layer, a first down-sampling transition layer, a second dense connection block, a third channel cascade layer, a second down-sampling transition layer, a third dense connection block, a fourth channel cascade layer, a third down-sampling transition layer, a fourth dense connection block, a first element-level addition layer, a first up-sampling transition layer, a second element-level addition layer, a fifth dense connection block, a third element-level addition layer, a second up-sampling transition layer, a fourth element-level addition layer, a sixth dense connection block, a fifth element-level addition layer, a third up-sampling transition layer, a sixth element-level addition layer, a seventh dense connection block, a seventh element-level addition layer, a 1 × 1 convolution and softmax layer, an Argmax layer and an output layer which are connected in sequence;
the first channel cascade layer cascades the 3 feature maps output by the Inception layer.
And the second channel cascade layer cascades the characteristic diagrams output by the first channel cascade layer and the first dense connection block.
And the third channel cascade layer cascades the characteristic graphs output by the first down-sampling transition layer and the second dense connection block.
And the fourth channel cascade layer cascades the feature maps output by the second downsampling transition layer and the third dense connection block.
And the first element-level addition layer performs element-level addition on the feature map output by the third downsampled transition layer through the first skip connection layer and the feature map output by the fourth layer of dense connection blocks.
And the second element-level addition layer performs element-level addition on the feature map output by the fourth channel cascade layer through the second jump connection layer and the feature map output by the first up-sampling transition layer.
And the third element-level addition layer performs element-level addition on the feature map output by the second element-level addition layer through the third jump connection layer and the feature map output by the fifth dense connection block.
And the fourth element-level addition layer performs element-level addition on the feature map output by the third channel cascade layer through the fourth jump connection layer and the feature map output by the second up-sampling transition layer.
And the fifth element-level addition layer performs element-level addition on the feature graph output by the fourth element-level addition layer through the fifth jump connection layer and the feature graph output by the sixth dense connection block.
And the sixth element-level addition layer performs element-level addition on the feature map output by the second channel cascade layer through the sixth jump connection layer and the feature map output by the third up-sampling transition layer.
And the seventh element-level addition layer performs element-level addition on the feature graph output by the sixth element-level addition layer through the seventh skip connection layer and the feature graph output by the seventh dense connection block.
Wherein, the Inception layer performs convolution operations at three scales of 3 × 3, 5 × 5 and 7 × 7; the first dense connection block, the second dense connection block, the third dense connection block, the fifth dense connection block, the sixth dense connection block and the seventh dense connection block adopt the Dense Block network structure, and each densely connected layer is composed of batch normalization (BN), a ReLU activation function, a 3 × 3 convolution and Dropout.
The first down-sampling transition layer, the second down-sampling transition layer and the third down-sampling transition layer are composed of batch normalization (BN), a ReLU activation function, a 3 × 3 convolution, Dropout and 2 × 2 max pooling.
The first up-sampling transition layer, the second up-sampling transition layer and the third up-sampling transition layer use a 3 × 3 transposed convolution with a stride of 2.
The fourth dense connection block is a bottleneck layer; its main purpose is to reduce the number of parameters and thereby the amount of computation, so that after dimensionality reduction the data can be trained and features extracted more effectively and intuitively.
The first, second, third, fourth, fifth, sixth and seventh hopping connection layers consist of BN, ReLU activation function, 1 × 1 convolution and Dropout.
Specifically, the cardiac MRI two-dimensional image has the problems of uneven gray scale distribution, complex image characteristics, artifacts, and the like, and the conventional convolutional neural network has the problems of excessive parameters, insufficient characteristic application, and the like. Aiming at MRI, the invention constructs an improved neural network model for ventricular structure segmentation, and the neural network model can segment the left ventricle, the right ventricle and the myocardium simultaneously.
As shown in fig. 7, the invention uses an improved Inception module. The original Inception module is composed of a max-pooling operation and convolution operations with kernel sizes of 1 × 1, 3 × 3 and 5 × 5, each with a stride of 2; the improved Inception module is composed of convolution operations with kernel sizes of 3 × 3, 5 × 5 and 7 × 7. The max-pooling operation is removed and a convolution with a larger 7 × 7 kernel is introduced, which enlarges the receptive field of the model, facilitates the detection of large target regions, and reduces false-positive misjudgments of regions similar to the target.
In the preset neural network, the convolution layers perform convolution operations on the image with the aim of extracting effective image features. The pooling layers condense the features extracted by the convolution layers, extracting more effective features, reducing parameters and improving the training efficiency of the model. In this neural network, part of the structure adopts max pooling, which, compared with other pooling methods, preserves the texture information of the image more effectively and thus facilitates high-precision segmentation. ReLU is used as the activation function to provide the nonlinear modeling capability of the network. Without an activation function the network can only express a linear mapping, and the entire network is then equivalent to a single-layer neural network even if it has more hidden layers; only after the activation function is added does the deep neural network acquire its layered nonlinear mapping learning capability. Using ReLU as the activation function also suppresses the vanishing gradient problem to some extent. Compared with other activation functions, ReLU has the further advantages of fast computation and fast convergence of the neural network, which is significant for the deep neural network constructed by this method. The formula of ReLU is:
ReLU=max(0,x) (9)
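The text does not name a deep learning framework; assuming PyTorch, the following is a minimal sketch of the main building blocks described above (densely connected layers, Dense Blocks, and the down-/up-sampling transition layers). The Dropout rate and the channel numbers in the transposed-convolution example are assumptions.

```python
import torch
from torch import nn

class DenseLayer(nn.Sequential):
    """One densely connected layer: BN -> ReLU -> 3x3 conv -> Dropout."""
    def __init__(self, in_ch, growth=12, drop=0.2):
        super().__init__(nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
                         nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                         nn.Dropout2d(drop))

class DenseBlock(nn.Module):
    """Dense Block: each layer sees the concatenation of all previous feature maps;
    the block outputs only the newly produced maps (n_layers * growth channels)."""
    def __init__(self, in_ch, n_layers, growth=12, drop=0.2):
        super().__init__()
        self.layers = nn.ModuleList(
            [DenseLayer(in_ch + i * growth, growth, drop) for i in range(n_layers)])
    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats[1:], dim=1)

class TransitionDown(nn.Sequential):
    """BN -> ReLU -> 3x3 conv -> Dropout -> 2x2 max pooling."""
    def __init__(self, in_ch, out_ch, drop=0.2):
        super().__init__(nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
                         nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                         nn.Dropout2d(drop), nn.MaxPool2d(2))

# Up-sampling transition: 3x3 transposed convolution with stride 2
# (84 channels chosen to match the 16x16x84 -> 32x32x84 step; illustrative only)
transition_up = nn.ConvTranspose2d(84, 84, kernel_size=3, stride=2,
                                   padding=1, output_padding=1)
```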
Referring to fig. 6, the operation principle of the neural network model includes the following steps:
Step 1: first, the preprocessed two-dimensional MRI is read in and input into the improved neural network model with a batch size of 16. The 128 × 128 × 1 image first enters the improved Inception layer, and the output is 3 feature maps with channel dimension 128 × 128 × 12.
Step 2: the 3 feature maps from the Inception layer are input into the first channel cascade layer, and the channel dimension of the output feature map is 128 × 128 × 36.
Step 3: the feature map from the first channel cascade layer is input into the first dense connection block (3 layers, k = 12), and the channel dimension of the output feature map is 128 × 128 × 36.
Step 4: the feature map obtained from the first dense connection block and the feature map from the first channel cascade layer are input into the second channel cascade layer, and the channel dimension of the output feature map is 128 × 128 × 72.
Step 5: the feature map from the second channel cascade layer is input into the first down-sampling transition layer, and the channel dimension of the output feature map is 64 × 64 × 72.
Step 6: the feature map output by the first down-sampling transition layer is input into the second dense connection block (4 layers, k = 12), and the channel dimension of the output feature map is 64 × 64 × 48.
Step 7: the feature map output by the second dense connection block and the feature map from the first down-sampling transition layer are input into the third channel cascade layer, and the channel dimension of the output feature map is 64 × 64 × 120.
Step 8: the feature map from the third channel cascade layer is input into the second down-sampling transition layer, and the channel dimension of the output feature map is 32 × 32 × 60.
Step 9: the feature map output by the second down-sampling transition layer is input into the third dense connection block (5 layers, k = 12), and the channel dimension of the output feature map is 32 × 32 × 60.
Step 10: the feature map from the third dense connection block and the feature map from the second down-sampling transition layer are input into the fourth channel cascade layer, and the channel dimension of the output feature map is 32 × 32 × 120.
Step 11: the feature map output by the fourth channel cascade layer is input into the third down-sampling transition layer, and the channel dimension of the output feature map is 16 × 16 × 120.
Step 12: the feature map from the third down-sampling transition layer is input into the fourth dense connection block (7 layers, k = 12), and the channel dimension of the output feature map is 16 × 16 × 84.
Step 13: the feature map output by the third down-sampling transition layer is passed through the first skip connection layer and added element-wise to the feature map from the fourth dense connection block, and the channel dimension of the output feature map is 16 × 16 × 84.
Step 14: the feature map output by the first element-level addition layer is input into the first up-sampling transition layer, and the channel dimension of the output feature map is 32 × 32 × 84; the up-sampling transition layer (Transition Up) restores the feature map resolution by a 3 × 3 transposed convolution with a stride of 2.
Step 15: the feature map output by the fourth channel cascade layer through the second jump connection layer and the feature map output by the first up-sampling transition layer are input into the second element-level addition layer for element-level addition, and the channel dimension of the output feature map is 32 × 32 × 84.
Step 16: the feature map output by the second element-level addition layer is input into the fifth dense connection block (5 layers, k = 12), and the channel dimension of the output feature map is 32 × 32 × 60.
Step 17: the feature map output by the second element-level addition layer through the third jump connection layer and the feature map output by the fifth dense connection block are input into the third element-level addition layer for element-level addition, and the channel dimension of the output feature map is 32 × 32 × 60.
Step 18: the feature map output by the third element-level addition layer is input into the second up-sampling transition layer, and the channel dimension of the output feature map is 64 × 64 × 60.
Step 19: the feature map output by the third channel cascade layer through the fourth jump connection layer and the feature map output by the second up-sampling transition layer are input into the fourth element-level addition layer, and the channel dimension of the output feature map is 64 × 64 × 60.
Step 20: the feature map output by the fourth element-level addition layer is input into the sixth dense connection block (4 layers, k = 12), and the channel dimension of the output feature map is 64 × 64 × 48.
Step 21: the feature map output by the fourth element-level addition layer through the fifth skip connection layer and the feature map output by the sixth dense connection block are input into the fifth element-level addition layer, and the channel dimension of the output feature map is 64 × 64 × 48.
Step 22: and inputting the feature map output by the fifth element stage addition layer into a third up-sampling transition layer, wherein the channel dimension of the output feature map is 128 multiplied by 48.
Step 23: inputting the feature map output by the second channel cascade layer through the sixth jump connection layer and the feature map output by the third up-sampling transition layer into the sixth element-level addition layer for element-level addition, wherein the channel dimension of the output feature map is 128 multiplied by 48.
Step 24: the feature map output by the sixth element-level addition layer is input into the seventh dense-connected block (3 layers, k is 12), and the output feature map channel dimension is 128 × 128 × 36.
Step 25: and inputting the feature map output by the sixth element-level addition layer through the seventh skip connection layer and the feature map output by the seventh dense connection block into the seventh element-level addition layer for element-level addition, wherein the channel dimension of the output feature map is 128 multiplied by 36.
Step 26: inputting the feature map output by the seventh element-level addition layer into a 1 × 1 convolution and softmax layer, and performing convolution and softmax operations, wherein the convolution kernel size is 1 × 1, the step size is 1, and the padding is same as the name, and the layer plays a role in classifying one pixel. Finally, a 128 × 128 × 4 segmentation feature map is obtained.
Step 27: and performing argmax operation on the segmentation feature map obtained in the step 26, and obtaining a final segmentation map of 128 × 128 × 1 through argmax.
Further, as one or more embodiments, the neural network structure constructed by the invention uses a specially designed weighted cross entropy as the loss function, and an Adam optimizer is used to minimize this loss.
In cardiac MRI, pixels belonging to cardiac structures are foreground pixels, while pixels belonging to the tissues and organs surrounding the ventricles are background pixels. In a whole cardiac MRI most pixels are background, so the network prediction tends to be biased toward the background class; if the foreground pixels are not given sufficient weight, training effectively favors the background pixels and produces a large number of misclassified pixels. The cross entropy loss measures the cumulative error over all voxels by computing, for each voxel, the error probability between the predicted class and the target class. Since plain cross entropy cannot resolve this class imbalance, the invention designs a weighting scheme and computes a weighted cross entropy loss.
The weight map generated from the manually segmented ground-truth annotation is used in the weighted cross entropy formula to compute the loss at each voxel; a voxel is the unit of the cardiac MRI three-dimensional data and corresponds to a pixel of the two-dimensional picture obtained after slicing. The weighted cross entropy loss L_WCE is calculated by the following formula:
L_WCE = −Σ_{x_i∈X} w_map(x_i) · log p(t_i | x_i; W)    (10)

where W = (w_1, w_2, …, w_l) is the set of learnable weights, w_l is the weight matrix corresponding to the l-th layer of the deep learning network, X is a training sample, t_i is the target class label corresponding to voxel x_i ∈ X, w_map(x_i) is the weight estimated at each voxel x_i, and p(t_i | x_i; W) is the prediction of the final output layer of the network for voxel x_i.
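A hedged sketch of how equation (10) might be computed per pixel follows. The weight-map construction shown here (inverse class frequency derived from the annotation) is an assumption; the patent only states that the map is generated from the manual segmentation. Function names and the four-class layout are illustrative.

```python
import torch
import torch.nn.functional as F

def weight_map_from_labels(target: torch.Tensor, n_classes: int = 4) -> torch.Tensor:
    """Assumed weighting scheme: inverse class frequency from the annotation,
    so sparse foreground classes (LV, RV, myocardium) get larger weights than
    the dominant background class."""
    freq = torch.bincount(target.flatten(), minlength=n_classes).float()
    class_w = freq.sum() / (n_classes * freq.clamp(min=1.0))
    return class_w[target]                     # per-pixel weight w_map(x_i)

def weighted_cross_entropy(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # logits: (N, 4, 128, 128) from the 1x1 convolution layer; target: (N, 128, 128) labels.
    log_p = F.log_softmax(logits, dim=1)
    log_p_target = log_p.gather(1, target.unsqueeze(1)).squeeze(1)  # log p(t_i | x_i; W)
    w_map = weight_map_from_labels(target)
    return -(w_map * log_p_target).sum()       # L_WCE of equation (10)

logits = torch.randn(2, 4, 128, 128)
target = torch.randint(0, 4, (2, 128, 128))
loss = weighted_cross_entropy(logits, target)  # would be minimized with torch.optim.Adam
print(loss)
```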
Further, in some embodiments, a portion of the data in the training sample set is used to test the segmentation effect of the heart segmentation model. Specifically, the present embodiment uses the Jaccard coefficient, the Dice coefficient and the Hausdorff distance, which are standard measures in the evaluation of medical image processing, to evaluate the segmentation of the test data.
The Jaccard coefficient, also called the Jaccard Similarity Coefficient, compares the similarity and difference between sample sets; the larger the Jaccard coefficient, the higher the similarity. It is defined by the following equation, where P and G are the sets of pixels enclosed by the predicted contour and the ground-truth contour, respectively:
J(P, G) = |P ∩ G| / |P ∪ G|    (11)
The Dice coefficient measures the degree of similarity between the predicted label and the ground truth; its value lies between 0 and 1, and a higher Dice coefficient indicates better segmentation performance. It is defined as follows:
Dice(P, G) = 2|P ∩ G| / (|P| + |G|)    (12)
The Hausdorff distance is a measure of the distance between two point sets: it quantifies the maximum degree of mismatch between them, with a larger value indicating that the two point sets match less closely. It is defined as follows:
H(P, G) = max(h(P, G), h(G, P))    (13)

h(P, G) = max_{p∈P} min_{g∈G} ‖p − g‖    (14)
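For concreteness, the three metrics of equations (11) to (14) could be computed from binary masks as in the sketch below. Treating the masks as point sets of foreground coordinates and using scipy's directed Hausdorff routine are implementation assumptions, not details from the patent.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def jaccard(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0                    # equation (11)

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0              # equation (12)

def hausdorff(pred: np.ndarray, gt: np.ndarray) -> float:
    p = np.argwhere(pred)                                     # foreground coordinates
    g = np.argwhere(gt)
    # Symmetric Hausdorff distance H(P, G) = max(h(P, G), h(G, P)), equations (13)-(14).
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

pred = np.zeros((128, 128), bool); pred[40:80, 40:80] = True
gt   = np.zeros((128, 128), bool); gt[42:82, 42:82] = True
print(jaccard(pred, gt), dice(pred, gt), hausdorff(pred, gt))
```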
The evaluation of the segmentation effect is shown in Table 1 below; the deep learning network trained by the invention produces high-precision segmentation images.
TABLE 1

                        Left ventricle    Right ventricle    Myocardium
Jaccard coefficient     0.904             0.859              0.765
Dice coefficient        0.949             0.923              0.866
Hausdorff distance      4.037             8.227              4.838
In another aspect, the present invention further provides a magnetic resonance imaging cardiac segmentation method, including steps S201 to S202:
step S201: a two-dimensional image of cardiac magnetic resonance imaging to be segmented is acquired.
Step S202: the two-dimensional image is input into the heart segmentation model obtained by the magnetic resonance imaging heart segmentation model training method in steps S101 to S105, and the target heart segmentation image is obtained through calculation.
In the present embodiment, the cardiac MRI two-dimensional image is segmented based on the cardiac segmentation model obtained in steps S101 to S105. The two-dimensional graph is obtained by slicing three-dimensional data of cardiac magnetic resonance imaging. In some embodiments, the two-dimensional image may also be cropped to remove portions of the background.
On the other hand, the invention also provides a heart pathology classification model training method based on magnetic resonance imaging, which comprises the following steps S301 to S305:
step S301: acquiring multi-frame cardiac magnetic resonance imaging three-dimensional data of a plurality of samples in a cardiac cycle and corresponding pathological classification, wherein the pathological classification at least comprises the following steps: normal, old myocardial infarction, right ventricular abnormalities, dilated heart disease, and hypertrophic heart disease.
Step S302: and (3) slicing the multi-frame cardiac magnetic resonance imaging three-dimensional data of each sample to obtain a plurality of two-dimensional images, inputting the two-dimensional images into the cardiac segmentation model obtained by the magnetic resonance imaging cardiac segmentation model training method in the steps S101 to S105, and obtaining the cardiac segmentation image of each slice corresponding to the cardiac magnetic resonance imaging three-dimensional data of each sample.
Step S303: calculating a plurality of classification characteristic values corresponding to each sample based on all cardiac segmentation maps corresponding to each position slice of each sample in one cardiac cycle, wherein the classification characteristic values at least comprise: left ventricular end-diastolic volume, right ventricular end-diastolic volume, myocardial end-diastolic volume, left ventricular end-systolic volume, right ventricular end-systolic volume, myocardial end-systolic volume, left ventricular ejection fraction, right ventricular ejection fraction, myocardial ejection fraction, end-diastolic left ventricular posterior wall thickness, and end-systolic left ventricular posterior wall thickness.
Step S304: obtaining the classification characteristic values and the pathology classification corresponding to each sample, and establishing a sample training set.
Step S305: constructing a random forest based on the sample training set to obtain a heart pathology classification model.
In step S301, multiple frames of cardiac magnetic resonance imaging three-dimensional data covering the entire cardiac cycle, including frames from end systole to end diastole, are acquired for each sample; each frame of three-dimensional data can be divided into multiple two-dimensional pictures along short-axis slices. The pathology class of each sample is diagnosed and labeled by a specialist physician. The present embodiment builds a classifier for five types of heart pathology: dilated cardiomyopathy (DCM), hypertrophic cardiomyopathy (HCM), chronic myocardial infarction (MINF), right ventricular abnormality (ARV) and normal subjects (NOR), with the classification criteria shown in Table 2:
TABLE 2
[Table 2, listing the classification criteria for the five pathology categories, is provided as an image in the original publication.]
In step S302, each frame of cardiac magnetic resonance imaging three-dimensional data of each sample in the cardiac cycle is sliced into a plurality of two-dimensional pictures, and these two-dimensional pictures are segmented with the cardiac segmentation model obtained in steps S101 to S105 to obtain cardiac segmentation maps.
In step S303, the classification feature values are calculated from the cardiac segmentation maps obtained by slicing and segmenting each frame of cardiac magnetic resonance imaging three-dimensional data of the cardiac cycle.
Specifically, the classifier uses the following features: the volumes of the left ventricle, the right ventricle and the myocardium at end diastole (ED) and end systole (ES); the ejection fractions of the left ventricle, the right ventricle and the myocardium; the wall thickness of the left ventricle at end diastole and at end systole; and the mean and standard deviation of various features.
The ventricular volume is the sum of the left-ventricle, right-ventricle or myocardium pixel points over all MRI slices of a patient, calculated by the following formula:
V = Σ_{i,j,k} n × s    (15)

where n represents the number of pixels of the structure in a slice and s represents the spatial resolution; in this embodiment s ranges from 1.37 to 1.68 mm²/pixel.
The ejection fraction (EF) is the ratio of the difference between the end-diastolic volume and the end-systolic volume of the left ventricle, the right ventricle or the myocardium to the end-diastolic volume:

EF = (EDV − ESV) / EDV × 100%    (16)

where EDV is the end-diastolic volume of the left ventricle, right ventricle or myocardium, and ESV is the corresponding end-systolic volume.
Specifically, the value of the left ventricular end-diastolic volume is the sum of the left-ventricle pixel points in all heart segmentation maps of the frame corresponding to end diastole of the sample's cardiac cycle; the value of the right ventricular end-diastolic volume is the sum of the right-ventricle pixel points in all heart segmentation maps of that frame; the value of the myocardial end-diastolic volume is the sum of the myocardium pixel points in all heart segmentation maps of that frame; the value of the left ventricular end-systolic volume is the sum of the left-ventricle pixel points in all heart segmentation maps of the frame corresponding to end systole of the cardiac cycle; the value of the right ventricular end-systolic volume is the sum of the right-ventricle pixel points in all heart segmentation maps of that frame; the value of the myocardial end-systolic volume is the sum of the myocardium pixel points in all heart segmentation maps of that frame. The left ventricular ejection fraction is the ratio of the difference between the left ventricular end-diastolic volume and the left ventricular end-systolic volume to the left ventricular end-diastolic volume; the right ventricular ejection fraction is the ratio of the difference between the right ventricular end-diastolic volume and the right ventricular end-systolic volume to the right ventricular end-diastolic volume; and the myocardial ejection fraction is the ratio of the difference between the myocardial end-diastolic volume and the myocardial end-systolic volume to the myocardial end-diastolic volume.
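A minimal sketch of how the volumes of equation (15) and the ejection fractions of equation (16) could be derived from the segmentation maps is given below. The label encoding (1 = right ventricle, 2 = myocardium, 3 = left ventricle) and the spatial resolution value are assumptions for illustration; the patent does not fix them here.

```python
import numpy as np

def structure_volume(seg_slices, label, s=1.5):
    """Equation (15): sum of labelled pixels over all short-axis slices of one
    frame, scaled by the spatial resolution s (assumed mm^2 per pixel)."""
    return sum(float((seg == label).sum()) for seg in seg_slices) * s

def ejection_fraction(edv, esv):
    # Equation (16): EF = (EDV - ESV) / EDV * 100%.
    return (edv - esv) / edv * 100.0

LV = 3  # assumed label encoding: 1 = right ventricle, 2 = myocardium, 3 = left ventricle
ed_slices = [np.random.randint(0, 4, (128, 128)) for _ in range(10)]  # end-diastolic frame
es_slices = [np.random.randint(0, 4, (128, 128)) for _ in range(10)]  # end-systolic frame
edv = structure_volume(ed_slices, LV)
esv = structure_volume(es_slices, LV)
print(edv, esv, ejection_fraction(edv, esv))
```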
In steps S304 and S305, the cardiac pathology classification model is constructed by the random forest algorithm. The basic unit of a random forest is the decision tree; each tree is a classifier, and an ensemble learning method integrates the classification votes of all trees, with the class receiving the most votes designated as the final output. When the random forest is generated, if the size of the training set is N, then for each tree N training samples are drawn randomly with replacement from the training set to form that tree's training set, which guarantees the randomness of each tree's training and makes the classification result of the random forest more reliable. If the feature dimension of each sample is M, a constant m << M is specified, m features are randomly selected from the M features, and the optimal feature among these m is chosen at each split of the tree. Each tree is grown as far as possible and there is no pruning. Owing to this introduced randomness, the random forest does not easily overfit and has excellent noise resistance.
Further, the sample training set is trained with 5-fold cross validation: the training set is randomly divided into 5 folds, the model is trained on 4 of the 5 folds and validated on the remaining fold; the error of each prediction is recorded; this process is repeated until every fold has served as the validation set; the mean of the 5 recorded errors, called the cross-validation error, can be used as a criterion to measure model performance.
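A hedged sketch of the random-forest classifier and the 5-fold cross-validation described above, using scikit-learn. The number of trees and the synthetic feature matrix are illustrative assumptions; in practice the rows would be the classification feature values computed per patient.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Feature matrix: one row per patient, columns are the classification feature
# values (volumes, ejection fractions, wall thicknesses, ...). Synthetic here.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 11))                 # 100 patients, 11 features
y = rng.integers(0, 5, size=100)               # 5 pathology classes (NOR, MINF, DCM, HCM, ARV)

clf = RandomForestClassifier(
    n_estimators=200,       # number of trees (assumed)
    max_features="sqrt",    # m << M features tried at each split
    bootstrap=True,         # each tree trained on a bootstrap sample of size N
)
scores = cross_val_score(clf, X, y, cv=5)      # 5-fold cross-validation accuracy per fold
print(scores.mean())                            # mean accuracy; cross-validation error is 1 - mean
```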
In one embodiment with 100 patients (20 cases for each of the 5 pathologies), the 5-fold cross-validation classification confusion matrix is shown in fig. 8; the pathology classification accuracy of the heart pathology classifier obtained in this embodiment is 95%, i.e. a high-accuracy pathology classification result is obtained.
On the other hand, the invention also provides a heart pathology classification method based on magnetic resonance imaging, which comprises the following steps S401 to S404:
step S401: obtaining a plurality of frames of cardiac nuclear magnetic resonance imaging three-dimensional data of an object to be classified in a cardiac cycle, and performing short-axis slicing on each frame of cardiac nuclear magnetic resonance imaging three-dimensional data to obtain a plurality of two-dimensional images.
Step S402: and inputting each two-dimensional image into the heart segmentation model obtained by the magnetic resonance imaging heart segmentation model training method in the steps S101 to S105, and calculating to obtain a heart segmentation picture corresponding to each two-dimensional image.
Step S403: calculating a plurality of classification characteristic values corresponding to the object to be classified according to each heart segmentation picture, wherein the classification characteristic values at least comprise: left ventricular end-diastolic volume, right ventricular end-diastolic volume, myocardial end-diastolic volume, left ventricular end-systolic volume, right ventricular end-systolic volume, myocardial end-systolic volume, left ventricular ejection fraction, right ventricular ejection fraction, myocardial ejection fraction, end-diastolic left ventricular posterior wall thickness, and end-systolic left ventricular posterior wall thickness.
Step S404: and inputting the classification characteristic values into the heart pathology classification model of the heart pathology classification model training method based on magnetic resonance imaging in the steps S301 to S305 to obtain a corresponding pathology classification result of the three-dimensional data of the heart nuclear magnetic resonance imaging to be classified.
In another aspect, the present invention also provides a cardiac segmentation and pathology classification apparatus for magnetic resonance imaging, comprising:
the input module is used for acquiring multi-frame cardiac magnetic resonance imaging three-dimensional data of the object to be classified in one cardiac cycle.
A preprocessing module, used for performing short-axis slicing on each frame of cardiac magnetic resonance imaging three-dimensional data in a cardiac cycle to obtain a plurality of two-dimensional images.
And a segmentation module, used for acquiring each standard image and inputting it into the heart segmentation model obtained by the magnetic resonance imaging heart segmentation model training method in steps S101 to S105 for operation, so as to obtain a target heart segmentation image corresponding to each standard image.
The pathology classification module is used for acquiring each heart segmentation image corresponding to each frame of cardiac magnetic resonance imaging three-dimensional data in a cardiac cycle and calculating a plurality of classification characteristic values corresponding to the object to be classified, the classification characteristic values at least comprising: left ventricular end-diastolic volume, right ventricular end-diastolic volume, myocardial end-diastolic volume, left ventricular end-systolic volume, right ventricular end-systolic volume, myocardial end-systolic volume, left ventricular ejection fraction, right ventricular ejection fraction, myocardial ejection fraction, end-diastolic left ventricular posterior wall thickness, and end-systolic left ventricular posterior wall thickness; the classification characteristic values are then input into the heart pathology classification model of the magnetic resonance imaging-based heart pathology classification model training method in steps S301 to S305, and the heart pathology classification result is obtained by calculation.
In another aspect, the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the steps of the method are implemented.
The invention provides a cardiac MRI ventricular structure segmentation and auxiliary diagnosis method, which comprises the following steps 1 to 5:
1. Image preprocessing: cardiac MRI image data is acquired and converted into normalized data with a pixel gray-scale mean of 0 and a variance of 1.
2. Region-of-interest detection: a standard deviation filter is applied to find regions whose pixel intensity varies strongly over time in the cardiac-cycle image sequence, and Canny edge detection and the Hough transform are then used to locate and crop a region of interest containing the heart (a sketch of this step follows this list).
3. Data enhancement: the training data set is artificially augmented using data enhancement techniques.
4. Heart segmentation: a deep learning network is built on a training set containing the segmentation gold standard, a convolutional network model with optimal weight parameters is obtained through stochastic gradient descent training, and an image containing the heart region of interest is input into the model to obtain the segmentation results for the left ventricle, the right ventricle and the myocardium. Training of the segmentation model can refer to steps S101 to S105.
5. Pathology classification: features are calculated and extracted from the segmentation result, and a random forest classifier is used to classify the heart disease. Training of the random forest classifier can refer to steps S301 to S305.
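For step 2 above, the following is an illustrative sketch only: the thresholds, radius range and crop size are assumptions not given in this description (only the Gaussian σ = 3 is stated later for the Canny step). OpenCV's HOUGH_GRADIENT applies Canny edge detection internally, so the sketch reproduces the Canny-plus-circular-Hough pipeline described in the text.

```python
import numpy as np
import cv2

def detect_roi(cine: np.ndarray, crop: int = 128) -> np.ndarray:
    """cine: (T, H, W) short-axis slice sequence over one cardiac cycle.
    Returns a crop x crop window centred on the most likely LV circle."""
    # Temporal standard deviation highlights pixels whose intensity varies
    # strongly over the cycle (the beating heart) and suppresses static background.
    std_map = cine.std(axis=0)
    std_img = cv2.normalize(std_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    std_img = cv2.GaussianBlur(std_img, (0, 0), sigmaX=3)   # Gaussian smoothing, sigma = 3

    # HOUGH_GRADIENT runs Canny internally (param1 is the upper Canny threshold),
    # i.e. Canny edge detection followed by a circular Hough transform.
    circles = cv2.HoughCircles(std_img, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=150, param2=20, minRadius=15, maxRadius=60)
    if circles is None:
        cy, cx = std_img.shape[0] // 2, std_img.shape[1] // 2  # fall back to image centre
    else:
        cx, cy = int(circles[0, 0, 0]), int(circles[0, 0, 1])
    half = crop // 2
    y0 = min(max(cy - half, 0), std_img.shape[0] - crop)
    x0 = min(max(cx - half, 0), std_img.shape[1] - crop)
    return cine[:, y0:y0 + crop, x0:x0 + crop]

cine = (np.random.rand(25, 256, 256) * 255).astype(np.float32)
print(detect_roi(cine).shape)  # (25, 128, 128)
```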
In another aspect, the present invention further provides a cardiac MRI ventricular structure segmentation and diagnosis assisting system, including:
an input module configured to input a cardiac MRI to be segmented.
And the preprocessing module is configured to preprocess the input cardiac nuclear magnetic resonance image MRI to be segmented and detect the region of interest.
And the segmentation module is configured to call the trained neural network model, segment the preprocessed cardiac MRI detected in the region of interest, complete ventricular structure segmentation, and output left ventricle, right ventricle and myocardial segmentation results. The steps S101 to S105 can be referred to for training the segmentation model.
A secondary diagnosis module configured to compute and extract features based on the segmentation results, enabling classification of the cardiac disease using a random forest classifier. The training of the random forest classifier can refer to steps S301 to S305.
The invention realizes end-to-end cardiac MRI segmentation by constructing a deep learning network. The method outputs objective, high-precision segmentation results and contributes strongly to artificial-intelligence precision medicine. It can segment the left ventricle, the myocardium and the right ventricle of MRI simultaneously, which facilitates subsequent clinical diagnosis and treatment. Using a deep learning network achieves end-to-end segmentation with more accurate pixel localization and higher segmentation precision, obtains good training results even with a limited amount of data, and greatly reduces the number of parameters and the amount of computation. For soft tissue such as the heart, MRI is superior to other imaging modalities. The fully automatic cardiac MRI segmentation method therefore enables doctors to obtain accurate segmentation results of the left ventricle, the right ventricle and the myocardium in a short time, which is of great significance for improving diagnostic efficiency and reducing the misdiagnosis rate. Meanwhile, the pathology classification assists the doctor's diagnosis and serves as a reference, further improving diagnostic efficiency and reducing the misdiagnosis rate.
In summary, in the cardiac-MRI-based heart segmentation model and pathology classification model training, heart segmentation and pathology classification method and apparatus provided by the invention, the standard deviation filter suppresses the residual background whose pixel gray scale changes little and highlights the left ventricle, the right ventricle and the myocardium; Canny edge detection and the circular Hough transform then locate the center of the left-ventricular myocardial wall, a rectangular mask is drawn, and the two-dimensional image is cropped based on this mask to serve as the input for training the preset neural network model. This greatly suppresses background interference, promotes fast convergence of neural network training, improves training speed and efficiency, and improves the recognition effect.
Furthermore, after the two-dimensional images obtained by short-axis slicing of each frame of cardiac magnetic resonance imaging in the cardiac cycle are segmented by the heart segmentation model, the classification feature values are calculated, and a random forest is constructed from the classification feature values and pathology classes of a plurality of samples to obtain the heart pathology classification model, so that heart pathology classification based on cardiac MRI is automated and the efficiency and precision of medical diagnosis are improved.
Those of ordinary skill in the art will appreciate that the various illustrative components, systems, and methods described in connection with the embodiments disclosed herein may be implemented as hardware, software, or combinations of both. Whether this is done in hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this patent describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments in the present invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes may be made to the embodiment of the present invention by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. A magnetic resonance imaging heart segmentation model training method is characterized by comprising the following steps:
obtaining a training sample set, wherein the training sample set at least comprises a plurality of two-dimensional images obtained by performing short-axis slicing on cardiac nuclear magnetic resonance imaging three-dimensional data of a plurality of samples in a cardiac cycle frame by frame, and segmentation labels related to a left ventricle, a right ventricle, a myocardium and a residual background on each two-dimensional image;
standardizing the two-dimensional images to obtain standard graphs, filtering each frame of standard graphs corresponding to slices at the same position in a cardiac cycle by adopting a standard deviation filter, and zeroing the pixel gray level of those pixel points in each standard graph corresponding to the slices at that position whose standard deviation of pixel gray level change along the cardiac cycle is lower than a set value, so as to obtain filter graphs corresponding to the two-dimensional images;
Canny edge detection is carried out on each filter image, and the center position of the left ventricular myocardial wall and the maximum likelihood circular ring of the left ventricular myocardial wall are obtained through circular Hough transform;
drawing a rectangular mask for the maximum likelihood circular ring corresponding to each two-dimensional image and cutting the corresponding two-dimensional image into a set size by taking the rectangular mask as a center;
and taking each two-dimensional image which is cut in the training sample set as input, taking the segmentation labels of the left ventricle, the right ventricle, the myocardium and the residual background which correspond to each two-dimensional image as real values, and training a preset neural network model to obtain a heart segmentation model.
2. The method of claim 1, wherein before normalizing the two-dimensional image to obtain a normalized graph, the method further comprises:
performing size normalization processing on the two-dimensional image, and filling the two-dimensional image to 256 × 256 by using the minimum pixel value of the two-dimensional image when the size of the two-dimensional image is less than 256 × 256; in the case where the two-dimensional image size is larger than 256 × 256, the two-dimensional image is clipped to 256 × 256.
3. The method of claim 2, wherein the set size is 128 x 128.
4. The method of claim 1, wherein after the drawing a rectangular mask for the maximum likelihood circle corresponding to each two-dimensional image and cutting the corresponding two-dimensional image to a set size with the rectangular mask as a center, the method further comprises:
and performing data enhancement operation on each cut two-dimensional graph, wherein the data enhancement operation comprises-5-degree random rotation angle operation, -5 mm random translation operation, -0.2 random scaling operation, Gaussian noise operation of adding zero mean value and standard deviation 0.01 and/or elastic deformation operation by using B-spline interpolation.
5. The method of claim 1, wherein in the Canny edge detection the image is smoothed by a Gaussian filter whose two-dimensional Gaussian distribution function has a σ value of 3.
6. The method for training the magnetic resonance imaging heart segmentation model according to claim 1, wherein the preset neural network model comprises:
an input layer, an Inception layer, a first channel cascade layer, a first dense connection block, a second channel cascade layer, a first down-sampling transition layer, a second dense connection block, a third channel cascade layer, a second down-sampling transition layer, a third dense connection block, a fourth channel cascade layer, a third down-sampling transition layer, a fourth dense connection block, a first element level addition layer, a first up-sampling transition layer, a second element level addition layer, a fifth dense connection block, a third element level addition layer, a second up-sampling transition layer, a fourth element level addition layer, a sixth dense connection block, a fifth element level addition layer, a third up-sampling transition layer, a sixth element level addition layer, a seventh dense connection block, a seventh element level addition layer, a 1 × 1 convolution and softmax layer, an Argmax layer and an output layer which are connected in sequence;
the first channel cascade layer cascades the 3 feature maps output by the Inception layer;
the second channel cascade layer cascades the characteristic diagrams output by the first channel cascade layer and the first dense connection block;
the third channel cascade layer cascades the characteristic graphs output by the first down-sampling transition layer and the second dense connection block;
the fourth channel cascade layer cascades the feature maps output by the second downsampling transition layer and the third dense connection block;
the first element-level addition layer performs element-level addition on the feature map output by the third downsampling transition layer through the first skip connection layer and the feature map output by the fourth layer dense connection block;
the second element-level addition layer performs element-level addition on the feature map output by the fourth channel cascade layer through the second jump connection layer and the feature map output by the first up-sampling transition layer;
the third element-level addition layer performs element-level addition on the feature map output by the second element-level addition layer through a third jump connection layer and the feature map output by the fifth dense connection block;
the fourth element-level addition layer performs element-level addition on the feature map output by the third channel cascade layer through a fourth jump connection layer and the feature map output by the second up-sampling transition layer;
the fifth element-level addition layer performs element-level addition on the feature map output by the fourth element-level addition layer through a fifth skip connection layer and the feature map output by the sixth dense connection block;
the sixth element-level addition layer performs element-level addition on the feature map output by the second channel cascade layer through a sixth jump connection layer and the feature map output by the third up-sampling transition layer;
the seventh element-level addition layer performs element-level addition on the feature map output by the sixth element-level addition layer through a seventh skip connection layer and the feature map output by the seventh dense connection block;
wherein the Inception layer performs convolution operations of three scales, namely 3 × 3, 5 × 5 and 7 × 7; the first Dense connection Block, the second Dense connection Block, the third Dense connection Block, the fifth Dense connection Block, the sixth Dense connection Block and the seventh Dense connection Block adopt a Dense Block network structure, and each layer of dense connection is composed of batch normalization (BN), a ReLU activation function, a 3 × 3 convolution and Dropout;
the first, second, and third downsampled transition layers are comprised of batch normalized BN, ReLU activation function, 3 × 3 convolution, Dropout, and 2 × 2 maximal pooling;
the first upsampling transition layer, the second upsampling transition layer, and the third upsampling transition layer are convolved with a 3 × 3 transpose with a step size of 2;
the fourth layer of dense connecting blocks is a bottleneck layer;
the first, second, third, fourth, fifth, sixth and seventh hopping connection layers consist of BN, ReLU activation function, 1 × 1 convolution and Dropout;
the preset neural network model adopts a weighted cross entropy loss function; the weighted cross entropy loss L_WCE at each voxel is calculated by the following formula:

L_WCE = −Σ_{x_i∈X} w_map(x_i) · log p(t_i | x_i; W)

where W = (w_1, w_2, …, w_l) is the set of learnable weights, w_l is the weight matrix corresponding to the l-th layer of the deep learning network, X is a training sample, t_i is the target class label corresponding to voxel x_i ∈ X, w_map(x_i) is the weight estimated at each voxel x_i, and p(t_i | x_i; W) is the prediction of the final output layer of the network for voxel x_i.
7. A magnetic resonance imaging cardiac segmentation method, comprising:
acquiring a two-dimensional image of cardiac nuclear magnetic resonance imaging to be segmented;
inputting the two-dimensional image into a heart segmentation model obtained by the magnetic resonance imaging heart segmentation model training method according to any one of claims 1 to 6, and calculating to obtain a target heart segmentation image.
8. A heart pathology classification model training method based on magnetic resonance imaging is characterized by comprising the following steps:
acquiring multi-frame cardiac magnetic resonance imaging three-dimensional data of a plurality of samples in a cardiac cycle and corresponding pathological classification thereof, wherein the pathological classification at least comprises the following steps: normal, old myocardial infarction, right ventricular abnormalities, dilated heart disease, and hypertrophic heart disease;
slicing the multi-frame cardiac magnetic resonance imaging three-dimensional data of each sample to obtain a plurality of two-dimensional images, inputting the two-dimensional images into a cardiac segmentation model obtained by the magnetic resonance imaging cardiac segmentation model training method according to any one of claims 1 to 6, and obtaining a cardiac segmentation map of each slice corresponding to the cardiac magnetic resonance imaging three-dimensional data of each sample;
calculating a plurality of classification characteristic values corresponding to each sample based on all cardiac segmentation maps corresponding to each position slice of each sample in one cardiac cycle, wherein the classification characteristic values at least comprise: left ventricular end-diastolic volume, right ventricular end-diastolic volume, myocardial end-diastolic volume, left ventricular end-systolic volume, right ventricular end-systolic volume, myocardial end-systolic volume, left ventricular ejection fraction, right ventricular ejection fraction, myocardial ejection fraction, end-diastolic left ventricular posterior wall thickness, and end-systolic left ventricular posterior wall thickness;
obtaining the classification characteristic value and the pathological classification corresponding to each sample, and establishing a sample training set;
and constructing a random forest based on the sample training set to obtain a heart pathology classification model.
9. The magnetic resonance imaging-based heart pathology classification model training method of claim 8, wherein the value of the left ventricular end-diastolic volume is the sum of the left-ventricle pixel points in all cardiac segmentation maps of the sample in the frame corresponding to end diastole of the cardiac cycle;
the value of the volume of the right ventricle at the end diastole is the sum of the pixel points of the right ventricle in all the heart segmentation maps in the corresponding frames of the sample at the end diastole of the cardiac cycle;
the value of the myocardial end-diastolic volume is the sum of the pixel points of the myocardium in all heart segmentation maps in the corresponding frame of the end diastole of the cardiac cycle of the sample;
the value of the left ventricle end-systolic volume is the sum of the pixels of the left ventricle in all the heart segmentation maps in the corresponding frame of the sample at the end systole of the cardiac cycle;
the value of the volume of the right ventricle at the end systole is the sum of the pixel points of the right ventricle in all the heart segmentation maps in the corresponding frame of the sample at the end systole of the cardiac cycle;
the value of the myocardial end-systolic volume is the sum of the pixel points of the myocardium in all heart segmentation maps in the frame corresponding to the end systole of the cardiac cycle of the sample;
the left ventricular ejection fraction is the ratio of the difference between the left ventricular end-diastolic volume and the left ventricular end-systolic volume to the left ventricular end-diastolic volume;
the right ventricular ejection fraction is the ratio of the difference between the right ventricular end-diastolic volume and the right ventricular end-systolic volume to the right ventricular end-diastolic volume;
the myocardial ejection fraction is a ratio of a difference between the myocardial end-diastolic volume and the myocardial end-systolic volume to the myocardial end-diastolic volume.
10. A method for classifying cardiac pathologies based on magnetic resonance imaging, comprising:
acquiring multi-frame cardiac nuclear magnetic resonance imaging three-dimensional data of an object to be classified in a cardiac cycle and performing short-axis slicing on each frame of cardiac nuclear magnetic resonance imaging three-dimensional data to obtain a plurality of two-dimensional images;
inputting each two-dimensional image into a heart segmentation model obtained by the magnetic resonance imaging heart segmentation model training method according to any one of claims 1 to 6 for operation to obtain a heart segmentation picture corresponding to each two-dimensional image;
calculating a plurality of classification characteristic values corresponding to the object to be classified according to each heart segmentation picture, wherein the classification characteristic values at least comprise: left ventricular end-diastolic volume, right ventricular end-diastolic volume, myocardial end-diastolic volume, left ventricular end-systolic volume, right ventricular end-systolic volume, myocardial end-systolic volume, left ventricular ejection fraction, right ventricular ejection fraction, myocardial ejection fraction, end-diastolic left ventricular posterior wall thickness, and end-systolic left ventricular posterior wall thickness;
inputting the classification characteristic value into the heart pathology classification model of the magnetic resonance imaging-based heart pathology classification model training method according to any one of claims 8 to 9, and obtaining a corresponding pathology classification result of the three-dimensional cardiac magnetic resonance imaging data to be classified.
11. A cardiac segmentation and pathology classification apparatus for magnetic resonance imaging, comprising:
the input module is used for acquiring multi-frame cardiac nuclear magnetic resonance imaging three-dimensional data of an object to be classified in one cardiac cycle;
a preprocessing module, used for performing short-axis slicing on each frame of cardiac magnetic resonance imaging three-dimensional data in a cardiac cycle to obtain a plurality of two-dimensional images;
a segmentation module, configured to acquire each standard map and input it into the heart segmentation model obtained by the magnetic resonance imaging heart segmentation model training method according to any one of claims 1 to 6 for operation, so as to obtain a target heart segmentation image corresponding to each standard map;
a pathology classification module, configured to acquire each heart segmentation image corresponding to each frame of cardiac magnetic resonance imaging three-dimensional data in a cardiac cycle and calculate a plurality of classification characteristic values corresponding to the object to be classified, the classification characteristic values at least comprising: left ventricular end-diastolic volume, right ventricular end-diastolic volume, myocardial end-diastolic volume, left ventricular end-systolic volume, right ventricular end-systolic volume, myocardial end-systolic volume, left ventricular ejection fraction, right ventricular ejection fraction, myocardial ejection fraction, end-diastolic left ventricular posterior wall thickness, and end-systolic left ventricular posterior wall thickness; and to input each classification characteristic value into the heart pathology classification model in the magnetic resonance imaging-based heart pathology classification model training method according to any one of claims 8 to 9 for operation to obtain a heart pathology classification result.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 10 are implemented when the processor executes the program.
CN202110391121.3A 2021-04-12 2021-04-12 Heart segmentation model and pathology classification model training, heart segmentation and pathology classification method and device based on cardiac MRI Pending CN113012173A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110391121.3A CN113012173A (en) 2021-04-12 2021-04-12 Heart segmentation model and pathology classification model training, heart segmentation and pathology classification method and device based on cardiac MRI

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110391121.3A CN113012173A (en) 2021-04-12 2021-04-12 Heart segmentation model and pathology classification model training, heart segmentation and pathology classification method and device based on cardiac MRI

Publications (1)

Publication Number Publication Date
CN113012173A true CN113012173A (en) 2021-06-22

Family

ID=76388444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110391121.3A Pending CN113012173A (en) 2021-04-12 2021-04-12 Heart segmentation model and pathology classification model training, heart segmentation and pathology classification method and device based on cardiac MRI

Country Status (1)

Country Link
CN (1) CN113012173A (en)


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071286A (en) * 2021-10-29 2023-05-05 重庆药羚科技有限公司 Method and system for monitoring and identifying end point in liquid separation process, storage medium and terminal
CN114332547A (en) * 2022-03-17 2022-04-12 浙江太美医疗科技股份有限公司 Medical object classification method and apparatus, electronic device, and storage medium
CN114862799A (en) * 2022-05-10 2022-08-05 同心医联科技(北京)有限公司 Full-automatic brain volume segmentation algorithm for FLAIR-MRI sequence
CN115349851A (en) * 2022-08-29 2022-11-18 江苏师范大学 Cardiac function diagnosis method based on atrioventricular plane pump model
CN115553818A (en) * 2022-12-05 2023-01-03 湖南省人民医院(湖南师范大学附属第一医院) Myocardial biopsy system based on fusion positioning
WO2024119372A1 (en) * 2022-12-06 2024-06-13 深圳先进技术研究院 Myocardial detection method combining region-of-interest distance metric learning and transfer learning
CN115965621A (en) * 2023-02-15 2023-04-14 中国医学科学院阜外医院 Main heart adverse event prediction device based on magnetic resonance imaging
CN115965621B (en) * 2023-02-15 2023-06-20 中国医学科学院阜外医院 Magnetic resonance imaging-based prediction device for main heart adverse events
CN117788472A (en) * 2024-02-27 2024-03-29 南京航空航天大学 Method for judging corrosion degree of rivet on surface of aircraft skin based on DBSCAN algorithm
CN117788472B (en) * 2024-02-27 2024-05-14 南京航空航天大学 Method for judging corrosion degree of rivet on surface of aircraft skin based on DBSCAN algorithm
CN117893611A (en) * 2024-03-14 2024-04-16 浙江华诺康科技有限公司 Image sensor dirt detection method and device and computer equipment
CN117893611B (en) * 2024-03-14 2024-06-11 浙江华诺康科技有限公司 Image sensor dirt detection method and device and computer equipment

Similar Documents

Publication Publication Date Title
US11813047B2 (en) Automatic quantification of cardiac MRI for hypertrophic cardiomyopathy
Moradi et al. MFP-Unet: A novel deep learning based approach for left ventricle segmentation in echocardiography
CN113012173A (en) Heart segmentation model and pathology classification model training, heart segmentation and pathology classification method and device based on cardiac MRI
US9968257B1 (en) Volumetric quantification of cardiovascular structures from medical imaging
US11379985B2 (en) System and computer-implemented method for segmenting an image
Saikumar et al. A novel implementation heart diagnosis system based on random forest machine learning technique.
US20230038364A1 (en) Method and system for automatically detecting anatomical structures in a medical image
Ahirwar Study of techniques used for medical image segmentation and computation of statistical test for region classification of brain MRI
US20230005140A1 (en) Automated detection of tumors based on image processing
US20090028403A1 (en) System and Method of Automatic Prioritization and Analysis of Medical Images
Nurmaini et al. Accurate detection of septal defects with fetal ultrasonography images using deep learning-based multiclass instance segmentation
CN116681958B (en) Fetal lung ultrasonic image maturity prediction method based on machine learning
US20230394670A1 (en) Anatomically-informed deep learning on contrast-enhanced cardiac mri for scar segmentation and clinical feature extraction
CN112750531A (en) Automatic inspection system, method, equipment and medium for traditional Chinese medicine
de Albuquerque et al. Fast fully automatic heart fat segmentation in computed tomography datasets
He et al. Automated segmentation and area estimation of neural foramina with boundary regression model
Masood et al. Automated segmentation of skin lesions: Modified Fuzzy C mean thresholding based level set method
CN112884759B (en) Method and related device for detecting metastasis state of axillary lymph nodes of breast cancer
Alam et al. Ejection Fraction estimation using deep semantic segmentation neural network
Masood et al. Level set initialization based on modified fuzzy c means thresholding for automated segmentation of skin lesions
Agarwala et al. Automated segmentation of lung field in HRCT images using active shape model
CN112766333B (en) Medical image processing model training method, medical image processing method and device
US11893735B2 (en) Similarity determination apparatus, similarity determination method, and similarity determination program
CN115439423A (en) CT image-based identification method, device, equipment and storage medium
CN112766332A (en) Medical image detection model training method, medical image detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination