CN110969633A - Automatic optimal phase recognition method for cardiac CT imaging - Google Patents
- Publication number
- CN110969633A CN110969633A CN201911191975.6A CN201911191975A CN110969633A CN 110969633 A CN110969633 A CN 110969633A CN 201911191975 A CN201911191975 A CN 201911191975A CN 110969633 A CN110969633 A CN 110969633A
- Authority
- CN
- China
- Prior art keywords
- network
- phase
- cardiac
- images
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30048—Heart; Cardiac
Abstract
The invention discloses an automatic optimal phase recognition method for cardiac CT imaging, comprising the following steps: collecting projection data of cardiac scans of M patients and reconstructing m CT images at different phases for each patient, each phase corresponding to a label value; establishing a phase estimation network based on a deep learning network model; inputting the training-set and test-set data into the network model and training it to obtain the network parameters of the phase estimation network; and randomly selecting CT images of the same patient at different phases from the data set as test images and determining the phase corresponding to the optimal label value from the label value of each CT image. The invention builds a phase network model with machine learning and neural network techniques, can quickly find the optimal CT scan phase, reduces motion artifacts and improves image quality.
Description
Technical Field
The invention relates to the technical field of medical imaging, and in particular to an automatic optimal phase recognition method for cardiac CT imaging based on machine learning and neural network technology.
Background
CT scanning of the human heart and coronary arteries has long been an important means of detecting coronary artery stenosis, coronary artery calcification (calcium scoring), coronary malformations, myocardial ischemia and the like. Because the heart beats rapidly, scanning the heart and its coronary arteries demands high temporal resolution from the imaging system.
Real-time electrocardiographic (ECG) gating alleviates this problem to some extent: because motion during diastole is relatively gentle, data are acquired during that portion of the cardiac cycle (as shown in Figure 1), yielding relatively stable three-dimensional reconstruction data and greatly improving the quality of cardiac CT imaging. Since the heart is nearly static for only a short time in each cardiac cycle, the exposure interval is set per cycle of the ECG signal using two parameters: the exposure length and the central phase of the exposure. Both are usually chosen empirically; the exposure length is typically the minimum reconstruction interval required for the image plus a redundancy time, and the central phase of the exposure is typically 70%. After scanning, the exposure data are reconstructed at the default central phase to obtain the required CT image.
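As a sketch only, the two exposure parameters described above can be mapped to a time window within one cardiac cycle. The heart rate, minimum reconstruction interval and redundancy time below are illustrative assumptions, not values from the patent:

```python
# Hedged sketch: turning the two prospective-gating parameters (exposure
# length and central phase) into an exposure window within one R-R interval.
# All numeric defaults are illustrative assumptions.

def exposure_window(rr_interval_s, center_phase=0.70,
                    min_recon_interval_s=0.135, redundancy_s=0.05):
    """Return (start, end) of the exposure window in seconds after the R peak."""
    length = min_recon_interval_s + redundancy_s   # exposure length
    center = center_phase * rr_interval_s          # central phase of the exposure
    return center - length / 2.0, center + length / 2.0

# Example: 60 bpm gives an R-R interval of 1.0 s
start, end = exposure_window(1.0)
```

With these assumed numbers the window is 0.185 s long and centered at 70% of the cycle, matching the empirical choices the text describes.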
Currently, a cardiac scan is usually performed with a pre-selected phase, and the cardiac image is then reconstructed from the scan data near that phase. However, this default phase does not necessarily give the best image quality: if the physician finds severe motion artifacts after reconstruction, the reconstruction phase is changed, guided by the appearance of the artifacts, to reconstruct images from other scan intervals, repeating until a reasonably suitable reconstruction phase is found for the final clinical diagnostic image.
Because motion artifacts are complex, conventional methods have no good way to quantify their severity, and hence no way to identify the optimal phase automatically. Moreover, the traditional manual approach must traverse many phases, which costs considerable reconstruction and processing time and is difficult to speed up.
Disclosure of Invention
Technical purpose: in view of the above technical problems, the invention provides an automatic optimal phase recognition method for cardiac CT imaging which, based on machine learning and neural network technology, constructs a phase network model that can quickly find the optimal phase, reduce the influence of motion artifacts and improve image quality.
Technical scheme: in order to achieve the above technical purpose, the invention adopts the following technical scheme:
an automatic optimal phase identification method for cardiac CT imaging is characterized by comprising the following steps:
A1, data set preparation: collecting projection data of cardiac scans of M patients and reconstructing m CT images at different phases for each patient, each phase corresponding to a label value representing image quality; randomly dividing the obtained CT image data into a training set, a test set and a validation set, wherein M is not less than 30 and m is not less than 20;
A2, designing a neural network: constructing a network model as the phase estimation network, whose input is the three-dimensional volume data reconstructed at N different phase points and whose output is a vector representing the image quality corresponding to each phase;
A3, network training: inputting the training-set and test-set data into the network model and training to obtain the network parameters of the phase estimation network;
A4, optimal phase estimation: selecting CT images of N phases of the same patient from the data set as test images, inputting them into the phase estimation network to obtain the label values of the N CT images and the N phases, and determining the phase corresponding to the optimal label value from the label value of each CT image.
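The selection in step A4 can be sketched in a few lines: given the label values the phase estimation network predicts for the N test phases (mocked here with illustrative numbers), pick the phase whose label is closest to the optimum. The convention that label 0 marks the best phase follows the embodiment below; nothing here is the patented network itself.

```python
import numpy as np

def best_phase(phases, predicted_labels, optimal_label=0.0):
    """Return the phase whose predicted label is closest to the optimal label."""
    phases = np.asarray(phases, dtype=float)
    labels = np.asarray(predicted_labels, dtype=float)
    return phases[np.argmin(np.abs(labels - optimal_label))]

phases = [0.55, 0.65, 0.75, 0.85, 0.95]
labels = [-0.21, -0.09, 0.02, 0.11, 0.24]   # mocked network outputs
print(best_phase(phases, labels))            # -> 0.75
```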
Preferably, the phase estimation network in step A2 adopts a fully connected network or a ConvLSTM network structure, and the network loss function takes the mean square error (MSE) as its objective function.
Preferably, in step A4, the phase corresponding to the optimal label value is searched for according to the label value of each CT image; or a straight line or curve, expressed as y = fun(x), is fitted through the label values, where x is the measured label value and y is the phase value, and the optimal central phase is obtained from the fitting result; or a K-point phase–image quality vector is output directly and the optimal phase is obtained from the optimal label value.
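The fitting variant can be sketched with an ordinary least-squares fit: fit y = fun(x) with x the measured label values and y the phases, then read off the phase at the optimal label (0 in the embodiment). Using a first-degree polynomial is an assumption made here; the patent allows any straight line or curve.

```python
import numpy as np

def fit_optimal_phase(labels, phases, optimal_label=0.0, degree=1):
    """Fit y = fun(x) (phase as a function of label) and evaluate at the optimal label."""
    coeffs = np.polyfit(labels, phases, degree)    # least-squares fit of y = fun(x)
    return float(np.polyval(coeffs, optimal_label))

labels = [-0.20, -0.10, 0.00, 0.10, 0.20]   # measured label values (mocked)
phases = [0.55, 0.65, 0.75, 0.85, 0.95]     # corresponding reconstruction phases
opt = fit_optimal_phase(labels, phases)
```

On this perfectly linear mock data the fit recovers 0.75 as the optimal central phase.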
Preferably, in step A2, a heart segmentation network is added before the phase estimation network is established.
Preferably, the heart segmentation network in step A2 is established as follows:
B1, constructing the data set of the heart segmentation network: collecting projection data of cardiac scans of K patients, reconstructing k CT images at different phases for each patient, and marking the pericardial region in the obtained CT images, wherein K is not less than 30 and k is not less than 20;
B2, designing a neural network: constructing a U-Net deep learning network structure to automatically segment the pericardium;
B3, network training: inputting the training-set and test-set data into the network and training to obtain the network parameters;
B4, pericardial segmentation: inputting the reconstructed CT image into the network to obtain a segmented CT image, wherein the pericardium-segmented images correspond to their respective phase labels.
Preferably, in step A2, an attention network based on the heart region is added before the phase estimation network is established.
Preferably, the heart region-based attention network in step A2 is established as follows:
C1, constructing the data set of the heart region-based attention network: collecting projection data of cardiac scans of K patients, reconstructing k CT images at different phases for each patient, marking the pericardial region in the obtained CT images, and applying Gaussian blur to the pericardial-region masks, wherein K is not less than 30 and k is not less than 20;
C2, designing a neural network: constructing a U-Net deep learning network structure and locating the position of the pericardium by heatmap regression;
C3, network training: inputting the training-set and test-set data into the network and training to obtain the network parameters;
C4, cardiac localization: inputting the reconstructed CT image into the network to obtain the approximate position of the heart, and cropping a fixed-size region around the center of the predicted bounding box to obtain the heart region.
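Step C4's fixed-size crop around the predicted bounding-box center can be sketched as follows; the crop size is an illustrative assumption, and the window is clamped so it always stays inside the image:

```python
import numpy as np

def crop_around_center(image, center_rc, size=256):
    """Crop a size x size window centered (as far as possible) on center_rc (row, col)."""
    h, w = image.shape
    r = int(np.clip(center_rc[0] - size // 2, 0, max(h - size, 0)))
    c = int(np.clip(center_rc[1] - size // 2, 0, max(w - size, 0)))
    return image[r:r + size, c:c + size]

img = np.zeros((512, 512))
patch = crop_around_center(img, (40, 500))   # predicted center near the image border
```

Even for a center near the border, the clamping keeps the crop a full 256 × 256 region, so the downstream phase network always sees a fixed input size.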
Preferably, in step A3, the constructed network model is trained by optimizing the loss function with a gradient descent method.
Preferably, the projection data of the cardiac scan are acquired in an axial, step-and-shoot or helical scan mode synchronized to the heartbeat phase.
Advantageous effects: the invention has the following advantages:
(1) the automatic phase recognition method for cardiac CT imaging can estimate the optimal phase by reconstructing only a few phases and training on a certain amount of cardiac data with artificial intelligence techniques, realizing reconstruction at the optimal phase and reducing motion artifacts;
(2) the method improves cardiac image quality by algorithmic processing on top of the existing hardware; compared with the prior art it is more widely applicable, can address problems such as motion artifacts that hardware alone cannot solve, and can effectively improve image quality;
(3) compared with the traditional method, this method is more efficient: there is no need to traverse all phase images and screen them manually, since the optimal phase can be estimated automatically from images at only a few phases. During training, no complex segmentation algorithm is needed to manually segment the vessels of the heart; only the overall image quality needs to be scored, so the labeling workload and the computational cost are much lower.
Drawings
FIG. 1 is a schematic diagram of a real-time electrocardiogram based cardiac scan;
FIG. 2 is a schematic diagram of the architecture of a VGG + LSTM network;
FIG. 3 is a selected cardiac CT scan reconstructed image of several different phases;
FIG. 4 is a diagram illustrating label values corresponding to different phase images according to an embodiment;
fig. 5 is a schematic structural diagram of automatic phase identification in CT cardiac imaging according to the second embodiment.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
The invention provides an automatic phase recognition method for cardiac CT imaging, which comprises the following steps:
data set preparation: collecting projection data of heart scanning of M patients, reconstructing M CT images with different phases for each patient, and randomly dividing the obtained CT image data into a training set, a testing set and a verification set;
designing a neural network: each phase corresponds to a label value, and a phase estimation network is established;
network training: optimizing a loss function training network by a gradient descent method, inputting training set and test set data into a network model, and training to obtain network parameters;
phase estimation: CT images of the same patient with different phases are randomly selected from the data set to serve as test images, and the phase corresponding to the optimal label value is determined according to the label value of each CT image.
The first embodiment is as follows:
the invention discloses an automatic phase identification method for cardiac CT imaging, which carries out optimal phase estimation on a whole reconstructed image and comprises the following specific steps:
Step A1, data preparation: projection data of cardiac scans of 30 patients are collected, with phase ranging between 0 and 1, and CT images at different phases are reconstructed: 20 phases per patient at a step of 0.05. Figure 3 shows several reconstructed images at different phases; the five images are CT scans at phases 0.55, 0.65, 0.75, 0.85 and 0.95 respectively. Each phase image carries its own phase label value. Taking a linear label distribution as an example (Fig. 4), the optimal phase is observed at 0.75, so the label value for phase 0.75 is set to 0 and the label values of the other phases, centered on 0.75, are distributed between -0.5 and 0.5. The data are randomly divided into a training set, a test set and a validation set.
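One plausible concrete form of this linear label assignment is sketched below. Treating the cardiac phase as cyclic is an assumption made here so that 20 phases at a 0.05 step get labels covering exactly the stated -0.5 to 0.5 range, with label 0 at the observed optimal phase 0.75:

```python
import numpy as np

def phase_labels(phases, optimal_phase=0.75):
    """Linear, cyclic label assignment: optimal phase -> 0, others in [-0.5, 0.5)."""
    phases = np.asarray(phases, dtype=float)
    return ((phases - optimal_phase + 0.5) % 1.0) - 0.5

phases = np.arange(0.05, 1.0001, 0.05)   # 20 phases at a 0.05 step
labels = phase_labels(phases)
```

With this convention the label is simply the (wrapped) phase deviation from the optimum, which matches the network output defined in step A2 below.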
Step A2, designing a neural network: taking a ConvLSTM-type model as an example, a VGG + LSTM network is used to predict the optimal central phase; adding the LSTM to the training network lets it learn long-range dependencies. In this scheme, 5 CT images of the same patient at different phases (e.g. 0.55, 0.65, 0.75, 0.85 and 0.95) are randomly selected as the network input, of size 5 × 1 × 512 × 512. The output is the deviation of each phase from the optimal phase (the deviation from 0.75 serves as the output value, with 0 denoting the optimal central phase), and the network loss function takes the mean square error (MSE) as its objective.
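A much-reduced sketch of this VGG + LSTM design: a small convolutional backbone (standing in for VGG) encodes each of the 5 phase images, an LSTM runs over the resulting sequence, and a linear head predicts one phase-deviation label per image, trained with MSE against the labels of Fig. 4. All layer sizes and the tiny backbone are assumptions, not the patented architecture:

```python
import torch
import torch.nn as nn

class PhaseNet(nn.Module):
    """Sketch: per-phase CNN features + LSTM over the 5-phase sequence."""
    def __init__(self, feat=32):
        super().__init__()
        self.backbone = nn.Sequential(            # stands in for VGG
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, feat))
        self.lstm = nn.LSTM(feat, feat, batch_first=True)
        self.head = nn.Linear(feat, 1)

    def forward(self, x):                          # x: (batch, 5, 1, H, W)
        b, n = x.shape[:2]
        f = self.backbone(x.flatten(0, 1)).view(b, n, -1)
        seq, _ = self.lstm(f)                      # sequence features
        return self.head(seq).squeeze(-1)          # (batch, 5) phase deviations

net = PhaseNet()
pred = net(torch.zeros(2, 5, 1, 64, 64))
loss = nn.MSELoss()(pred, torch.zeros(2, 5))       # MSE objective of step A2
```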
Step A3, network training: and optimizing a loss function training network by a gradient descent method, inputting the data of the training set and the test set into the network, and training to obtain network parameters.
Step A4, phase estimation: 5 CT images of the same patient at different phases are randomly selected from the data set as test images, and 20 label values are predicted to search for the phase corresponding to the optimal label; or a straight line or curve y = fun(x) is fitted through the label values, where x is the measured label value and y is the phase value. The optimal central phase, the one corresponding to label 0, is finally obtained from the fitting result, and the operator is prompted to make the corresponding change.
From the above steps, this scheme has two implementations. In the first, the network input is the three-dimensional volume data reconstructed at N (e.g. N = 5) phase points; the N volumes are each fed into the same phase estimation network, which outputs an image quality index for the current volume. From the N results, the variation of image quality with phase is estimated and the optimal phase point is found. In the second, the N volumes are fed sequentially into a ConvLSTM-based network, which outputs image quality indices and sequence features for these phases; the features are passed to a subsequent network that gives a curve of quality versus phase and estimates the optimal phase point.
Example two:
The second embodiment differs from the first mainly in that a heart segmentation network, or alternatively an attention network based on the heart region, is added before the phase estimation network of step A2 is constructed. The advantage of this structure is that the network attends to the image quality of the heart region and is less easily disturbed by images of other regions, as shown in Fig. 5. The heart segmentation network and the classification network are trained separately. The heart segmentation network is constructed as follows:
step B1, Segmentation CNN data preparation: collecting projection data of 30 cardiac scans of a patient, reconstructing 30000 CT images with different phases (such as 55%, 65%, 75%, 85%, 95%) and manually marking a pericardial region; if an attention network is adopted, the same mask can be used for carrying out Gaussian blur and then training can be carried out.
Step B2, designing a neural network: taking the U-Net deep learning network structure as an example to automatically segment the pericardium;
step B3: network training: optimizing a loss function training network by a gradient descent method, inputting training set and test set data into the network, and training to obtain network parameters;
Step B4, pericardial segmentation: the reconstructed CT image is input into the network to obtain the segmented CT image.
Classification CNN data preparation: the pericardium-segmented images correspond to their respective phase labels, and training and testing are performed by the method described in the first scheme.
This embodiment further considers that, because of respiratory motion and the like, cardiac CT images reconstructed at different phases differ not only in the heart region; other tissue structures are also difficult to keep consistent. Building a heart segmentation network and segmenting the heart region therefore allows the optimal cardiac phase to be judged more accurately. Besides the neural network approach described above, the heart region may also be segmented by conventional methods.
The above are only preferred embodiments of the present invention. It should be noted that those skilled in the art can make various modifications and adaptations without departing from the principles of the invention, and these are also intended to fall within the scope of the invention.
Claims (9)
1. An automatic optimal phase identification method for cardiac CT imaging is characterized by comprising the following steps:
A1, data set preparation: collecting projection data of cardiac scans of M patients and reconstructing m CT images at different phases for each patient, each phase corresponding to a label value representing image quality; randomly dividing the obtained CT image data into a training set, a test set and a validation set, wherein M is not less than 30 and m is not less than 20;
A2, designing a neural network: constructing a network model as the phase estimation network, whose input is the three-dimensional volume data reconstructed at N different phase points and whose output is a vector representing the image quality corresponding to the N phases;
A3, network training: inputting the training-set and test-set data into the network model and training to obtain the network parameters of the phase estimation network;
A4, optimal phase estimation: selecting CT images of N phases of the same patient from the data set as test images, inputting them into the phase estimation network to obtain the label values of the N CT images and the N phases, and determining the phase corresponding to the optimal label value from the label value of each CT image.
2. The automatic optimal phase identification method for cardiac CT imaging according to claim 1, wherein: the phase estimation network in step A2 adopts a convolutional neural network followed by a fully connected network or a ConvLSTM network, and the network loss function takes the mean square error (MSE) as its objective function.
3. The automatic optimal phase identification method for cardiac CT imaging according to claim 1, wherein: in step A4, the phase corresponding to the optimal label value is searched for according to the label value of each CT image; or a straight line or curve, expressed as y = fun(x), is fitted through the label values, where x is the measured label value and y is the phase value, and the optimal central phase is obtained from the fitting result; or a K-point phase–image quality vector is output directly and the optimal phase is obtained from the optimal label value.
4. The automatic optimal phase identification method for cardiac CT imaging according to claim 1, wherein: in step A2, a heart segmentation network is added before the phase estimation network is established.
5. The method of claim 4, wherein the heart segmentation network in step A2 is established as follows:
B1, constructing the data set of the heart segmentation network: collecting projection data of cardiac scans of K patients, reconstructing k CT images at different phases for each patient, and marking the pericardial region in the obtained CT images, wherein K is not less than 30 and k is not less than 20;
B2, designing a neural network: constructing a U-Net deep learning network structure to automatically segment the pericardium;
B3, network training: inputting the training-set and test-set data into the network and training to obtain the network parameters;
B4, pericardial segmentation: inputting the reconstructed CT image into the network to obtain a segmented CT image, wherein the pericardium-segmented images correspond to their respective phase labels.
6. The automatic optimal phase identification method for cardiac CT imaging according to claim 1, wherein: in step A2, a heart region-based attention network is added before the phase estimation network is established.
7. The cardiac CT imaging automatic optimal phase identification method according to claim 6, wherein the heart region-based attention network in step A2 is established as follows:
C1, constructing the data set of the heart region-based attention network: collecting projection data of cardiac scans of K patients, reconstructing k CT images at different phases for each patient, marking the pericardial region in the obtained CT images, and applying Gaussian blur to the pericardial-region masks, wherein K is not less than 30 and k is not less than 20;
C2, designing a neural network: constructing a U-Net deep learning network structure and locating the position of the pericardium by heatmap regression;
C3, network training: inputting the training-set and test-set data into the network and training to obtain the network parameters;
C4, cardiac localization: inputting the reconstructed CT image into the network to obtain the approximate position of the heart, and cropping a fixed-size region around the center of the predicted bounding box to obtain the heart region.
8. The automatic optimal phase identification method for cardiac CT imaging according to claim 1, wherein: in step A3, the constructed network model is trained by optimizing the loss function with a gradient descent method.
9. The automatic optimal phase identification method for cardiac CT imaging according to claim 1, wherein: the projection data of the cardiac scan are acquired in an axial, step-and-shoot or helical scan mode synchronized to the heartbeat phase.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911191975.6A CN110969633B (en) | 2019-11-28 | 2019-11-28 | Automatic optimal phase identification method for cardiac CT imaging |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911191975.6A CN110969633B (en) | 2019-11-28 | 2019-11-28 | Automatic optimal phase identification method for cardiac CT imaging |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110969633A | 2020-04-07 |
CN110969633B | 2024-02-27 |
Family
ID=70032238
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911191975.6A Active CN110969633B (en) | 2019-11-28 | 2019-11-28 | Automatic optimal phase identification method for cardiac CT imaging |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110969633B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103198299A (en) * | 2013-03-27 | 2013-07-10 | 西安电子科技大学 | Face recognition method based on combination of multi-direction dimensions and Gabor phase projection characteristics |
CN108230338A (en) * | 2018-01-11 | 2018-06-29 | 温州大学 | Stereo image segmentation method based on convolutional neural networks |
CN108280814A (en) * | 2018-02-08 | 2018-07-13 | 重庆邮电大学 | Light-field image angular super-resolution reconstruction method based on perceptual loss |
CN109993809A (en) * | 2019-03-18 | 2019-07-09 | 杭州电子科技大学 | Rapid magnetic resonance imaging method based on residual U-net convolutional neural networks |
US20190294878A1 (en) * | 2018-03-23 | 2019-09-26 | NthGen Software Inc. | Method and system for obtaining vehicle target views from a video stream |
CN110309910A (en) * | 2019-07-03 | 2019-10-08 | 清华大学 | Machine-learning-based adaptive optimization microscopic imaging method and device |
CN110455747A (en) * | 2019-07-19 | 2019-11-15 | 浙江师范大学 | Deep-learning-based halo-free white-light phase imaging method and system |
2019-11-28: CN application CN201911191975.6A filed; granted as CN110969633B (status: active)
Non-Patent Citations (1)
Title |
---|
Li Yuan; Xie Xueqian; Zhang Hao; Wang Yi; Li Nianyun; Meng Jie; Zhang Guixiang: "Influencing factors of image quality in prospective ECG-gated single-cardiac-cycle 256-row CT imaging", Chinese Journal of Interventional Imaging and Therapy * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111462020A (en) * | 2020-04-24 | 2020-07-28 | 上海联影医疗科技有限公司 | Method, system, storage medium and device for correcting motion artifact of heart image |
CN111462020B (en) * | 2020-04-24 | 2023-11-14 | 上海联影医疗科技股份有限公司 | Method, system, storage medium and apparatus for motion artifact correction of cardiac images |
CN112288752A (en) * | 2020-10-29 | 2021-01-29 | 中国医学科学院北京协和医院 | Full-automatic coronary calcified focus segmentation method based on chest flat scan CT |
CN113256529A (en) * | 2021-06-09 | 2021-08-13 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, computer equipment and storage medium |
CN117115577A (en) * | 2023-10-23 | 2023-11-24 | 南京安科医疗科技有限公司 | Cardiac CT projection domain optimal phase identification method, equipment and medium |
CN117115577B (en) * | 2023-10-23 | 2023-12-26 | 南京安科医疗科技有限公司 | Cardiac CT projection domain optimal phase identification method, equipment and medium |
Also Published As
Publication number | Publication date |
---|---|
CN110969633B (en) | 2024-02-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110969633B (en) | Automatic optimal phase identification method for cardiac CT imaging | |
US20220028085A1 (en) | Method and system for providing an at least 3-dimensional medical image segmentation of a structure of an internal organ | |
US20190130578A1 (en) | Vascular segmentation using fully convolutional and recurrent neural networks | |
CN109598722B (en) | Image analysis method based on recurrent neural network | |
CN110246137B (en) | Imaging method, imaging device and storage medium | |
EP3624056B1 (en) | Processing image frames of a sequence of cardiac images | |
KR101978317B1 (en) | Ct image database-based cardiac image segmentation method and an apparatus thereof | |
JP2006075601A (en) | Segmentation method of anatomical structure | |
CN111402278B (en) | Segmentation model training method, image labeling method and related devices | |
CN111047607A (en) | Method for automatically segmenting coronary artery | |
CN113192069A (en) | Semantic segmentation method and device for tree structure in three-dimensional tomography image | |
CN101116104B (en) | A method, and a system for segmenting a surface in a multidimensional dataset | |
US11995823B2 (en) | Technique for quantifying a cardiac function from CMR images | |
Lefebvre et al. | Lassnet: A four steps deep neural network for left atrial segmentation and scar quantification | |
Xu et al. | AI-CHD: an AI-based framework for cost-effective surgical telementoring of congenital heart disease | |
US20230252636A1 (en) | Method and system for the automated determination of examination results in an image sequence | |
CN116168099A (en) | Medical image reconstruction method and device and nonvolatile storage medium | |
CN113744215B (en) | Extraction method and device for central line of tree-shaped lumen structure in three-dimensional tomographic image | |
Moukalled | Segmentation of laryngeal high-speed videoendoscopy in temporal domain using paired active contours | |
CN111093506A (en) | Motion compensated heart valve reconstruction | |
KR102538329B1 (en) | A method for correcting a motion in a coronary image using a convolutional neural network | |
Kurochka et al. | An algorithm of segmentation of a human spine X-ray image with the help of Mask R-CNN neural network for the purpose of vertebrae localization | |
EP3667618A1 (en) | Deep partial-angle coronary restoration | |
Saenz-Gamboa et al. | Automatic semantic segmentation of structural elements related to the spinal cord in the lumbar region by using convolutional neural networks | |
CN113223104B (en) | Cardiac MR image interpolation method and system based on causal relationship |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||