CN114565711A - Heart image reconstruction method and system based on deep learning - Google Patents

Heart image reconstruction method and system based on deep learning

Info

Publication number
CN114565711A
Authority
CN
China
Prior art keywords: image, images, scanning, slice, deblurring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111631832.XA
Other languages
Chinese (zh)
Inventor
黄建龙
徐斐然
廖志芳
丁雨寒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University
Priority to CN202111631832.XA
Publication of CN114565711A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06T 5/73
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30048 Heart; Cardiac

Abstract

The invention provides a deep learning based cardiac image reconstruction method and system, comprising the following steps: selecting the hearts of a plurality of subjects for slice scanning imaging, wherein each slice yields three images (a normal image, an image in the AP direction, and an image in the FH direction), and the scanned images, including sharp images and blurred images, are used as the training data set; classifying the images obtained by cardiac scanning with a ResNet model, and then deblurring blurred images of different directions with a plurality of SRN-Deblur submodels; and calculating the direction and magnitude of the blood flow vector in three-dimensional space using the AP-direction and FH-direction images, to measure the deblurring effect on the simulated blurred images. Four-dimensional flow magnetic resonance imaging based on velocity-encoded (VENC) MRI has strong potential in cardiovascular blood flow analysis; deep learning can reconstruct defective images and thereby eliminate motion blur in velocity-encoded MRI.

Description

Heart image reconstruction method and system based on deep learning
Technical Field
The invention relates to the field of image processing and reconstruction, in particular to a heart image reconstruction method and system based on deep learning.
Background
Cardiac magnetic resonance imaging (MRI) can provide high-resolution images of the soft tissue of the heart in a non-invasive manner, yielding anatomical information about the subject's heart. Through cardiac MRI, doctors can obtain the decision information required for diagnosing and treating heart disease and for its pathological analysis. However, due to limitations in the accuracy of imaging devices and difficulties with patient compliance, poor-quality MRI (low resolution, motion blur, and so on) is inevitably produced. Automated reconstruction of cardiac MRI is therefore of great clinical significance.
Disclosure of Invention
The invention provides a deep learning based cardiac image reconstruction method and system, aiming to reconstruct defective images and eliminate motion blur in velocity-encoded (VENC) magnetic resonance imaging through deep learning.
In order to achieve the above object, the present invention provides a cardiac image reconstruction method based on deep learning, including:
step 1, selecting the hearts of a plurality of subjects for slice scanning imaging, wherein each slice yields three images: a normal image, an image in the AP direction, and an image in the FH direction; the scanned images are used as the training data set and include sharp images and blurred images;
step 2, classifying the images obtained by cardiac scanning with a ResNet model, and then deblurring blurred images of different directions with a plurality of SRN-Deblur submodels;
and step 3, calculating the direction and magnitude of the blood flow vector in three-dimensional space using the AP-direction image and the FH-direction image, to measure the deblurring effect on the simulated blurred images.
Wherein, the step 1 specifically comprises:
the slice scanning imaging is performed using magnetic resonance imaging in the short-axis direction of the atrium with retrospective gating, and each slice has 25 phases (time frames);
the magnetic resonance imaging parameters include: a repetition time TR of 47.1 ms, an echo time TE of 1.6 ms, a field of view FOV of 298×340 mm², and a 134×256 pixel matrix; the in-plane resolution, determined by the pixel spacing, is 1.54 mm/pixel, and the through-plane resolution, based on the slice spacing, is 6 mm.
Wherein the step 2 comprises:
determining the blur direction of the image, classifying the training images required by the deblurring models according to blur direction, and feeding the classified images to the corresponding deblurring submodel for training;
during training, the cross-entropy function is used as the loss function, and the number of epochs is set to 50.
Wherein the step 3 comprises:
to calculate the direction and magnitude of the blood flow vector in three-dimensional space, the absolute difference between pixels and the vector distance must be computed, where FHG, FHB, APG and APB denote the FH ground truth, the FH blurred image, the AP ground truth and the AP blurred image, respectively, and i and j denote pixel positions in the images:
D(i, j) = sqrt( (FHG(i, j) − FHB(i, j))² + (APG(i, j) − APB(i, j))² )
ω_PSNR is calculated as follows, where MAX represents the maximum vector distance over the useful area and Ω denotes the useful (cardiac) region:
ω_PSNR = 10 · log10( MAX² / ( (1/|Ω|) · Σ_(i,j)∈Ω D(i, j)² ) )
because real slice-scan imaging has no blurred/sharp mapping pairs, two scans of different heartbeat cycles at the same time point must be compared; since the two heartbeat cycles differ, ω_PSNR cannot be used directly, and vorticity is compared instead. The mathematical expression for two-dimensional vorticity is:
ω = ∂v/∂x − ∂u/∂y
where the sign of ω has different meanings: positive values represent counterclockwise (CCW) rotation of the fluid, negative values represent clockwise (CW) rotation, and the magnitude represents the rotational speed;
the circulation Γ is calculated using the line integral along the CCW closed loop C, and can be written in surface-integral form as:
Γ = ∮_C V · dl = ∬_A ω dA, where A is the area enclosed by C.
The invention also provides a heart image reconstruction system based on deep learning, which comprises:
a data set acquisition module, used for selecting the hearts of a plurality of subjects for slice scanning imaging, wherein each slice yields three images: a normal image, an image in the AP direction, and an image in the FH direction; the scanned images are used as the training data set and include sharp images and blurred images;
an image processing module, used for classifying the images obtained by cardiac scanning with a ResNet model and then deblurring blurred images of different directions with a plurality of SRN-Deblur submodels;
and an evaluation module, used for calculating the direction and magnitude of the blood flow vector in three-dimensional space using the AP-direction image and the FH-direction image, to measure the deblurring effect on the simulated blurred images.
The scheme of the invention has the following beneficial effects:
the heart image reconstruction method and system based on deep learning provided by the embodiment of the invention use the evaluation standard which is consistent with PC MRI, use the swirl chart vector distance of the atrial region as the evaluation standard, are more convincing, and have better performance in the aspects of visual detection and mathematical evaluation, after the fuzzy factors are removed, the VENC MRI can help radiologists and clinicians to make better clinical judgment, and the diagnosis accuracy is improved. The fuzzy classification of the ResNet model is realized, and the classification accuracy rate exceeds 99%.
Other advantages of the present invention will be described in detail in the detailed description that follows.
Drawings
FIG. 1 is a flow chart of a deep learning based cardiac image reconstruction method of the present invention;
FIG. 2 is a diagram of vorticity measurement and calculation according to the present invention;
FIG. 3 is a diagram of the blur classification model architecture according to the present invention;
FIG. 4 is a diagram of the ResNet training process for blurred image classification according to the present invention;
FIG. 5 is a visual comparison of the deblurring results of different models of the present invention on a low-quality image;
FIG. 6 is a graph of the real VENC MRI deblurred visualization results of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted", "connected" and "coupled" are to be understood broadly, for example, as a fixed connection, a detachable connection, or an integral connection; as a mechanical or an electrical connection; as a direct connection or an indirect connection through an intermediate medium; or as internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 1, an embodiment of the present invention provides a cardiac image reconstruction method based on deep learning, including: step 1, selecting the hearts of a plurality of subjects for slice scanning imaging, wherein each slice yields three images (a normal image, an image in the AP direction, and an image in the FH direction), and the scanned images, including sharp images and blurred images, are used as the training data set; step 2, classifying the images obtained by cardiac scanning with a ResNet model, and then deblurring blurred images of different directions with a plurality of SRN-Deblur submodels; and step 3, calculating the direction and magnitude of the blood flow vector in three-dimensional space using the AP-direction image and the FH-direction image, to measure the deblurring effect on the simulated blurred images.
Wherein, the step 1 specifically comprises: the slice scanning imaging is performed using magnetic resonance imaging in the short-axis direction of the atrium with retrospective gating, and each slice has 25 phases (time frames); the magnetic resonance imaging parameters include: a repetition time TR of 47.1 ms, an echo time TE of 1.6 ms, a field of view FOV of 298×340 mm², and a 134×256 pixel matrix; the in-plane resolution, determined by the pixel spacing, is 1.54 mm/pixel, and the through-plane resolution, based on the slice spacing, is 6 mm.
Wherein the step 2 comprises: determining the blur direction of the image, classifying the training images required by the deblurring models according to blur direction, and feeding the classified images to the corresponding deblurring submodel for training; during training, the cross-entropy function is used as the loss function, and the number of epochs is set to 50.
Wherein the step 3 comprises: calculating the direction and magnitude of the blood flow vector in three-dimensional space, which requires computing the absolute difference between pixels and the vector distance, where FHG, FHB, APG and APB denote the FH ground truth, the FH blurred image, the AP ground truth and the AP blurred image, respectively, and i and j denote pixel positions in the images:
D(i, j) = sqrt( (FHG(i, j) − FHB(i, j))² + (APG(i, j) − APB(i, j))² )
ω_PSNR is calculated as follows, where MAX represents the maximum vector distance over the useful area and Ω denotes the useful (cardiac) region:
ω_PSNR = 10 · log10( MAX² / ( (1/|Ω|) · Σ_(i,j)∈Ω D(i, j)² ) )
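As a concrete illustration, the following NumPy sketch evaluates the per-pixel vector distance between ground-truth and blurred FH/AP image pairs over a region-of-interest mask and turns it into the PSNR-style score described above. The function names, the `roi_mask` argument, and the exact reading of MAX are illustrative assumptions, not the patent's reference implementation.

```python
import numpy as np

def vector_distance(fh_gt, fh_blur, ap_gt, ap_blur):
    """Per-pixel Euclidean distance between the (FH, AP) velocity vectors
    of a ground-truth image pair and a degraded image pair."""
    return np.sqrt((fh_gt - fh_blur) ** 2 + (ap_gt - ap_blur) ** 2)

def omega_psnr(fh_gt, fh_blur, ap_gt, ap_blur, roi_mask, max_val=None):
    """PSNR-style score restricted to the cardiac region of interest (roi_mask).

    The text defines MAX only loosely ("maximum vector distance of the useful
    area"); here it defaults to the largest vector distance inside the ROI,
    and the mean squared vector distance plays the role of the MSE.
    """
    d = vector_distance(fh_gt, fh_blur, ap_gt, ap_blur)
    d_roi = d[roi_mask]                     # keep only the useful area
    mse = np.mean(d_roi ** 2)
    if max_val is None:
        max_val = np.max(d_roi)
    return 10.0 * np.log10(max_val ** 2 / (mse + 1e-12) + 1e-12)
```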
Because real slice-scan imaging has no blurred/sharp mapping pairs, two scans of different heartbeat cycles at the same time point must be compared. Since the two heartbeat cycles differ, ω_PSNR cannot be used directly, and vorticity is compared instead. The mathematical expression for two-dimensional vorticity is:
ω = ∂v/∂x − ∂u/∂y
where the sign of ω has different meanings: positive values represent counterclockwise (CCW) rotation of the fluid, negative values represent clockwise (CW) rotation, and the magnitude represents the rotational speed.
The circulation Γ is calculated using the line integral along the CCW closed loop C, and can be written in surface-integral form as:
Γ = ∮_C V · dl = ∬_A ω dA, where A is the area enclosed by C.
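The two-dimensional vorticity and the circulation above can be approximated directly from the velocity images with finite differences. The sketch below assumes the AP image holds the u (x) velocity component and the FH image the v (y) component on a uniform pixel grid; this is an illustrative sketch, not the patent's implementation.

```python
import numpy as np

def vorticity_2d(u, v, dx=1.0, dy=1.0):
    """omega = dv/dx - du/dy, estimated with central differences.
    Positive values indicate counterclockwise (CCW) rotation,
    negative values clockwise (CW) rotation."""
    dv_dx = np.gradient(v, dx, axis=1)
    du_dy = np.gradient(u, dy, axis=0)
    return dv_dx - du_dy

def circulation(u, v, roi_mask, dx=1.0, dy=1.0):
    """Circulation Gamma over a region, via the surface-integral form:
    Gamma = integral of omega over the enclosed area."""
    omega = vorticity_2d(u, v, dx, dy)
    cell_area = dx * dy
    return np.sum(omega[roi_mask]) * cell_area
```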
The invention also provides a deep learning based cardiac image reconstruction system, comprising: a data set acquisition module, used for selecting the hearts of a plurality of subjects for slice scanning imaging, wherein each slice yields three images (a normal image, an image in the AP direction, and an image in the FH direction), and the scanned images, including sharp images and blurred images, are used as the training data set; an image processing module, used for classifying the images obtained by cardiac scanning with a ResNet model and then deblurring blurred images of different directions with a plurality of SRN-Deblur submodels; and an evaluation module, used for calculating the direction and magnitude of the blood flow vector in three-dimensional space using the AP-direction image and the FH-direction image, to measure the deblurring effect on the simulated blurred images.
Four-dimensional flow magnetic resonance imaging based on velocity-encoded MRI has strong potential in cardiovascular blood flow analysis. However, the patient sometimes shifts the heart position during imaging, resulting in motion blur, and such blurred images lead to inaccurate analysis. Deep learning can reconstruct defective images and eliminate motion blur in velocity-encoded magnetic resonance imaging. A flow image reconstruction model based on SRN-Deblur (a model from the Tencent YouTu team, published at CVPR 2018) is proposed, and the accuracy of the flow analysis is evaluated.
In practical medical image processing, low-quality, low-resolution (LR) images contain too little texture detail, which harms the accuracy of cardiac diagnosis. In super-resolution, there is a mapping relationship between low-resolution and high-resolution images. If this mapping can be learned by training a deep learning model on a large number of images, a realistic high-resolution image can be reconstructed from a low-resolution one. A three-layer convolutional neural network (CNN) can learn the mapping between low-resolution and high-resolution images and produce high-quality output. However, using MSE as the loss function causes the loss of high-frequency texture detail when the upscaling factor is large.
Therefore, the LapSRN method is used to achieve more stable and efficient model training and thus provide higher perceptual quality for super-resolution of MRI images. Based on an SR architecture and the Laplacian pyramid structure, the LapSRN model allows the network to super-resolve original low-resolution, noisy MRI images and also to automatically select the resolution for original high-resolution MRI.
Cardiac MRI requires the patient to hold their breath during imaging, which is difficult for some patients. Similarly, otherwise good photographs can be degraded by blur caused by camera motion. Several deep learning methods now exist for removing motion blur from ordinary images, such as SRN-Deblur, a model from the Tencent YouTu team published at CVPR 2018.
The model adopts a network construction based on the Laplacian pyramid framework. It takes the LR image as input (rather than an upscaled version of the LR image) and progressively predicts sub-band residual images at log2(S) pyramid levels, where S is the upsampling scale factor. For example, for super-resolving LR images with a scale factor of 8, the network consists of 3 pyramid levels. The model includes two branches: (1) feature extraction and (2) image reconstruction.
The feature extraction branch consists of (1) a feature embedding sub-network that embeds a high-dimensional nonlinear feature map, (2) a transposed convolutional layer that upsamples the extracted features by a factor of 2, and (3) a convolutional layer (Conv_res) that predicts the sub-band residual image. The first pyramid level has an additional convolutional layer (Conv_in) that extracts high-dimensional feature maps from the input LR image. At the other levels, the feature embedding sub-network directly transforms the high-level feature maps passed from the previous pyramid level. In the image reconstruction branch, at each level the input image is upsampled by a factor of 2 with a transposed convolutional layer initialized with a 4×4 bilinear kernel. The upsampled image is then combined with the predicted residual image (using element-wise summation) to produce the high-resolution output of that level. The reconstructed HR image is then used as the input to the image reconstruction branch of the next level. The entire network is a cascaded CNN in which every level has the same structure, and the upsampling layers are optimized jointly with all other layers so that the upsampling function is learned better.
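To make the pyramid structure concrete, the following PyTorch sketch implements one 2x level of a LapSRN-style network as just described: a feature-embedding sub-network, a transposed convolution that upsamples features by 2, a convolution that predicts the sub-band residual, and an image-reconstruction branch that upsamples the input and adds the residual. Layer counts and channel widths are illustrative assumptions, not the values used in the patent.

```python
import torch.nn as nn

class LapSRNLevel(nn.Module):
    """One 2x pyramid level: feature branch + image-reconstruction branch."""

    def __init__(self, channels=64, num_embed_layers=5):
        super().__init__()
        layers = []
        for _ in range(num_embed_layers):        # feature embedding sub-network
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        self.embed = nn.Sequential(*layers)
        # upsample the extracted features by a scale of 2
        self.up_feat = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
        # predict the sub-band residual image (Conv_res)
        self.conv_res = nn.Conv2d(channels, 1, 3, padding=1)
        # image-reconstruction branch: upsample the image by 2
        self.up_img = nn.ConvTranspose2d(1, 1, 4, stride=2, padding=1)

    def forward(self, img, feat):
        feat = self.up_feat(self.embed(feat))    # features passed to the next level
        residual = self.conv_res(feat)           # predicted sub-band residual
        img_up = self.up_img(img)                # upsampled input image
        return img_up + residual, feat           # element-wise summation
```

At the first level, an extra Conv_in layer (not shown) would map the single-channel LR input to the feature width; cascading several `LapSRNLevel` instances with shared structure reproduces the pyramid described above.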
VENC MRI is the result of scanning cardiac slices; each slice acquires three images, including a normal image, an image in the anterior-posterior (AP) direction, and an image in the foot-head (FH) direction. FIG. 1 is a schematic VENC MRI [10] with the maximum blood flow velocity set at 100 cm/sec, showing the relationship between velocity and phase, acquired through the short axis of the atrium. All of these images were retrospectively gated, with 25 phases (time frames) per slice. MRI imaging parameters include a repetition time TR of 47.1 ms, an echo time TE of 1.6 ms, a field of view FOV of 298×340 mm², and a 134×256 pixel matrix. The in-plane resolution is 1.54 mm/pixel, determined by the pixel spacing, and the through-plane resolution, based on the slice spacing, is 6 mm. 500 cardiac VENC MRI images from 10 subjects were used as the training data set, including 250 FH-direction and 250 AP-direction images. One group was scanned twice: once with the body held still, yielding 50 sharp VENC images, and once with body motion, yielding 50 blurred VENC images.
For training purposes, blurred images are generated by translating the image, superimposing the copies, and averaging the pixel values. The translation direction, the translation step size, and the number of superpositions are all randomly generated. Using 450 original sharp images, 7200 blurred images were generated with this method. When generating a blurred image, its blur direction is recorded and classified as 0°, 45°, 90°, or 135° according to the nearest-angle principle, so that a blur-direction classification model can be trained later.
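A minimal sketch of this blur-synthesis procedure is given below: a sharp image is repeatedly shifted along a randomly chosen direction, the copies are averaged, and the direction is quantized to the nearest of 0°, 45°, 90°, 135° as the classification label. The step and superposition ranges are assumptions, and `np.roll` wraps pixels around rather than producing the "black edges" mentioned later, which would instead be cropped in the patent's procedure.

```python
import numpy as np

def synthesize_motion_blur(img, rng=np.random.default_rng()):
    """Translate, superimpose and average pixel values to simulate motion blur.
    Returns the blurred image and its quantized blur-direction label in degrees."""
    angle = rng.uniform(0.0, 180.0)                 # random blur direction
    copies = int(rng.integers(3, 10))               # random number of superpositions
    step = int(rng.integers(1, 4))                  # random translation step size
    dx = int(round(np.cos(np.deg2rad(angle)))) * step
    dy = int(round(np.sin(np.deg2rad(angle)))) * step

    acc = np.zeros_like(img, dtype=np.float64)
    for k in range(copies):                         # superimpose shifted copies
        acc += np.roll(img, shift=(k * dy, k * dx), axis=(0, 1))
    blurred = acc / copies                          # average the pixel values

    label = int(round(angle / 45.0)) % 4            # nearest of 0/45/90/135 degrees
    return blurred, label * 45
```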
In cardiac VENC MRI, the main focus is the heart region, which carries the dominant blood-flow information; the rest of the image is almost random noise and practically meaningless, so the evaluation of the training results uses only the most important part of the image. In the subsequent evaluation, only this important region is considered and the surrounding noise is ignored. To obtain blurred images from the ground-truth images for training, the blurred images are generated by copying and translating the image and superimposing and averaging the pixel values.
Meanwhile, multiple SRN-Deblur submodels are used to deblur motion-blurred VENC MRI in different directions. Before that, ResNet is used to classify the images and determine which submodel should be used. Unlike traditional algorithms, neural networks combine feature extraction and learning. In image processing, the convolution operation captures relationships between adjacent pixels, so convolutional neural networks (CNNs) are widely applied. A CNN provides an end-to-end deep learning model: once trained, it can extract image features and classify images. The depth of a CNN plays a crucial role in image classification, which has driven ever deeper models in the ImageNet competition. However, as depth increases, the degradation problem arises. ResNet solves this problem with a residual framework.
The residual representation concept, commonly used in traditional computer vision, is applied to the construction of the CNN model to form basic blocks for residual learning. Multiple parameter layers are used to learn a residual representation between input and output, rather than using parameter layers to directly learn the input-to-output mapping as in a general CNN. Experiments show that using parameter layers to learn the residual is simpler and more effective than directly learning the mapping between input and output.
SRN-Deblur is an efficient multi-scale image deblurring network, where SRN stands for scale-recurrent network. The SRN-Deblur model is based on two structures: a scale-recurrent structure and an encoder-decoder ResBlock network. By sharing network weights across scales, SRN-Deblur greatly reduces training difficulty and improves stability. This approach has two advantages. First, the number of trainable parameters is significantly reduced, which increases training speed. Second, the recurrent module propagates useful information across scales throughout the network, which helps image deblurring. In computer vision, deep learning often uses an encoder-decoder structure; image deblurring is a computer vision task, and SRN-Deblur also uses this structure. Rather than using the encoder-decoder structure directly, it is combined with ResBlocks into an encoder-decoder ResBlock network. Experimental results show that this structure makes training faster and the network more effective at image deblurring, which is why it is called a scale-recurrent network (SRN).
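The scale-recurrent idea can be summarized by the following schematic sketch: the same encoder-decoder ResBlock network (shared weights) is applied from the coarsest to the finest scale, and each finer scale receives the upsampled estimate from the previous scale. The `net(x, state)` interface is an assumption for illustration, not the Tencent YouTu reference code.

```python
import torch
import torch.nn.functional as F

def scale_recurrent_deblur(net, blurred, scales=(0.25, 0.5, 1.0), state=None):
    """Run a shared-weight encoder-decoder ResBlock network coarse-to-fine.

    `net(x, state)` is assumed to take a 2-channel input (blurred image at the
    current scale concatenated with the upsampled previous estimate) and to
    return (deblurred_estimate, new_recurrent_state).
    """
    estimate = None
    for s in scales:                                   # coarse to fine
        b_s = F.interpolate(blurred, scale_factor=s, mode="bilinear",
                            align_corners=False)
        if estimate is None:
            prev = b_s                                 # coarsest level: start from the input
        else:
            prev = F.interpolate(estimate, size=b_s.shape[-2:],
                                 mode="bilinear", align_corners=False)
        estimate, state = net(torch.cat([b_s, prev], dim=1), state)
    return estimate
```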
Vorticity measures the angular velocity of a fluid at a point and can be calculated from the velocity gradient of the fluid. As shown in fig. 2, vorticity ω is the circulation divided by the enclosed area, i.e., the line integral of the tangential velocity along a counterclockwise (CCW) loop containing the point of interest, divided by the loop area. For image reconstruction tasks (e.g., super-resolution, deblurring), the peak signal-to-noise ratio and the structural similarity index are generally used as evaluation criteria. Here, the two VENC MR images in the FH and AP directions depict the same part of the heart, which means there is a strong correlation between them, so the evaluation index should combine the FH vector image and the AP vector image in a comprehensive evaluation. On the other hand, since cardiac VENC MRI contains a large amount of useless random noise, only the image region containing blood-flow information in the heart chamber really matters. Therefore, the evaluation index should depend only on the portion of the image related to cardiac blood flow.
Blur classification with the ResNet model achieves a classification accuracy of over 99%. Four SRN-Deblur submodels are trained to deblur images according to the classification results. Finally, the deblurring results of this approach and of SRN-Deblur alone are compared. The results show that the proposed method outperforms SRN-Deblur on the data set and is better suited to complex situations: different submodels can be used for different types of images, including images that SRN-Deblur alone cannot handle. The method is more targeted and performs better than SRN-Deblur on certain types of images.
Through experiments, each trained model was found to be suitable only for certain types of blur and unable to remove other types. This means that the SRN-Deblur network does have the ability to resolve blur in VENC MRI, but a single SRN-Deblur model is not sufficient to handle all blur types. To solve this problem, ResNet is used to pre-classify blurred images, and multiple SRN-Deblur submodels are introduced to deblur blurred images of different types.
As shown in fig. 3, the deblurring architecture dispatches each image according to its blur direction and then processes it with the corresponding deblurring submodel. A blur-direction classification model is trained first, which determines the blur direction of an image. The training images required by the deblurring models are then classified by blur direction, and the classified images are fed to the corresponding deblurring submodel for training. The proposed architecture is the first to treat the blur direction as a sub-problem.
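A minimal sketch of this dispatch logic, assuming a trained direction classifier and a dict of four direction-specific deblurring submodels (the names and the batch-of-one call pattern are illustrative assumptions):

```python
import torch

@torch.no_grad()
def classify_and_deblur(image, direction_classifier, deblur_submodels):
    """Route a blurred VENC MR image to the submodel matching its blur direction.

    direction_classifier : model mapping an image batch to logits over {0, 45, 90, 135} degrees
    deblur_submodels     : dict {0: model_0, 45: model_45, 90: model_90, 135: model_135}
    """
    logits = direction_classifier(image)               # image: tensor of shape (1, C, H, W)
    direction = [0, 45, 90, 135][int(logits.argmax(dim=1).item())]
    return deblur_submodels[direction](image), direction
```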
The classification model is based on ResNet. As shown in fig. 3, a single residual block contains two convolutional layers, two batch normalization layers, and one ReLU layer. Specifically, the network begins with a convolutional layer, a batch normalization layer, and a ReLU layer. The input size of the convolutional layer is 64×64, with a kernel size of 3×3, padding 1, and stride 1. Next come 8 residual blocks with different parameters, the last of which outputs a tensor of size 512×16×16. At the end of the network, average pooling and a fully connected layer output the classification of the input image. The specific parameters of the 8 residual blocks are shown in Table 1.
Table 1. ResNet parameter values (provided as an image in the original publication; the values are not reproduced here)
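The description above can be summarized by the following PyTorch sketch of the direction classifier: a stem (conv + batch norm + ReLU), eight residual blocks each built from two 3×3 convolutions with batch normalization, global average pooling, and a fully connected layer over the four blur-direction classes. Since the per-block values of Table 1 are not reproduced, the channel widths and strides below are assumptions chosen so that a 64×64 single-channel input yields a 512×16×16 tensor before pooling, matching the size quoted above.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with batch norm, ReLU, and an identity/projection shortcut."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        self.shortcut = (nn.Identity() if stride == 1 and in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False))

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))

class BlurDirectionResNet(nn.Module):
    """Stem + 8 residual blocks + average pooling + fully connected classifier."""

    def __init__(self, num_classes=4):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=1, padding=1, bias=False),  # 64x64 input, 3x3 kernel
            nn.BatchNorm2d(64), nn.ReLU(inplace=True))
        cfg = [(64, 1), (64, 1), (128, 2), (128, 1),   # assumed widths/strides:
               (256, 2), (256, 1), (512, 1), (512, 1)] # 64x64 -> 16x16 with 512 channels
        blocks, in_ch = [], 64
        for out_ch, stride in cfg:
            blocks.append(ResidualBlock(in_ch, out_ch, stride))
            in_ch = out_ch
        self.blocks = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(512, num_classes))

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))
```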
During training, the cross-entropy function is used as the loss function, and the number of epochs is set to 50. For ResNet, classifying images with different blur directions is a simple task; the changes in loss and accuracy during training are shown in fig. 4. After a short period of training the accuracy approaches 100%, while the loss function approaches 0. This lays a solid foundation for the next step of training the deblurring models for the different blur directions. In other words, because the blur is divided into different categories before deblurring, the subsequent deblurring models do not need to handle this task, making the function of each SRN-Deblur model more specific. More specific functionality also means the task requires less model capacity, so the model trains better.
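A compact sketch of this training setup (cross-entropy loss, 50 epochs), assuming a classifier like the `BlurDirectionResNet` sketched above and a standard PyTorch DataLoader yielding (image, direction_label) pairs; the optimizer choice is an assumption, since the text does not specify one for the classifier.

```python
import torch
import torch.nn as nn

def train_direction_classifier(model, train_loader, device="cuda", epochs=50):
    """Train the blur-direction classifier with cross-entropy loss for 50 epochs."""
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()                          # cross-entropy loss function
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed optimizer
    for epoch in range(epochs):
        model.train()
        running_loss = 0.0
        for images, labels in train_loader:   # labels in {0,1,2,3} for 0/45/90/135 degrees
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        print(f"epoch {epoch + 1}: loss = {running_loss / max(len(train_loader), 1):.4f}")
    return model
```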
When creating the data set, the translation-and-averaging of pixels leaves a "black edge" on the image, which is cropped off. The width of the black edge is not fixed across images; it equals the number of translation steps. Furthermore, because of the different translation directions and the different positions of the cropped black edges, the images in the training data set have many different sizes.
Training is performed using the open-source SRN-Deblur code. To train the model, the Adam optimizer [11] is used with β1 = 0.9, β2 = 0.999, and ε = 10⁻⁸. The learning rate is decayed exponentially from an initial value of 1×10⁻⁴ to 1×10⁻⁶ over 4000 iterations, with a decay power of 0.3. Before a blurred image is fed to the network for training, it is randomly cropped to a size of 128×128. The Xavier (Glorot) method is used to initialize the network parameters, which are fixed across all experiments.
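A sketch of this training configuration in PyTorch (the original SRN-Deblur code is in TensorFlow); the mapping of the "0.3" decay figure onto a polynomial-decay schedule is an assumption.

```python
import torch

def decayed_lr(step, total_steps=4000, lr_start=1e-4, lr_end=1e-6, power=0.3):
    """Decay from lr_start to lr_end over total_steps with power 0.3,
    one plausible reading of the schedule described in the text."""
    t = min(step, total_steps) / total_steps
    return (lr_start - lr_end) * (1.0 - t) ** power + lr_end

def init_weights(m):
    # Xavier (Glorot) initialization, fixed across experiments
    if isinstance(m, (torch.nn.Conv2d, torch.nn.ConvTranspose2d, torch.nn.Linear)):
        torch.nn.init.xavier_uniform_(m.weight)

def make_optimizer(model):
    # Adam with beta1 = 0.9, beta2 = 0.999, eps = 1e-8, as stated above
    opt = torch.optim.Adam(model.parameters(), lr=1e-4,
                           betas=(0.9, 0.999), eps=1e-8)
    # LambdaLR scales the base lr (1e-4) by the ratio returned per step
    sched = torch.optim.lr_scheduler.LambdaLR(
        opt, lr_lambda=lambda step: decayed_lr(step) / 1e-4)
    return opt, sched
```

Input patches would additionally be randomly cropped to 128×128 before being fed to the network, as stated in the paragraph above.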
Under the same hardware conditions, the training time of SRN-Deblur is about 3 hours; the lower time cost is due to the smaller amount of training data and the smaller size of the training image patches.
The validation accuracy curve of ResNet during training is shown in fig. 4. The validation accuracy rises rapidly in the first 800 training steps, from 0.25 (the accuracy of random guessing) to above 0.9. In the more than 1000 steps after that, the validation accuracy continues to increase steadily, eventually exceeding 0.99. The loss function reaches a good value in a short time. Blurred images were then used to test the deblurring effect of the different methods.
FIG. 5 shows the deblurring results compared with the sharp and blurred images. Both the proposed multi-submodel SRN-Deblur method and SRN-Deblur alone clearly deblur the image. Careful observation of the two results shows that the proposed method is sharper and closer to the original image. However, visual inspection alone cannot settle this question; the differences in the arrows and in the pixels are better explained by a mathematical evaluation using ω_PSNR.
Motion artifacts impair image quality and can interfere with interpretation, especially in low signal-to-noise-ratio magnetic resonance imaging (MRI) applications such as functional MRI, diffusion tensor imaging, and imaging of small lesions. High-resolution images are more sensitive to motion artifacts and typically require longer scan times, which exacerbates the artifacts. During scanning, fast imaging techniques and sequences, optimal receiver coils, careful patient positioning, and patient guidance can minimize motion artifacts. Sources of physiological noise include pulsatile motion coupled with respiration, blood flow and the cardiac cycle, the swallowing reflex, and small spontaneous head movements. For example, spontaneous neuronal activity in functional MRI produces a resting signal change of only about 1-2%, so even under optimal conditions the contribution of physiological noise to the signal remains a significant proportion. Motion tracking during imaging may allow prospective correction, or post-processing steps can separate signal and noise.
Three-dimensional magnetic resonance imaging of the coronary arteries has the potential to provide high resolution and a high signal-to-noise ratio, but it is very susceptible to respiratory artifacts, especially respiratory blurring. The resolution loss caused by respiratory blurring in three-dimensional coronary imaging has been analyzed theoretically and verified experimentally. Under normal breathing, the width of any Gaussian point spread function increases to a new value of at least several millimeters (about 3-4 mm). An in vivo study compared a respiratory pseudo-gated 3D acquisition with a breath-hold 2D acquisition. On average, the overall quality of the pseudo-gated 3D images was inferior to that of the breath-hold 2D images (P = 0.005). In most cases, respiratory blurring results in pseudo-gated three-dimensional data with lower coronary resolution than breath-hold two-dimensional data.
To verify the performance of the model on real images, the two scans of the same person mentioned earlier were used to test its true performance. FIG. 6 shows the contrast between the low-quality image and the enhanced image. Sharp and blurred images of the same phase from the two heart cycles were selected, since the blood flow of the two image sets is essentially identical. A vorticity arrow (quiver) diagram of the blood flow was made. From the colors of the vortices it can be seen that the blood flow in the blurred image is mixed together, whereas the blood-flow differences in the deblurred image are apparent. In addition, the arrows at the edge of the deblurred image are parallel to the tangent of the atrium, while the arrows in the blurred image form a large angle with the tangent at the atrial edge, indicating that the method deblurs real imaging scans well. The average vorticity values are −49.99 s⁻¹ for the sharp image, −58.25 s⁻¹ for the blurred image, and −51.79 s⁻¹ for the deblurred image.
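The kind of qualitative and quantitative comparison described above (a quiver diagram of the in-plane flow plus an average vorticity over the atrial region) could be produced along the following lines; the colormap, sampling step, and mask handling are illustrative assumptions, not the procedure used to generate FIG. 6.

```python
import numpy as np
import matplotlib.pyplot as plt

def flow_quiver_and_mean_vorticity(u_ap, v_fh, atrial_mask, step=4, title=""):
    """Plot a quiver diagram of the (AP, FH) in-plane velocities, colored by
    vorticity, and return the average vorticity over the atrial region."""
    omega = np.gradient(v_fh, axis=1) - np.gradient(u_ap, axis=0)   # 2D vorticity
    ys, xs = np.mgrid[0:u_ap.shape[0]:step, 0:u_ap.shape[1]:step]
    plt.figure()
    plt.quiver(xs, ys, u_ap[::step, ::step], v_fh[::step, ::step],
               omega[::step, ::step], cmap="coolwarm")
    plt.gca().invert_yaxis()                                        # image coordinates
    plt.title(title)
    return float(np.mean(omega[atrial_mask]))
```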
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (5)

1. A cardiac image reconstruction method based on deep learning is characterized by comprising the following steps:
step 1, selecting the hearts of a plurality of subjects for slice scanning imaging, wherein each slice yields three images: a normal image, an image in the AP direction, and an image in the FH direction; the scanned images are used as the training data set and include sharp images and blurred images;
step 2, classifying the images obtained by cardiac scanning with a ResNet model, and then deblurring blurred images of different directions with a plurality of SRN-Deblur submodels;
and step 3, calculating the direction and magnitude of the blood flow vector in three-dimensional space using the AP-direction image and the FH-direction image, to measure the deblurring effect on the simulated blurred images.
2. The cardiac image reconstruction method based on deep learning according to claim 1, wherein the step 1 specifically comprises:
the slice scanning imaging is performed using magnetic resonance imaging in the short-axis direction of the atrium with retrospective gating, and each slice has 25 phases (time frames);
the magnetic resonance imaging parameters include: a repetition time TR of 47.1 ms, an echo time TE of 1.6 ms, a field of view FOV of 298×340 mm², and a 134×256 pixel matrix; the in-plane resolution, determined by the pixel spacing, is 1.54 mm/pixel, and the through-plane resolution, based on the slice spacing, is 6 mm.
3. The deep learning-based cardiac image reconstruction method according to claim 1, wherein the step 2 comprises:
determining the blur direction of the image, classifying the training images required by the deblurring models according to blur direction, and feeding the classified images to the corresponding deblurring submodel for training;
during training, the cross-entropy function is used as the loss function, and the number of epochs is set to 50.
4. The deep learning-based cardiac image reconstruction method according to claim 1, wherein the step 3 comprises:
calculating the direction and magnitude of the blood flow vector in three-dimensional space, which requires computing the absolute difference between pixels and the vector distance, wherein FHG, FHB, APG and APB respectively denote the FH ground truth, the FH blurred image, the AP ground truth and the AP blurred image, and i and j denote pixel positions in the images:
D(i, j) = sqrt( (FHG(i, j) − FHB(i, j))² + (APG(i, j) − APB(i, j))² )
ω_PSNR is calculated as follows, where MAX represents the maximum vector distance over the useful area and Ω denotes the useful (cardiac) region:
ω_PSNR = 10 · log10( MAX² / ( (1/|Ω|) · Σ_(i,j)∈Ω D(i, j)² ) )
because real slice-scan imaging has no blurred/sharp mapping pairs, two scans of different heartbeat cycles at the same time point are compared; since the two heartbeat cycles differ, ω_PSNR cannot be used directly and vorticity is compared instead, the mathematical expression for two-dimensional vorticity being:
ω = ∂v/∂x − ∂u/∂y
wherein the sign of ω has different meanings: positive values represent counterclockwise (CCW) rotation of the fluid, negative values represent clockwise (CW) rotation, and the magnitude represents the rotational speed;
the circulation Γ is calculated using the line integral along the CCW closed loop C and is written in surface-integral form as:
Γ = ∮_C V · dl = ∬_A ω dA, where A is the area enclosed by C.
5. A cardiac image reconstruction system based on deep learning, comprising:
a data set acquisition module, used for selecting the hearts of a plurality of subjects for slice scanning imaging, wherein each slice yields three images: a normal image, an image in the AP direction, and an image in the FH direction; the scanned images are used as the training data set and include sharp images and blurred images;
an image processing module, used for classifying the images obtained by cardiac scanning with a ResNet model and then deblurring blurred images of different directions with a plurality of SRN-Deblur submodels;
and an evaluation module, used for calculating the direction and magnitude of the blood flow vector in three-dimensional space using the AP-direction image and the FH-direction image, to measure the deblurring effect on the simulated blurred images.
CN202111631832.XA 2021-12-28 2021-12-28 Heart image reconstruction method and system based on deep learning Pending CN114565711A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111631832.XA CN114565711A (en) 2021-12-28 2021-12-28 Heart image reconstruction method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111631832.XA CN114565711A (en) 2021-12-28 2021-12-28 Heart image reconstruction method and system based on deep learning

Publications (1)

Publication Number Publication Date
CN114565711A true CN114565711A (en) 2022-05-31

Family

ID=81712692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111631832.XA Pending CN114565711A (en) 2021-12-28 2021-12-28 Heart image reconstruction method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN114565711A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination