CN116385572A - Motion artifact correction method for velocity coding magnetic resonance imaging - Google Patents

Motion artifact correction method for velocity coding magnetic resonance imaging

Info

Publication number
CN116385572A
Authority
CN
China
Prior art keywords
image
deblurring
model
training
magnetic resonance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211475737.XA
Other languages
Chinese (zh)
Inventor
黄建龙
贾富仓
李聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202211475737.XA
Publication of CN116385572A

Classifications

    • A61B 5/055 — Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields, involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A61B 5/0033 — Features or image-related aspects of imaging apparatus, e.g. for MRI; arrangements of imaging apparatus in a room
    • A61B 5/0263 — Measuring blood flow using NMR
    • A61B 5/7267 — Classification of physiological signals or data, e.g. using neural networks, involving training the classification device
    • G06N 3/08 — Learning methods for neural networks
    • G06T 11/008 — Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06T 5/73 — Image deblurring
    • G06V 10/765 — Image or video recognition using classification, e.g. using rules for classification or partitioning the feature space
    • G06V 10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06T 2207/10088 — Magnetic resonance imaging [MRI]
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30048 — Heart; cardiac
    • Y02T 10/40 — Engine management systems

Abstract

The invention discloses a motion artifact correction method for velocity-encoded magnetic resonance imaging. The method comprises the following steps: determining the blur-direction type of the velocity-encoded magnetic resonance image using a trained classification model; and selecting the corresponding deblurring sub-model according to the blur-direction type to perform deblurring and obtain a corrected image, wherein the deblurring sub-models are obtained through training, their number equals the number of blur-direction types, and each deblurring sub-model corresponds to one blur-direction type. The invention can effectively remove artifacts and noise from the image, yielding a clearer corrected image from which vortex positions in the heart can be accurately identified.

Description

Motion artifact correction method for velocity coding magnetic resonance imaging
Technical Field
The invention relates to the technical field of medical image processing, and in particular to a motion artifact correction method for velocity-encoded magnetic resonance imaging.
Background
Velocity-encoded magnetic resonance imaging (VENC MRI) is an imaging modality that measures blood flow velocity based on phase contrast (PC), and is therefore also known as PC-MRI. VENC MRI provides quantitative blood flow information without introducing contrast agents into the body. Each pixel in the image corresponds to the blood flow velocity at the respective location. By superimposing the two-dimensional cardiac flow field on the corresponding MR image, blood flow can be examined with reference to the cardiac anatomy. Heart-chamber segmentation further delineates the boundaries of blood flow and isolates the region of interest for flow analysis.
However, during a VENC MRI scan, relative motion between the imaging device and the patient degrades image quality. For example, movement of the patient's body during imaging causes blurring. Patients are usually required to hold their breath during imaging, but some have difficulty doing so for long, and even slight breathing produces motion blur. Studies show that occasional false triggering of the electrocardiographic gating, patient arrhythmia, uncontrollable movement, slight respiration, and similar factors dynamically shift the heart position during imaging. These conditions produce motion blur and artifacts that compromise cardiovascular image diagnosis, so removing motion blur from VENC MRI has great practical value.
With the development of artificial intelligence (AI) in image processing, deep learning models related to clinical diagnosis are increasingly applied to medical images. Among existing image deblurring schemes, blind deblurring (i.e., with an unknown blur kernel) has produced substantial research results. For example, the KCNN model estimates the motion amplitude and direction of the blurred portion and describes complex motion using multiple image patches; it can cope with heterogeneous blur, but its computational complexity is high. As another example, an end-to-end deblurring model trained on the GOPRO dataset disregards the blur kernel entirely, which avoids errors caused by mis-estimating the kernel.
However, current deblurring models are mostly end-to-end: a motion-blurred image of any type is input, and the model outputs an image without motion blur. When a model achieves good performance within an end-to-end framework, it benefits from simplicity and speed. In some application areas, however, a two-stage model is more effective in practice. Unlike ordinary natural images, medical images have inconspicuous contours and structures, little color variety, and considerable noise. In VENC MRI especially, most pixels are black-and-white noise. This characteristic of VENC MRI significantly increases the difficulty of feature learning.
Disclosure of Invention
It is an object of the present invention to overcome the above-mentioned drawbacks of the prior art by providing a method of motion artifact correction for velocity encoded magnetic resonance imaging, the method comprising:
determining the blur-direction type of the velocity-encoded magnetic resonance image using a trained classification model;
and selecting the corresponding deblurring sub-model according to the blur-direction type to perform deblurring and obtain a corrected image, wherein the deblurring sub-models are obtained through training, their number equals the number of blur-direction types, and each deblurring sub-model corresponds to one blur-direction type.
Compared with the prior art, the provided motion artifact correction method for velocity-encoded magnetic resonance imaging proceeds in two stages: first, a residual network judges the blur-direction type of the image; then the corresponding deblurring sub-model is dispatched to correct the artifacts in the image. This architecture of multiple classification-based deblurring sub-models, rather than a single end-to-end model, can markedly improve blurred and defective medical images. The method can be used for deblurring, blood flow image reconstruction, blood flow analysis and evaluation, and so on, and helps cardiologists with clinical analysis.
Other features of the present invention and its advantages will become apparent from the following detailed description of exemplary embodiments of the invention, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart of a method of motion artifact correction for velocity encoded magnetic resonance imaging in accordance with one embodiment of the present invention;
FIG. 2 is a schematic view of visual blood flow reconstruction in cardiac velocity encoded magnetic resonance imaging in accordance with one embodiment of the present invention;
FIG. 3 is a schematic illustration of acquiring cardiac image data in the short axis direction of the atrium, according to an embodiment of the invention;
FIG. 4 is a vortex flow measurement and histogram thereof in accordance with one embodiment of the present invention;
FIG. 5 is a schematic diagram of a deblurring model architecture according to one embodiment of the invention;
FIG. 6 is a schematic diagram of a fuzzy classification model structure in accordance with an embodiment of the present invention;
FIG. 7 is a schematic diagram of a ResNet training process for classifying blurred images in accordance with one embodiment of the invention;
FIG. 8 is a visual comparison of different model deblurring effects with low quality images according to one embodiment of the present invention;
FIG. 9 is a visual result of a real VENC MRI deblurring, including a visualization of velocity vector and vorticity scalar maps, according to one embodiment of the present invention;
FIG. 10 is a vorticity quantization and vorticity distribution histogram of blurred and deblurred images according to one embodiment of the present invention;
FIG. 11 is a comparison of vortex shedding distribution histogram and quantization vortex shedding for blurred and deblurred images of 5 time frames according to one embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
Referring to fig. 1, the provided motion artifact correction method for velocity encoded magnetic resonance imaging comprises the following steps:
step S110, collecting a data set, further obtaining one-to-one mapping from a blurred image to a clear image through blurring processing, and labeling a blurring direction type.
The pixels in VENC MRI represent blood flow velocity, and the maximum measurable velocity corresponds to a 180° phase shift. The VENC value is inversely proportional to the magnitude of the encoding gradients: the larger the gradient, the greater the phase dispersion and the smaller the corresponding VENC value.
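The phase-to-velocity relation described here can be sketched numerically. This is a minimal illustration assuming the standard linear PC-MRI convention (a 180° phase shift maps to the VENC value); the function name and the 100 cm/s default are illustrative, not taken from the patent:

```python
import numpy as np

def phase_to_velocity(phase_rad, venc_cm_s=100.0):
    """Map a phase shift (radians, in [-pi, pi]) to blood flow velocity,
    with the maximum velocity VENC corresponding to a 180-degree shift.
    Assumes the standard linear PC-MRI convention v = VENC * phase / pi."""
    return venc_cm_s * np.asarray(phase_rad) / np.pi
```

A full phase shift of ±π thus maps to ±VENC, and zero phase to zero velocity.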
The image deblurring approach adopted by the invention assumes the blur kernel is unknown, so a dataset must first be collected in order to learn the mapping between blurred and sharp images. In one embodiment, the result of a VENC MRI acquisition is a cardiac tomographic scan with three images per slice: a normal image, an image encoding the anterior-posterior (AP) direction, and an image encoding the foot-head (FH) direction. Referring to fig. 2, the heart is scanned with an imaging device, producing per-slice images (normal, AP VENC MRI, and FH VENC MRI) from which vortex (or swirl) visualization of the blood can be achieved. The correspondence between velocity and phase is also shown in fig. 2, with a maximum blood flow velocity of 100 cm/s.
For example, VENC MRI is acquired along the short axis of the atrium. All images were obtained using retrospective gating, with 25 phases (time frames) per slice; see fig. 3 for the image data. MRI imaging parameters: repetition time TR 47.1 ms, echo time TE 1.6 ms, field of view FOV 298×340 mm², pixel matrix 134×256. The in-plane resolution is determined by the pixel pitch, 1.54 mm/pixel, and the through-plane resolution, given by the slice spacing, is 6 mm. 500 cardiac VENC MRI images from 10 subjects were used as the training dataset, comprising 250 FH-direction images and 250 AP-direction images. Note that one subject was scanned twice: once to obtain 50 sharp VENC images and once, while moving the body, to obtain 50 blurred VENC images.
A major problem in single-image deblurring is obtaining a training dataset: in the real world, one-to-one pairs of blurred and sharp images cannot be captured. To train the deblurring model, a suitable technique must therefore be found to simulate the blurred-to-sharp mapping. In one embodiment, a blurred image is obtained by translating the image, superimposing the copies, and averaging the pixel values, where the translation direction, translation step, and number of superimposed copies are all randomly generated. In this way, 7200 blurred images are generated from 450 original sharp images. After a blurred image is generated, its blur direction is recorded and assigned, by the nearest-angle principle, to one of four classes: 0°, 45°, 90°, and 135°. These labels are used later to train the blur-direction classification model.
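The blur-synthesis procedure (translate, superimpose, average, then label the direction) can be sketched as follows. This is a simplified sketch: for brevity it draws the direction directly from the four label angles rather than snapping an arbitrary random angle to the nearest class, and all names are hypothetical:

```python
import numpy as np

def synthesize_blur(sharp, rng=None):
    """Simulate directional motion blur by shifting copies of the image
    along a random direction and averaging the stack. Returns the blurred
    image and its direction label (0, 45, 90 or 135 degrees). Simplified:
    the direction is drawn from the four classes directly."""
    rng = np.random.default_rng() if rng is None else rng
    angle = rng.choice([0.0, 45.0, 90.0, 135.0])  # blur direction (degrees)
    step = int(rng.integers(1, 4))                # translation step (pixels)
    n = int(rng.integers(3, 8))                   # number of superimposed copies
    dy = int(round(np.sin(np.deg2rad(angle)))) * step
    dx = int(round(np.cos(np.deg2rad(angle)))) * step
    stack = [np.roll(sharp, (k * dy, k * dx), axis=(0, 1)) for k in range(n)]
    return np.mean(stack, axis=0), int(angle)
```

Because `np.roll` wraps around, each shifted copy conserves total intensity, so the averaged result does too; a real pipeline might pad instead of wrapping.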
Cardiac VENC MRI focuses mainly on the principal blood flow associated with the heart; the rest of the image is almost entirely random noise. Therefore, only the most important part of the image is used later to evaluate the model training results. For this purpose, the 500 images were manually segmented by medical professionals in the cardiac field. After the noise data is filtered out, only the important part of each image is considered in subsequent evaluation. To obtain training blurred images from real images, blurred images are generated by duplicating and translating each image, superimposing the copies, and averaging the pixel values.
Step S120: train the classification model and the deblurring sub-models, where the classification model identifies the blur-direction type of an image and each deblurring sub-model deblurs images of the corresponding blur direction.
The present invention aims to deblur the VENC MRI and recover the original velocity-encoded information. The deblurring model may take many forms, such as the SRN (scale-recurrent network). The following description takes an SRN-based deblurring model, also called the SRN-Deblur network, as an example.
The SRN-Deblur network was first trained to deblur the VENC MRI directly. In most cases the network deblurs well; in some special cases, however, the processing makes the image even more blurred. Experiments show that each deblurring model is applicable only to certain types of blur and not to others. This means the SRN-Deblur network can deblur VENC MRI, but a single SRN-Deblur model is not sufficient to handle all blur types. To address this, the blurred image is preferably pre-classified with a classification model, and multiple SRN-Deblur sub-models are introduced to deblur the different types of blurred images.
Multiple SRN-Deblur sub-models are used to deblur VENC MRI images with motion blur in different directions. Beforehand, a classification model classifies each image to determine which sub-model should be used. The classification model may be any of various neural network models, such as a convolutional neural network, which integrates feature extraction and learning. In image processing, convolution operations capture the relationships between adjacent pixels, which is why convolutional neural networks (CNNs) are widely used.
CNNs provide end-to-end deep learning models; a trained CNN can extract image features and classify images. The depth of a CNN plays a critical role in image classification, but simply pursuing depth causes the degradation problem. Therefore, the residual network ResNet is preferably used as the classification model to address this problem. The basic block of residual learning uses several parameter layers to learn the residual between the input and the output, rather than directly learning the input-output mapping as in a typical CNN. Experiments show that learning the residual in this way is easier and more effective than learning the mapping directly.
SRN-Deblur is an efficient multi-scale image deblurring network built on two structures: a scale-recurrent structure and an encoder-decoder ResBlock network. Because SRN-Deblur shares network weights across scales, it significantly reduces training difficulty and increases stability. The approach has two major advantages. First, it greatly reduces the number of trainable parameters and speeds up training. Second, its recurrent module propagates useful information from each scale through the whole network, which benefits image deblurring.
Image deblurring is a computer vision task, and SRN-Deblur adopts an encoder-decoder design; however, the encoder-decoder ResBlock network does not use a plain encoder-decoder but combines it with ResBlocks. According to the experimental results, this structure increases training speed and makes the network more effective at image deblurring, hence the name scale-recurrent network (SRN).
After the classification model and the deblurring sub-models are trained on the constructed dataset, the optimized model parameters (weights, biases, and so on) are obtained. The training samples of the classification model pair each blurred image with its blur-direction type, and the training samples of each deblurring sub-model pair blurred images of one blur direction with their sharp counterparts.
Step S130: correct artifacts of the velocity-encoded magnetic resonance imaging using the trained classification model and deblurring sub-models.
In practical application, the trained classification model identifies the blur-direction type of the velocity-encoded magnetic resonance image to be processed; the corresponding deblurring sub-model is then selected according to the identified type to perform deblurring and obtain the corrected image. The corrected image can further be used to identify intracardiac vortices, calculate vorticity, and so on.
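The two-stage correction described above (classify, then dispatch to a direction-specific sub-model) reduces to a small routing function. `classifier` and `submodels` here are hypothetical stand-ins for the trained ResNet and the SRN-Deblur sub-models:

```python
import numpy as np

def correct_artifacts(image, classifier, submodels):
    """Two-stage artifact correction: stage 1 predicts the blur-direction
    label; stage 2 applies the deblurring sub-model trained for that
    direction. `classifier` maps an image to a label; `submodels` maps
    each label to a callable deblurring model (both are placeholders)."""
    label = classifier(image)       # stage 1: blur-direction type
    return submodels[label](image)  # stage 2: direction-specific deblurring
```

The dictionary lookup makes the one-sub-model-per-direction correspondence explicit: adding a fifth blur class would mean adding one entry, not retraining the rest.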
Vorticity measures the angular velocity of a fluid at a point and can be calculated from the velocity gradient of the fluid. As shown in fig. 4, fluid rotation can be represented using finite elements and flow vectors; the vorticity calculation is based on the rotation of the velocity at a point and is computed numerically from the vectors along a contour around that point in the flow field. The vorticity ω is the component of the angular velocity along the plane normal vector and is obtained from the tangential-velocity line integral around a counterclockwise (CCW) loop containing the target point.
First, according to Stokes' theorem, the circulation Γ is calculated as the line integral over the CCW closed loop C:

$\Gamma = \oint_C \vec{V} \cdot d\vec{l}$
The two-dimensional vorticity is then the circulation per unit area:

$\omega = \lim_{S \to 0} \dfrac{\Gamma}{S} = \dfrac{\partial V_y}{\partial x} - \dfrac{\partial V_x}{\partial y}$
where $V$ denotes the rotational linear velocity, $S$ the area enclosed by the closed curve, and $V_x$ and $V_y$ the components of the linear velocity in the x- and y-directions. The sign of ω gives the direction of rotation: a positive value indicates counterclockwise (CCW) rotation, a negative value indicates clockwise (CW) rotation, and the magnitude indicates the rotational speed.
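The two-dimensional vorticity formula can be evaluated on a discrete velocity field with finite differences; a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def vorticity(vx, vy, dx=1.0, dy=1.0):
    """Two-dimensional vorticity w = dVy/dx - dVx/dy via central finite
    differences. Arrays are indexed [row, col] = [y, x], so the x-axis
    is axis 1 and the y-axis is axis 0."""
    dvy_dx = np.gradient(vy, dx, axis=1)
    dvx_dy = np.gradient(vx, dy, axis=0)
    return dvy_dx - dvx_dy
```

As a sanity check, a rigid rotation with angular velocity Ω has vorticity 2Ω everywhere, and the sign convention matches the text: positive for CCW rotation.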
Entropy can be defined as an indicator of the degree of disorder in a system, so the histogram also provides information about image complexity in the form of an entropy descriptor: the higher the entropy, the more complex and chaotic the image. Entropy is defined mathematically as:

$H = -\sum_i p_i \log_2 p_i$

where $p_i$ is the probability mass function (the normalized histogram).
In the present invention, entropy is used to verify the presence of vortices. In a blurred image, the tendency of blood flow to swirl is not obvious, and the quantified flow direction is affected by motion blur: most vorticity values concentrate around 0, the gradients are smaller, and fewer colors appear, so the image complexity and hence the entropy are low. After deblurring, the flow direction shows a clear vortex pattern and is more complex; the vorticity distribution is more dispersed, with more gradients and more colors.
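The entropy descriptor over a vorticity histogram can be sketched as follows (the bin count is an arbitrary choice, not specified in the text):

```python
import numpy as np

def histogram_entropy(values, bins=32):
    """Shannon entropy H = -sum(p * log2 p) of the value histogram.
    A flat (dispersed) distribution gives high entropy; values piled
    into one bin give zero entropy."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]  # 0 * log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())
```

With 32 bins the entropy is bounded by log2(32) = 5 bits, reached only by a perfectly uniform histogram; a blurred image's near-zero-concentrated vorticity would score well below that.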
For image reconstruction tasks (e.g., super-resolution, deblurring), peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) are typically used as evaluation criteria. In the present invention, the two VENC MRI images for the FH and AP directions depict a single part of the heart, so there is significant correlation between them, and the evaluation index should combine the FH and AP vector images for a comprehensive evaluation. On the other hand, since cardiac VENC MRI contains much useless random noise, only the image region with blood flow information in the heart chamber really matters. The evaluation index should therefore depend only on the part of the image related to cardiac blood flow.
According to the above requirements, an evaluation criterion for cardiac VENC MRI is proposed: the vorticity PSNR, $\omega_{PSNR}$. Medical professionals were invited to help manually segment the important cardiac region, and $\omega_{PSNR}$ considers only the pixels of that region. Using the FH and AP images, the direction and magnitude of the blood flow vector in 3D space are calculated. Ordinary PSNR uses the absolute difference between pixel values; $\omega_{PSNR}$ uses the distance between vectors instead. Let FHG, FHB, APG and APB denote the FH reference image, FH blurred image, AP reference image and AP blurred image, respectively, with $i$ and $j$ indexing image positions. The vector distance at each position is

$D_{ij} = \sqrt{(FHG_{ij} - FHB_{ij})^2 + (APG_{ij} - APB_{ij})^2}$
The vorticity PSNR is then calculated using the following formula:

$\omega_{PSNR} = 10 \log_{10}\!\left(\dfrac{MAX^2}{\frac{1}{N}\sum_{i,j} D_{ij}^2}\right)$

where $D_{ij}$ is the vector distance between the reference and blurred (FH, AP) pairs at position $(i, j)$, MAX represents the maximum vector distance of the useful region, and $N$ is the number of pixels considered.
Then, ω_PSNR is used to measure the deblurring effect on the simulated blurred images. Since real VENC MRI has no paired blurred and sharp images, only two scans of different heart cycles at the same cardiac phase can be compared. Even at the same phase, the scan results of two heart cycles differ, so evaluation criteria based on pixel-by-pixel correspondence (such as ω_PSNR) cannot be applied to real scans.
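A possible implementation of the masked, vector-distance ω_PSNR is sketched below. Taking MAX as the largest reference vector magnitude inside the segmented region is one reading of the description and is an assumption; the function and argument names are illustrative:

```python
import numpy as np

def omega_psnr(fh_ref, ap_ref, fh_test, ap_test, mask):
    """Vector-distance PSNR over the segmented cardiac region only.
    The (FH, AP) pair at each pixel is treated as a 2D velocity vector.
    MAX is taken as the largest reference vector magnitude in the masked
    region (an assumption; the patent's wording is ambiguous)."""
    d2 = (fh_ref - fh_test) ** 2 + (ap_ref - ap_test) ** 2  # D_ij squared
    mse = d2[mask].mean()
    max_d = np.sqrt(fh_ref[mask] ** 2 + ap_ref[mask] ** 2).max()
    return float(10 * np.log10(max_d ** 2 / mse))
```

Pixels outside the boolean `mask` never enter the mean, which is the point of the criterion: background noise cannot inflate or deflate the score.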
To further verify the effect of the present invention, experiments were performed, and the experimental procedure and detailed information are as follows.
First, the SRN-Deblur network is trained to deblur the VENC MRI directly. Experiments found that each trained model is suitable only for certain types of blur and not for others. This means the SRN-Deblur network can indeed deblur VENC MRI, but a single SRN-Deblur model is not sufficient for all blur types. To solve this, ResNet pre-classifies the blurred images, and multiple SRN-Deblur sub-models are introduced to deblur the different types of blurred images.
Training the deblurring sub-models includes the following steps:
collect 40% of the images by stratified sampling and train ResNet, then classify the remaining 60% of the images with the trained ResNet model;
from the classified 60%, take a stratified sample amounting to 40% of the images for training the multiple SRN-Deblur sub-models;
input the remaining 20% of the images into the trained ResNet for classification, then feed each classification result into the corresponding SRN-Deblur model for deblurring.
When training ResNet, its classification accuracy was found to exceed 99% and even reach 100%. On the other hand, since the dataset does not contain many images, the procedure was improved as follows:
split all images into a training set (80%) and a test set (20%) by stratified sampling;
divide the training set into four sub-training sets according to the image labels, each sub-training set being used to train one SRN-Deblur sub-model;
during testing, classify the test cases with the trained ResNet model and, according to the classification result, deblur each with the corresponding SRN-Deblur sub-model.
As shown in fig. 5, a blur-direction classification model is first trained to determine the blur direction of an image. The training images required by the deblurring model are then classified by blur direction, and the classified images are input into the corresponding deblurring sub-models for training. Compared with the prior art, this architecture treats the blur direction as a sub-problem for the first time.
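At inference time the classify-then-dispatch architecture reduces to routing each image to the sub-model matching its predicted blur direction. A minimal sketch, where the callables stand in for the trained ResNet and SRN-Deblur sub-models:

```python
def deblur_with_routing(image, classifier, submodels):
    """Classify the blur direction, then apply the matching deblurring sub-model.

    `classifier` maps an image to a direction class index
    (e.g. 0: 0°, 1: 45°, 2: 90°, 3: 135°, per the labels in claim 2);
    `submodels` maps each class index to a per-direction deblurring callable.
    """
    direction = classifier(image)
    return submodels[direction](image)
```

Because each sub-model only ever sees its own blur direction, the routing layer is what lets four small models replace one model that would otherwise have to cover all directions.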
In the experiments, a classification model was built based on ResNet. As shown in fig. 6, the model contains 8 residual blocks, each with two convolutional layers, two batch normalization layers, and one ReLU layer. The model begins with a convolutional layer, a batch normalization layer, and a ReLU layer. The input size is 64×64, and the convolution kernel size, feature-map padding, and stride are 3×3, 1, and 1, respectively. Next come 8 residual blocks with different parameters; the last residual block outputs a 512×1×16 tensor. At the end of the model, an average pooling layer and a fully connected layer output the classification result. The parameters of the 8 residual blocks are listed in Table 1.
Table 1: parameter values of residual block
(Table 1 appears only as an image in the original document.)
Table 1 lists the 8 sequentially connected ResNet blocks, each with two 2-D convolutional layers; for every block the input and output sizes (labeled S.) and channel counts (labeled C.) are given, together with the corresponding convolution parameters.
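Because Table 1 survives only as an image, the exact per-block channel counts and strides are not recoverable here; the PyTorch sketch below therefore assumes a conventional 64→512 channel progression across the 8 blocks while matching the stated structure — a stem of convolution + batch normalization + ReLU, eight two-convolution residual blocks, then average pooling and a fully connected layer over the 4 direction classes:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two conv-BN layers with an identity (or 1x1-projected) skip connection."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.skip = (nn.Identity() if stride == 1 and in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.skip(x))

class DirectionClassifier(nn.Module):
    """Stem conv + 8 residual blocks + average pooling + fully connected head."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1, bias=False),
                                  nn.BatchNorm2d(64), nn.ReLU(inplace=True))
        chans = [64, 64, 128, 128, 256, 256, 512, 512, 512]   # assumed progression
        strides = [1, 2, 1, 2, 1, 2, 1, 1]                    # assumed downsampling
        self.blocks = nn.Sequential(
            *[ResidualBlock(chans[i], chans[i + 1], strides[i]) for i in range(8)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(512, num_classes))

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))
```

For a batch of 64×64 single-channel inputs the model emits one logit per blur-direction class, matching the averaging-pool-then-fully-connected head described in the text.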
In training the classification model, the cross-entropy function is taken as the loss function and the Epoch is set to 50. For ResNet, the loss and accuracy changes during training are shown in FIG. 7, where the upper curve represents training accuracy and the lower curve represents the loss value. As can be seen from fig. 7, after 2500 cycles the loss drops to a very low level and the accuracy approaches 1.0. In other words, with short training (about 1500 s to complete the training process), the accuracy is close to 100% and the loss value is close to 0. This lays a solid foundation for the subsequent training of the deblurring model and for deblurring images with different blur directions. Because the blur types are separated into categories before deblurring, the subsequent deblurring model need not handle that task, which makes the function of each SRN-Deblur model more specific. A more specific function also means the task requires less model capacity and trains better.
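The classifier-training setup described above — cross-entropy loss with per-epoch loss and accuracy tracking — can be sketched as below. The optimizer and learning rate here are placeholders, since the text fixes only the loss function and epoch count for this model:

```python
import torch
import torch.nn as nn

def train_classifier(model, loader, epochs=50, lr=1e-3, device="cpu"):
    """Cross-entropy training loop; returns (loss, accuracy) per epoch."""
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    history = []
    for _ in range(epochs):
        total, correct, running = 0, 0, 0.0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            logits = model(x)
            loss = loss_fn(logits, y)
            loss.backward()
            opt.step()
            running += loss.item() * len(y)
            correct += (logits.argmax(1) == y).sum().item()
            total += len(y)
        history.append((running / total, correct / total))
    return history
```

The returned history is what a curve like FIG. 7 would be plotted from: loss falling toward 0 while accuracy climbs toward 1.0.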
In constructing the training dataset, a blurred image is created by translating the segmented image, averaging the pixel values, and cutting the resulting "black edges". The black-edge width is not fixed across images; it equals the translation step. Moreover, because of the different translation directions and the different positions at which the black edges are cut, the images of the training dataset come in many different sizes.
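The shift-and-average construction of a directionally blurred image, including the translation-step-wide black edge and its removal, might look like this in numpy — a sketch of the described procedure, not the patent's exact implementation:

```python
import numpy as np

def synthesize_motion_blur(img, dx, dy, steps):
    """Average `steps` copies of `img`, each shifted one more (dy, dx) pixel.

    Zero padding at the leading edge produces the "black edge" described in
    the text; cropping it afterwards yields output images of varying size.
    """
    h, w = img.shape
    acc = np.zeros_like(img, dtype=np.float64)
    for k in range(steps):
        shifted = np.zeros_like(acc)
        sy, sx = k * dy, k * dx
        shifted[max(sy, 0):h + min(sy, 0), max(sx, 0):w + min(sx, 0)] = \
            img[max(-sy, 0):h - max(sy, 0), max(-sx, 0):w - max(sx, 0)]
        acc += shifted
    blurred = acc / steps
    # Crop the black border of width (steps - 1) * |shift| on the leading side.
    top = max((steps - 1) * dy, 0)
    left = max((steps - 1) * dx, 0)
    bottom = h + min((steps - 1) * dy, 0)
    right = w + min((steps - 1) * dx, 0)
    return blurred[top:bottom, left:right]
```

Choosing (dx, dy) from {(1,0), (1,1), (0,1), (−1,1)} would produce the 0°, 45°, 90°, and 135° blur directions used as class labels; the output size depends on both the direction and the number of steps, matching the varied image sizes noted above.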
In the experiments, to train the deblurring model, Adam is used with β₁ = 0.9, β₂ = 0.999, and ε = 10⁻⁸. The learning rate decays exponentially from an initial 1×10⁻⁴ down to 1×10⁻⁶, with the Epoch equal to 4000 and the power equal to 0.3. Before the blurred images are input into the neural network for training, they are randomly cropped to 128×128. Network parameters are initialized with the Glorot method, with the same settings in all experiments.
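One reading of these settings — Adam with the stated betas and epsilon, a learning rate decaying from 1×10⁻⁴ to 1×10⁻⁶ over 4000 epochs with power 0.3 (interpreted here as a polynomial-decay schedule, an assumption), random 128×128 crops, and Glorot initialization — is sketched below:

```python
import torch

def decayed_lr(step, total=4000, start=1e-4, end=1e-6, power=0.3):
    """Polynomial decay from `start` to `end` over `total` steps (assumed form)."""
    frac = min(step, total) / total
    return end + (start - end) * (1.0 - frac) ** power

def make_optimizer(model):
    """Adam with the beta/epsilon values stated in the text."""
    return torch.optim.Adam(model.parameters(), lr=decayed_lr(0),
                            betas=(0.9, 0.999), eps=1e-8)

def random_crop(img: torch.Tensor, size: int = 128):
    """Random square crop, as applied to blurred images before training."""
    _, h, w = img.shape
    top = torch.randint(0, h - size + 1, (1,)).item()
    left = torch.randint(0, w - size + 1, (1,)).item()
    return img[:, top:top + size, left:left + size]

def glorot_init(model):
    """Glorot (Xavier) uniform initialization of conv and linear weights."""
    for m in model.modules():
        if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear)):
            torch.nn.init.xavier_uniform_(m.weight)
```

The per-epoch learning rate would be pushed into the optimizer's parameter groups before each epoch; the schedule starts exactly at 1×10⁻⁴ and ends exactly at 1×10⁻⁶.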
See table 2 for the equipment used in the experiment.
Table 2: platform configuration
(Table 2 appears only as an image in the original document.)
On the same hardware, the training of SRN-Deblur takes about three hours. The time cost is low because there are few training data and the training images are small.
As shown in FIG. 7, the ResNet validation accuracy rises rapidly from 0.25 (the accuracy of random guessing) to over 0.9 during the first 800 cycles. It then continues to rise steadily for more than 1000 cycles until it finally exceeds 0.99. The loss function value during ResNet training likewise stabilizes within a short time.
Further, blurred images were used to test the deblurring effect of the different methods. Fig. 8 shows the deblurring results of the two methods. As can be seen from FIG. 8, the difference between the multiple SRN-Deblur sub-models of the present invention and a single SRN-Deblur is evident: the image deblurred by the present invention is clearer and more similar to the original image. Naked-eye observation alone, however, cannot fully settle the question; ω_PSNR can be used as a mathematical evaluation that better accounts for pixel-level differences.
Table 3 lists the ω_PSNR of the blurred images, the SRN-Deblur results, and the deblurred images of the present invention. Overall, the images deblurred with the present invention have the highest ω_PSNR, and blur in all directions of the image can be removed.
Table 3: omega PSNR Comparison results
(Table 3 appears only as an image in the original document.)
To verify the performance of the model on actual images, it was tested using the results of the two scans of the same subject described above. FIG. 9 compares a low-quality image with the enhanced image. A sharp image and a blurred image from the same phase of the two cardiac cycles are selected, because the blood flow in the two sets of images is consistent. The blood flow is used to draw the vortex arrow chart: in the blurred image the flow vectors are mixed together, whereas in the deblurred image the flow is quite distinct. Moreover, the arrows of the deblurred image are parallel to the tangent of the atrial edge, while the arrows of the blurred image form a large angle with it, which shows that the method of the present invention deblurs actual imaging scans well. The average vorticity values are: sharp image −49.99 s⁻¹, blurred image −58.25 s⁻¹, deblurred image 51.79 s⁻¹.
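The vortex arrow charts and average vorticity values rely on an in-plane vorticity map computed from the two encoded velocity components. The patent does not restate its formula in this excerpt; the standard discrete curl ω = ∂v_y/∂x − ∂v_x/∂y is assumed:

```python
import numpy as np

def vorticity(vx, vy, dx=1.0, dy=1.0):
    """In-plane vorticity ω = ∂vy/∂x − ∂vx/∂y from two velocity components.

    `vx`, `vy` are 2-D arrays of the two encoded velocity maps (e.g. AP and FH);
    with velocities in mm/s and dx, dy in mm, the result carries units of s⁻¹.
    """
    dvy_dx = np.gradient(vy, dx, axis=1)   # x varies along columns
    dvx_dy = np.gradient(vx, dy, axis=0)   # y varies along rows
    return dvy_dx - dvx_dy
```

A rigid rotation at angular rate Ω gives a uniform vorticity of 2Ω, which is a convenient sanity check for the sign convention; clockwise flow then yields negative values, consistent with the negative means reported above.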
As shown in fig. 9, vortices are difficult to recognize in the blurred image but easy to recognize in the sharp image. The vorticity quantification maps and vorticity distribution histograms of the blurred and sharp images are shown in fig. 10, in which the horizontal and vertical axes represent the vorticity magnitude intervals and the vorticity count in each interval, respectively. e is the entropy of the image.
In fig. 10, in the AP and FH directions, the blurred image has mean vorticity ω = 0.2860, standard deviation σ = 1.4681, and entropy e = 23.6731, while the normal image has ω = −0.5519, σ = 3.3258, and e = 26.7137. The standard deviation of the blurred image is smaller, so its vorticity distribution is more concentrated, its entropy is lower, and the image complexity is lower. As can be seen from fig. 10, most of its vorticity is concentrated around 0, indicating regions without vortices. In contrast, the normal image has a more dispersed vorticity distribution, higher entropy, a more complex appearance, and two regions of different polarity. Since the mean vorticity is negative, more vorticity falls to the left of 0, so clockwise vorticity dominates in fig. 10.
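The mean, standard deviation, and entropy statistics quoted for fig. 10 can be reproduced from a vorticity map along the lines below. The histogram-based Shannon entropy is an assumed definition, since the patent's exact entropy formula is not restated here; under it, a more dispersed distribution yields a higher value, consistent with the text's observation:

```python
import numpy as np

def vorticity_stats(w, bins=64):
    """Mean, standard deviation, and histogram (Shannon) entropy of a vorticity map."""
    hist, _ = np.histogram(w.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    entropy = -np.sum(p * np.log2(p))
    return float(w.mean()), float(w.std()), float(entropy)
```

A vorticity map concentrated near 0 (as in the blurred images) gives low σ and low entropy; a map with two opposite-polarity vortex regions (as in the sharp images) gives higher values of both.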
Furthermore, the vorticity distributions of the heart beat at five different moments were measured. FIG. 11 compares the blurred and deblurred images, showing the vorticity distribution histograms and the quantified vorticity for the 5 time frames. In the histograms, the vorticity distributions of the blurred and deblurred images are plotted separately; σ_b and σ_r denote the standard deviations of the vorticity distributions of the blurred and deblurred images, and h_b and h_r their entropy values. The deblurred images accurately identify the vortex locations, and their vorticity distributions are more dispersed, span more of the color scale, and have higher entropy; the blurred images cannot identify the vortex locations at all, with vorticity distributions close to 0, lower entropy, and less of the color scale represented. This shows that the measurement method used in the present invention is very effective.
In summary, motion-blurred images have clearly different blur directions. Pre-classifying them by blur direction before deblurring simplifies the target problem of the deblurring model: each deblurring pass only handles blurred images sharing one blur direction, which shrinks the solution space and makes the model easier to fit. In addition, introducing multiple SRN-Deblur sub-models allows images of different sharpness to be corrected even with little training data and varied blur conditions, enhancing model robustness.
In summary, the present invention provides a new model that combines ResNet with multiple SRN-Deblur models. Blur classification is performed with the ResNet model, with classification accuracy exceeding 99%. According to the classification results, four SRN-Deblur sub-models are trained to deblur the images, and the four trained sub-models output high-quality deblurred images. Finally, the deblurred results of the model are compared with those of SRN-Deblur alone. The results show that the invention is better suited to complex situations: different sub-models handle different image types, and images that a single SRN-Deblur cannot process can be handled. In VENC MRI, the difference between the deblurred image and the actual image is significantly smaller than that between the blurred image and the actual image, and the experimental results indicate that the vorticity of the deblurred image is closer to that of the sharp image than the vorticity of the blurred image is. Moreover, the model of the invention outperforms the prior art in both visual inspection and mathematical evaluation; with blur removed, the resulting sharp VENC MRI can help radiologists and clinicians make better clinical judgments, improve diagnostic accuracy, and be used in wider fields.
The present invention may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical coding device such as a punch card or an in-groove raised structure having instructions stored thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., an optical pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or Python, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of computer readable program instructions, such that the circuitry can execute the computer readable program instructions.
Various aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
The foregoing description of embodiments of the invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

1. A method of motion artifact correction for velocity encoded magnetic resonance imaging comprising the steps of:
determining a blur direction type of the velocity encoded magnetic resonance image using the trained classification model;
and selecting a corresponding deblurring sub-model according to the blur direction type to perform deblurring processing and obtain a corrected image, wherein the deblurring sub-models are obtained by training, the number of deblurring sub-models is the same as the number of blur direction types, and each deblurring sub-model corresponds to one blur direction type.
2. The method of claim 1, wherein the classification model is trained according to the steps of:
acquiring velocity encoded magnetic resonance images of cardiac tomography, wherein three images are acquired for each slice: a normal image, an anterior-posterior (AP) direction image, and a foot-head (FH) direction image;
blurring processing is carried out on the acquired image, and the blurring direction types of the image are marked, wherein the blurring direction types comprise 0 degree, 45 degrees, 90 degrees and 135 degrees;
constructing a training data set, wherein the training data set reflects the corresponding relation between the blurred image and the blurred direction type;
and training the classification model by using the training data set to obtain optimization parameters by taking the set loss function minimization as an optimization target.
3. The method of claim 1, wherein the blurring process comprises:
generating, for each acquired image, a blurred image by means of image translation, image copying, image superposition and average-pixel-value calculation, wherein the translation direction, the translation step length and the number of superpositions are randomly generated.
4. The method of claim 1, wherein the classification model is a residual network comprising, in order, a first convolution layer, a batch normalization layer, an activation layer, a plurality of residual blocks, a second activation layer, an averaging pooling layer, and a full connection layer.
5. The method of claim 1, wherein the imaging parameters of the velocity encoded magnetic resonance image are set to: the repetition time TR is 47.1 ms, the echo time TE is 1.6 ms, the field of view FOV is 298×340 mm², the pixel matrix is 134×256, the in-plane resolution is 1.54 mm/pixel, and the through-plane resolution, determined by the slice spacing, is 6 mm.
6. The method of claim 1, wherein the deblurring sub-model is a scale-recurrent network with encoder-decoder residual blocks.
7. The method as recited in claim 1, further comprising: identifying intra-cardiac vortices using the corrected image and calculating the vorticity.
8. The method of claim 1, wherein the loss function is a cross entropy function.
9. A computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor realizes the steps of the method according to any of claims 1 to 8.
10. A computer device comprising a memory and a processor, on which memory a computer program is stored which can be run on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 8 when the computer program is executed.
CN202211475737.XA 2022-11-23 2022-11-23 Motion artifact correction method for velocity coding magnetic resonance imaging Pending CN116385572A (en)

Publication: CN116385572A, published 2023-07-04.


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination