CN112419378A - Medical image registration method, electronic device, and storage medium - Google Patents


Info

Publication number
CN112419378A
Authority
CN
China
Prior art keywords: image, enhanced, sample, target, component
Prior art date
Legal status
Granted
Application number
CN202011313238.1A
Other languages
Chinese (zh)
Other versions
CN112419378B (en)
Inventor
周庆
姜娈
曹晓欢
薛忠
Current Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202011313238.1A
Publication of CN112419378A
Application granted
Publication of CN112419378B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/344 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 5/00 Image enhancement or restoration
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/10096 Dynamic contrast-enhanced magnetic resonance imaging [DCE-MRI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a medical image registration method, an electronic device and a storage medium. The medical image registration method comprises: acquiring an enhanced image to be processed and a corresponding target plain-scan image; inputting the enhanced image to be processed into a medical image enhancement component network model to obtain a target enhancement component; acquiring a target de-enhanced image according to the enhanced image to be processed and the target enhancement component; inputting the target de-enhanced image and the target plain-scan image into an image registration network model to obtain a target deformation field of the target de-enhanced image relative to the target plain-scan image; and performing deformation processing on the target de-enhanced image according to the target deformation field to obtain a registered target registration image. The invention improves the accuracy of image registration, effectively reduces the amount of computation in the registration process, shortens the registration time and greatly improves image registration efficiency.

Description

Medical image registration method, electronic device, and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a medical image registration method, an electronic device, and a storage medium.
Background
Dynamic contrast-enhanced MRI (DCE-MRI) is a magnetic resonance (MR) examination method that reflects microcirculatory blood perfusion in tissue. It records changes in tissue signal intensity through repeated imaging to track how the contrast agent diffuses into the surrounding tissue over time, and is a quantitative MRI technique for studying microvascular leakage in tumors. DCE-MRI provides not only morphological information about lesions but also reflects their physiological changes, offering richer and more accurate information for the diagnosis and efficacy evaluation of primary liver cancer and liver metastases.
Clinically, transcatheter hepatic arterial chemoembolization (TACE) is a minimally invasive interventional technique that is particularly suitable for patients with multifocal liver cancer or tumors that cannot be effectively removed surgically. However, TACE has limitations: incomplete embolization, establishment of collateral circulation within the tumor, blood supply from multiple arteries and similar factors can all lead to tumor survival and recurrence. Moreover, coagulative necrotic foci show high signal of varying degrees on T1-weighted (T1W) images and are easily confused with enhancing tumor tissue after MR enhanced scanning, which reduces the sensitivity of MR to recurrent nodules. In current clinical practice, subtracting the pre- and post-enhancement images can eliminate the influence of the high T1W signal and highlight the enhancing tumor tissue, so subtraction imaging is an effective means of assessing residual viable tumor after TACE. However, during DCE-MRI acquisition, patient movement and liver deformation caused by respiration shift the position of the tumor between the pre- and post-enhancement images, which degrades the quality of the subtraction images and prevents accurate identification of viable tumor regions.
At present, the negative impact of intensity variation on DCE-MRI registration is mainly reduced in the following ways: (1) using similarity metrics that are invariant to image intensity, such as mutual information and normalized gradient, to achieve multi-modal image registration; (2) improving DCE-MRI registration performance by extracting regions of interest and jointly segmenting and registering; (3) using temporal information to estimate the spatially varying enhancement pattern and transforming all images into a common space, thereby reducing the effect of contrast-agent-induced intensity variation; and (4) separating the motion component from the contrast-agent-induced intensity variation with a robust principal component analysis framework or a related weighted sparse representation framework, and extracting the motion-induced deformation component for DCE-MRI registration. However, these registration methods are computationally expensive and time-consuming, and therefore cannot meet higher image registration requirements.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the defects of prior-art image registration methods, namely their large amount of computation and long processing time, which prevent them from meeting higher image registration requirements, and to provide a medical image registration method, an electronic device and a storage medium.
The invention solves the above technical problem through the following technical solutions:
The invention provides a medical image registration method, which comprises the following steps:
acquiring an enhanced image to be processed and a corresponding target plain-scan image;
inputting the enhanced image to be processed into a medical image enhancement component network model to obtain a target enhancement component;
acquiring a target de-enhanced image according to the enhanced image to be processed and the target enhancement component;
inputting the target de-enhanced image and the target plain-scan image into an image registration network model to obtain a target deformation field of the target de-enhanced image relative to the target plain-scan image;
and performing deformation processing on the target de-enhanced image according to the target deformation field to obtain a registered target registration image.
Preferably, the step of obtaining the medical image enhancement component network model comprises:
acquiring a plurality of sample plain-scan images in a training set and the corresponding sample enhanced images;
acquiring a sample enhancement component corresponding to the sample enhanced image based on the sample plain-scan image;
and training to obtain the medical image enhancement component network model according to the different sample enhanced images and the corresponding sample enhancement components.
Preferably, the step of acquiring a sample enhancement component corresponding to the sample enhanced image based on the sample plain-scan image comprises:
registering the sample plain-scan image to the sample enhanced image by adopting a preset registration method to obtain a reference enhanced image;
and subtracting the reference enhanced image from the sample enhanced image to obtain the sample enhancement component corresponding to the sample enhanced image.
Preferably, the step of subtracting the reference enhanced image from the sample enhanced image to obtain the sample enhancement component corresponding to the sample enhanced image comprises:
randomly traversing and selecting a plurality of first enhanced image blocks in the sample enhanced image and a plurality of second enhanced image blocks in the reference enhanced image, wherein each first enhanced image block corresponds to a second enhanced image block at the same position;
subtracting the second enhanced image block at the corresponding position from each first enhanced image block to obtain a corresponding intermediate enhancement component;
and summing the plurality of intermediate enhancement components to obtain the sample enhancement component corresponding to the sample enhanced image.
preferably, the step of training a network model of enhanced components of medical images according to different enhanced images of samples and corresponding enhanced components of samples includes:
and taking a plurality of first enhanced image blocks corresponding to the sample enhanced image as input, taking the corresponding sample enhanced component as output, and obtaining the medical image enhanced component network model by adopting deep convolutional neural network training.
Preferably, the step of obtaining the image registration network model comprises:
acquiring a first enhancement component of each sample enhanced image in the training set by adopting the medical image enhancement component network model;
subtracting the first enhancement component from the sample enhanced image to obtain a sample de-enhanced image corresponding to the sample enhanced image;
and training to obtain the image registration network model based on the sample de-enhanced images and the corresponding sample plain-scan images.
Preferably, the step of training the image registration network model based on the sample de-enhanced images and the corresponding sample plain-scan images comprises:
randomly traversing and selecting a plurality of de-enhanced image blocks in the sample de-enhanced image and a plurality of plain-scan image blocks in the sample plain-scan image, wherein each de-enhanced image block corresponds to a plain-scan image block at the same position;
and inputting the de-enhanced image blocks in the sample de-enhanced image and the plain-scan image blocks in the sample plain-scan image into a deep convolutional neural network for training to obtain the image registration network model.
Preferably, the enhanced image to be processed comprises a DCE-MRI enhanced image or a CT (computed tomography) enhanced image corresponding to different organs; or,
the target enhancement component is used for representing the amount of interference caused by the contrast agent in the arteries and veins of different organs; or,
the target registration image is used for reflecting the lesion condition of primary liver cancer and/or liver metastasis of a patient.
The present invention also provides a medical image registration system, comprising:
a target plain-scan image acquisition module, used for acquiring an enhanced image to be processed and a corresponding target plain-scan image; a target enhancement component acquisition module, used for inputting the enhanced image to be processed into a medical image enhancement component network model to obtain a target enhancement component; a target de-enhanced image acquisition module, used for acquiring a target de-enhanced image according to the enhanced image to be processed and the target enhancement component; a target deformation field acquisition module, used for inputting the target de-enhanced image and the target plain-scan image into an image registration network model to obtain a target deformation field of the target de-enhanced image relative to the target plain-scan image; and a target registration image acquisition module, used for performing deformation processing on the target de-enhanced image according to the target deformation field to obtain a registered target registration image.
Preferably, the medical image registration system further comprises:
a sample enhanced image acquisition module, used for acquiring a plurality of sample plain-scan images in the training set and the corresponding sample enhanced images; a sample enhancement component acquisition module, used for acquiring a sample enhancement component corresponding to each sample enhanced image based on the sample plain-scan image; and a component network model acquisition module, used for training to obtain the medical image enhancement component network model according to the different sample enhanced images and the corresponding sample enhancement components.
Preferably, the sample enhancement component acquisition module includes:
a reference enhanced image acquisition unit, used for registering the sample plain-scan image to the sample enhanced image by adopting a preset registration method to obtain a reference enhanced image; and a sample enhancement component acquisition unit, used for subtracting the reference enhanced image from the sample enhanced image to obtain the sample enhancement component corresponding to the sample enhanced image.
Preferably, the sample enhancement component acquisition unit includes:
a first image block selection subunit, used for randomly traversing and selecting a plurality of first enhanced image blocks in the sample enhanced image and a plurality of second enhanced image blocks in the reference enhanced image, wherein each first enhanced image block corresponds to a second enhanced image block at the same position; an intermediate enhancement component acquisition subunit, used for subtracting the second enhanced image block at the corresponding position from each first enhanced image block to obtain a corresponding intermediate enhancement component; and a sample enhancement component acquisition subunit, used for summing the plurality of intermediate enhancement components to obtain the sample enhancement component corresponding to the sample enhanced image.
Preferably, the component network model acquisition module is used for taking the plurality of first enhanced image blocks corresponding to each sample enhanced image as input and the corresponding sample enhancement components as output, and training a deep convolutional neural network to obtain the medical image enhancement component network model.
Preferably, the medical image registration system further comprises:
a first enhancement component acquisition module, used for acquiring a first enhancement component of each sample enhanced image in the training set by using the medical image enhancement component network model; a sample de-enhanced image acquisition module, used for subtracting the first enhancement component from the sample enhanced image to obtain a sample de-enhanced image corresponding to the sample enhanced image; and a registration network model acquisition module, used for training to obtain the image registration network model based on the sample de-enhanced images and the corresponding sample plain-scan images.
Preferably, the registration network model acquisition module includes:
a second image block selection unit, used for randomly traversing and selecting a plurality of de-enhanced image blocks in the sample de-enhanced image and a plurality of plain-scan image blocks in the sample plain-scan image, wherein each de-enhanced image block corresponds to a plain-scan image block at the same position; and a registration network model acquisition unit, used for inputting the de-enhanced image blocks in the sample de-enhanced image and the plain-scan image blocks in the sample plain-scan image into a deep convolutional neural network for training to obtain the image registration network model.
Preferably, the enhanced image to be processed comprises a DCE-MRI enhanced image or a CT enhanced image corresponding to different organs; or,
the target enhancement component is used for representing the amount of interference caused by the contrast agent in the arteries and veins of different organs; or,
the target registration image is used for reflecting the lesion condition of primary liver cancer and/or liver metastasis of a patient.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the medical image registration method described above when executing the computer program.
The invention also provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the medical image registration method described above.
On the basis of common knowledge in the field, the above preferred conditions can be combined arbitrarily to obtain preferred embodiments of the invention.
The positive effects of the invention are as follows: a medical image enhancement component network model is established through a deep convolutional neural network to quantitatively acquire the enhancement component in an enhanced image, which ensures the accuracy of the acquired enhancement component and, in turn, the accuracy of the subsequent image registration result; a registration network model is established based on the de-enhanced images and plain-scan images to obtain the deformation field of each de-enhanced image relative to the plain-scan image, and the de-enhanced image is registered according to the deformation field to obtain the registered target registration image. This improves the accuracy with which the deformation field is determined, ensures the accuracy of image registration, effectively reduces the amount of computation in the registration process, shortens the registration time and greatly improves image registration efficiency.
Drawings
Fig. 1 is a flowchart of a medical image registration method according to embodiment 1 of the present invention.
Fig. 2 is a schematic diagram of registration of DCE-MRI enhanced images of liver in embodiment 1 of the present invention.
Fig. 3 is a first flowchart of acquiring a medical image enhancement component network model according to embodiment 2 of the present invention.
Fig. 4 is a second flowchart of acquiring a medical image enhancement component network model according to embodiment 2 of the present invention.
Fig. 5 is a first flowchart of acquiring an image registration network model according to embodiment 2 of the present invention.
Fig. 6 is a second flowchart of acquiring an image registration network model according to embodiment 2 of the present invention.
Fig. 7 is a block diagram of a medical image registration system according to embodiment 3 of the present invention.
Fig. 8 is a block diagram of a medical image registration system according to embodiment 4 of the present invention.
Fig. 9 is a schematic structural diagram of an electronic device implementing the medical image registration method in embodiment 5 of the present invention.
Detailed Description
The invention is further illustrated by the following examples, which are not intended to limit the scope of the invention.
The medical image may be a DCE-MRI image or a CT image of different organs, for example a liver DCE-MRI image; the registration image obtained with the medical image registration approach provided by the invention can then be used to determine the lesion condition of primary liver cancer and/or liver metastasis of a patient.
Example 1
As shown in fig. 1, the medical image registration method of the present embodiment includes:
s101, acquiring an enhanced image to be processed and a corresponding target plain scan image;
s102, inputting the enhanced image to be processed into a medical image enhanced component network model to obtain a target enhanced component;
s103, acquiring a target de-enhancement image according to the enhancement image to be processed and the target enhancement component;
specifically, the target enhancement component is subtracted from the enhanced image to be processed to obtain an intermediate de-enhanced image;
and performing image resolution enhancement processing, multiplying preset numerical value processing and the like on the intermediate de-enhanced image to obtain a target de-enhanced image so as to ensure the accuracy of obtaining the target de-enhanced image and further ensure the accuracy of subsequent registration model training and the accuracy of a registration result.
S104, inputting the target de-enhancement image and the target flat-scan image into an image registration network model to obtain a target deformation field of the target de-enhancement image relative to the target flat-scan image;
and S105, carrying out deformation processing on the target de-enhancement image according to the target deformation field to obtain a target registration image after registration.
In this way, the enhancement component corresponding to the enhanced image to be processed is obtained directly through the medical image enhancement component network model, and the image registration network model directly outputs the deformation field of the target de-enhanced image relative to the target plain-scan image. This improves image processing efficiency, keeps the amount of computation small and the processing time short, and also improves the overall image registration efficiency.
Specifically, a spatial transformer network is used to perform deformation processing on the target de-enhanced image according to the target deformation field to obtain the registered target registration image; after the deformation processing, the positions of corresponding pixels in the target plain-scan image and the target de-enhanced image are matched.
When the medical image registration of this embodiment is applied to liver DCE-MRI image registration, the target enhancement component represents the amount of interference caused by the contrast agent in the arteries and veins of different organs, and the target registration image reflects the lesion condition of primary liver cancer and/or liver metastasis of the patient.
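As a purely illustrative sketch of the spatial transformer step described above (not a definitive implementation of the claimed method), assuming PyTorch and a dense deformation field stored as voxel displacements in (z, y, x) channel order, the warping could be written along the following lines:

```python
import torch
import torch.nn.functional as F

def warp_with_field(moving, field):
    """Warp a volume with a dense deformation field.

    moving: (B, 1, D, H, W) image tensor (e.g. the target de-enhanced image).
    field:  (B, 3, D, H, W) displacement field in voxels, channels in (z, y, x) order.
    """
    B, _, D, H, W = moving.shape
    # identity sampling grid in voxel coordinates
    zz, yy, xx = torch.meshgrid(
        torch.arange(D, dtype=moving.dtype, device=moving.device),
        torch.arange(H, dtype=moving.dtype, device=moving.device),
        torch.arange(W, dtype=moving.dtype, device=moving.device),
        indexing="ij",
    )
    grid = torch.stack([zz, yy, xx], dim=0).unsqueeze(0)   # (1, 3, D, H, W)
    new_locs = grid + field                                # displaced voxel coordinates
    # normalize to [-1, 1] and reorder to (x, y, z), as grid_sample expects
    sizes = torch.tensor([D, H, W], dtype=moving.dtype, device=moving.device).view(1, 3, 1, 1, 1)
    new_locs = 2.0 * new_locs / (sizes - 1) - 1.0
    new_locs = new_locs.permute(0, 2, 3, 4, 1)[..., [2, 1, 0]]  # (B, D, H, W, 3)
    return F.grid_sample(moving, new_locs, mode="bilinear",
                         padding_mode="border", align_corners=True)
```

The displacement convention and the trilinear interpolation mode are assumptions; they mirror the common VoxelMorph-style spatial transformer rather than any parameterization fixed by this disclosure.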
As shown in fig. 2, the registration process for the liver DCE-MRI enhanced image is as follows:
the DCE-MRI enhanced image (a) is input into the medical image enhancement component network model (b) to obtain the corresponding enhancement component, from which the de-enhanced image (c) is calculated; the de-enhanced image (c) and the corresponding plain-scan image (d) are input into the image registration network model (e) to obtain the deformation field (f) of the de-enhanced image (c) relative to the plain-scan image (d); finally, the deformation field (f) is applied to the de-enhanced image (c) to obtain the registered target registration image (g).
Experiments show that when the de-enhancement/registration cascaded deep learning framework of this embodiment is used to register liver DCE-MRI images, the processing time is greatly reduced compared with conventional medical image registration methods: the conventional method takes 321 s, whereas the registration method of this embodiment takes only about 24 s. The image registration time is thus effectively shortened, and higher image registration requirements can be met.
In this embodiment, a medical image enhancement component network model is established to quantitatively acquire the enhancement component in the enhanced image, which ensures the accuracy of the acquired enhancement component and, in turn, the accuracy of the subsequent image registration result; a registration network model is established based on the de-enhanced images and plain-scan images to obtain the deformation field of each de-enhanced image relative to the plain-scan image, and the de-enhanced image is registered according to the deformation field to obtain the registered target registration image. This improves the accuracy with which the deformation field is determined, ensures the accuracy of image registration, effectively reduces the amount of computation in the registration process, shortens the registration time and greatly improves image registration efficiency.
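The overall inference cascade of steps S101 to S105 (and of fig. 2) could be expressed, as a minimal sketch assuming PyTorch and two already-trained networks, roughly as follows; `warp_with_field` is the hypothetical spatial-transformer helper sketched above.

```python
import torch

def register_enhanced_image(enhanced, plain_scan, component_net, registration_net, warp_with_field):
    """enhanced, plain_scan: (1, 1, D, H, W) tensors; returns the registered image."""
    with torch.no_grad():
        component = component_net(enhanced)                # S102: target enhancement component
        de_enhanced = enhanced - component                 # S103: target de-enhanced image
        field = registration_net(                          # S104: target deformation field
            torch.cat([de_enhanced, plain_scan], dim=1))
        registered = warp_with_field(de_enhanced, field)   # S105: deformation processing
    return registered
```

Whole-volume inputs are used here for brevity; the embodiments below actually process the images patch-wise and include resolution and intensity pre- and post-processing.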
Example 2
The medical image registration method of this embodiment is a further improvement of embodiment 1, specifically:
as shown in fig. 3, the step of acquiring the medical image enhancement component network model in the medical image registration method of this embodiment includes:
S201, acquiring a plurality of sample plain-scan images in a training set and the corresponding sample enhanced images;
S202, acquiring a sample enhancement component corresponding to each sample enhanced image based on the sample plain-scan image;
S203, training to obtain the medical image enhancement component network model according to the different sample enhanced images and the corresponding sample enhancement components.
For the groups of sample plain-scan images and sample enhanced images in the training set, the enhancement component corresponding to each sample enhanced image is calculated for model training. The resulting medical image enhancement component network model is used to obtain the enhancement component contained in an input enhanced image, so that poor registration results caused by contrast-agent-induced intensity changes in the enhanced image are avoided; this improves the accuracy with which the enhancement component of an enhanced image is determined, and ensures both the speed of the subsequent registration process and the accuracy and validity of the registration result.
Specifically, as shown in fig. 4, step S202 includes:
S2021, registering the sample plain-scan image to the sample enhanced image by adopting a preset registration method to obtain a reference enhanced image;
S2022, subtracting the reference enhanced image from the sample enhanced image to obtain the sample enhancement component corresponding to the sample enhanced image.
An existing medical image registration method based on grey-level information, a transform-domain method, a feature-based method or the like is used to register the sample plain-scan image to the sample enhanced image; the difference between each sample enhanced image and the registered plain-scan image is then calculated to obtain the enhancement component of the sample enhanced image relative to the reference enhanced image, which ensures the accuracy of the subsequent model training.
Step S2022 specifically includes:
S20221, preprocessing the sample enhanced image and the registered sample plain-scan image respectively;
the preprocessing includes resampling and normalizing the sample enhanced image and the registered sample plain-scan image respectively;
S20222, randomly traversing and selecting a plurality of first enhanced image blocks in the preprocessed sample enhanced image and a plurality of second enhanced image blocks in the preprocessed reference enhanced image, wherein each first enhanced image block corresponds to a second enhanced image block at the same position;
S20223, subtracting the second enhanced image block at the corresponding position from each first enhanced image block to obtain a corresponding intermediate enhancement component;
S20224, summing the plurality of intermediate enhancement components to obtain the sample enhancement component corresponding to the sample enhanced image.
Each group consisting of a sample enhanced image and the registered sample plain-scan image is resampled (i.e. brought to a specified resolution) and normalized as a whole, image blocks of a set size are randomly selected from the complete images by traversal, and the difference between each image block in the sample enhanced image and the corresponding image block in the registered plain-scan image is calculated to finally obtain the sample enhancement component corresponding to the sample enhanced image. This improves the precision and speed of model training while better accommodating constraints such as GPU memory.
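As an illustrative sketch only, assuming NumPy (the patch size, patch count and the handling of overlaps are assumptions, since the description only states that the intermediate components are summed), the construction of the training labels from a registered image pair could look like this:

```python
import numpy as np

def sample_enhancement_components(enhanced, reference, patch=(112, 112, 112), n_patches=32, rng=None):
    """enhanced: preprocessed sample enhanced image; reference: registered plain-scan image.

    Returns per-patch intermediate enhancement components (the training labels)
    and an assembled whole-image sample enhancement component.
    """
    if rng is None:
        rng = np.random.default_rng()
    d, h, w = enhanced.shape
    pd, ph, pw = patch
    labels = []
    accumulated = np.zeros_like(enhanced, dtype=np.float32)
    counts = np.zeros_like(enhanced, dtype=np.float32)
    for _ in range(n_patches):
        z = rng.integers(0, d - pd + 1)
        y = rng.integers(0, h - ph + 1)
        x = rng.integers(0, w - pw + 1)
        first = enhanced[z:z+pd, y:y+ph, x:x+pw]     # first enhanced image block
        second = reference[z:z+pd, y:y+ph, x:x+pw]   # second enhanced image block, same position
        diff = first - second                        # intermediate enhancement component
        labels.append(((z, y, x), diff))
        accumulated[z:z+pd, y:y+ph, x:x+pw] += diff
        counts[z:z+pd, y:y+ph, x:x+pw] += 1
    # overlapping regions are averaged here so the assembled component stays on the image scale
    assembled = accumulated / np.maximum(counts, 1)
    return labels, assembled
```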
Step S203 specifically includes:
S2031, taking the plurality of first enhanced image blocks corresponding to each sample enhanced image as input and the corresponding sample enhancement components as output, and training a deep convolutional neural network to obtain the medical image enhancement component network model.
Training a deep convolutional neural network on the image blocks and enhancement components corresponding to the sample enhanced images improves the precision and speed of model training and ensures the accuracy with which the enhancement component of an enhanced image is determined.
The deep convolutional neural network includes, but is not limited to, a two-dimensional convolutional neural network and a three-dimensional convolutional neural network. When a three-dimensional convolutional neural network is employed, it includes, but is not limited to, a V-Net network or a U-Net network. Preferably, a V-Net network is used for training, so that a better medical image enhancement component network model can be obtained, further ensuring the accuracy of the enhancement component of the enhanced image.
In addition, the medical image enhancement component network model of this embodiment is trained using an L1-norm loss function, an SSIM (structural similarity) loss function and the like.
The L1-norm loss function and the SSIM loss function are used together to train the medical image enhancement component network model, and the optimal model parameters are obtained when the iterations converge; this ensures the validity of the trained model parameters, so that a more accurate enhancement component can be obtained in the data testing stage using the optimal parameters obtained by training.
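A minimal sketch of such a joint objective, assuming PyTorch (the SSIM window size, the stabilizing constants and the weighting between the two terms are assumptions not fixed by this description):

```python
import torch
import torch.nn.functional as F

def ssim3d(x, y, window=7, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified local SSIM for (B, 1, D, H, W) volumes, using average pooling as the window."""
    pad = window // 2
    mu_x = F.avg_pool3d(x, window, stride=1, padding=pad)
    mu_y = F.avg_pool3d(y, window, stride=1, padding=pad)
    var_x = F.avg_pool3d(x * x, window, stride=1, padding=pad) - mu_x ** 2
    var_y = F.avg_pool3d(y * y, window, stride=1, padding=pad) - mu_y ** 2
    cov_xy = F.avg_pool3d(x * y, window, stride=1, padding=pad) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
               ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return ssim_map.mean()

def component_loss(pred, target, alpha=0.5):
    """L1 term plus an SSIM term (expressed as 1 - SSIM so that lower is better)."""
    return F.l1_loss(pred, target) + alpha * (1.0 - ssim3d(pred, target))
```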
The training and testing process of the medical image enhancement component network model of this embodiment is described below with an example:
(1) Preprocessing of the DCE-MRI images in the training set
Resampling, i.e. bringing the images to a specified resolution. Specifically, the resolution of the de-enhancement network is set to [2 mm, 2 mm, 2 mm] according to the resolution distribution of the training set data. Resampling to the same resolution is used because resolution is an obvious characteristic distinguishing medical images from natural images, and a uniform resolution in the training stage is favourable for training convergence.
Whole-image normalization. Specifically, the resampled three-dimensional sample enhanced image and sample plain-scan image are each divided by the same fixed value λ, so that the pixel values up to the 99.5th percentile of most images in the data set fall within 10. Normalizing the images keeps their grey-level distribution within a specified range, which accelerates the convergence of the neural network model.
Image blocks of a set size are repeatedly selected at random from the whole image; for example, the image block size is taken as [112, 112, 112]. Training on image blocks instead of the whole original image is mainly a concession to limited GPU memory; training on partial images can also be regarded as a form of regularization, which improves the performance of the neural network model.
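A minimal preprocessing sketch under stated assumptions (SimpleITK for resampling and NumPy for the rest, neither of which is prescribed by this description; the fixed divisor `lam` stands in for the constant λ, whose concrete value is not given):

```python
import numpy as np
import SimpleITK as sitk

def resample_to_spacing(image, spacing=(2.0, 2.0, 2.0)):
    """Resample a SimpleITK image to isotropic 2 mm voxels."""
    old_spacing, old_size = image.GetSpacing(), image.GetSize()
    new_size = [int(round(sz * sp / nsp)) for sz, sp, nsp in zip(old_size, old_spacing, spacing)]
    return sitk.Resample(image, new_size, sitk.Transform(), sitk.sitkLinear,
                         image.GetOrigin(), spacing, image.GetDirection(), 0.0,
                         image.GetPixelID())

def normalize(image, lam=1000.0):
    """Divide the whole volume by the same fixed value so typical intensities fall within ~10."""
    return sitk.GetArrayFromImage(image).astype(np.float32) / lam

def random_block(volume, size=(112, 112, 112), rng=None):
    """Randomly select one image block of the set size from the whole volume."""
    if rng is None:
        rng = np.random.default_rng()
    z, y, x = (int(rng.integers(0, s - p + 1)) for s, p in zip(volume.shape, size))
    return volume[z:z+size[0], y:y+size[1], x:x+size[2]]
```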
(2) Training the medical image enhancement component network model
Training is performed in a supervised manner. Specifically, the plain-scan image is first registered to the enhanced-image space using a conventional registration method (for example, a registration method based on image grey levels) to obtain the reference enhanced image; the image blocks corresponding to the sample enhanced image are used as the input of a three-dimensional convolutional neural network (for example, a V-Net network), and the registered plain-scan image is subtracted from the sample enhanced image to obtain the enhancement component corresponding to each image block, which serves as the output for network training; after multiple iterations, the training model file corresponding to the medical image enhancement component network model is saved when the training loss is low.
The image blocks are input into the V-Net network for training with a set batch size, the training loss combines an L1 loss function and a structural similarity index loss function, and the optimal model parameters are obtained when the iterations converge. Accurate enhancement components can then be obtained in the data testing stage using the optimal trained parameters.
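A minimal supervised training step consistent with this description, assuming PyTorch; the V-Net implementation, the optimizer, the learning rate, the epoch count and the checkpoint name are all assumptions, and `patch_loader` is a hypothetical iterable of (enhanced block, enhancement-component label) pairs:

```python
import torch
import torch.nn.functional as F

def train_component_net(vnet, patch_loader, epochs=100, lr=1e-4, device="cuda"):
    """patch_loader yields (enhanced_block, component_block) tensors of shape (B, 1, 112, 112, 112)."""
    vnet = vnet.to(device)
    optimizer = torch.optim.Adam(vnet.parameters(), lr=lr)
    best = float("inf")
    for _ in range(epochs):
        running = 0.0
        for enhanced_block, component_block in patch_loader:
            enhanced_block = enhanced_block.to(device)
            component_block = component_block.to(device)
            pred = vnet(enhanced_block)                    # predicted enhancement component
            # only the L1 term is shown; the SSIM term sketched earlier would be added in practice
            loss = F.l1_loss(pred, component_block)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            running += loss.item()
        if running < best:                                 # keep the model file with the lowest training loss
            best = running
            torch.save(vnet.state_dict(), "component_net.pt")
    return vnet
```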
(3) Data testing stage
The DCE-MRI images in the test set are input into the medical image enhancement component network model for testing.
The different DCE-MRI images in the test set are first preprocessed. Specifically, the enhanced image is down-sampled to the resolution [2 mm, 2 mm, 2 mm] specified in the training stage; the image is normalized in the same way as in the model training stage to obtain the preprocessed image; and a number of image blocks of the set size are randomly selected from the complete image by traversal.
The enhancement component is then calculated. Specifically, the image blocks corresponding to the preprocessed image are input into the medical image enhancement component network model to obtain the enhancement component corresponding to each image block, and these are summed to obtain the enhancement component corresponding to the whole preprocessed image.
The de-enhanced image is then obtained. Specifically, the enhancement component is subtracted from the preprocessed image, the resolution of the original enhanced image is restored, and the result is multiplied by the λ used in the normalization of the training stage to obtain the de-enhanced image. Of course, the above parameters can be set as desired according to the characteristics of the medical images.
Because pre-trained model parameters are loaded, only a forward pass of the network is needed (the main operations are multiplications and additions), which runs fast on a GPU and effectively ensures the efficiency of the image registration processing.
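The test-phase de-enhancement just described (patch-wise inference, assembly of the whole-image component, subtraction, resolution restoration and multiplication by λ) could be sketched as follows, assuming PyTorch; the patch stride and the averaging of overlapping predictions are assumptions:

```python
import torch
import torch.nn.functional as F

def de_enhance(enhanced_2mm, component_net, original_shape, lam=1000.0,
               patch=(112, 112, 112), stride=(56, 56, 56)):
    """enhanced_2mm: (1, 1, D, H, W) tensor already resampled to 2 mm and divided by lam."""
    _, _, D, H, W = enhanced_2mm.shape
    component = torch.zeros_like(enhanced_2mm)
    counts = torch.zeros_like(enhanced_2mm)
    with torch.no_grad():
        for z in range(0, D - patch[0] + 1, stride[0]):
            for y in range(0, H - patch[1] + 1, stride[1]):
                for x in range(0, W - patch[2] + 1, stride[2]):
                    block = enhanced_2mm[..., z:z+patch[0], y:y+patch[1], x:x+patch[2]]
                    component[..., z:z+patch[0], y:y+patch[1], x:x+patch[2]] += component_net(block)
                    counts[..., z:z+patch[0], y:y+patch[1], x:x+patch[2]] += 1
    component = component / counts.clamp(min=1)             # assembled whole-image enhancement component
    de_enhanced = enhanced_2mm - component                   # subtract the enhancement component
    # restore the original resolution and undo the fixed-value normalization
    de_enhanced = F.interpolate(de_enhanced, size=original_shape, mode="trilinear", align_corners=False)
    return de_enhanced * lam
```

Edge regions not covered by an integer number of strides would need one extra offset patch in practice.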
As shown in fig. 5 and fig. 6, the step of acquiring the image registration network model in the medical image registration method of this embodiment includes:
S301, acquiring a first enhancement component of each sample enhanced image in the training set by adopting the medical image enhancement component network model;
S302, subtracting the first enhancement component from the sample enhanced image to obtain a sample de-enhanced image corresponding to the sample enhanced image;
S303, training to obtain the image registration network model based on the sample de-enhanced images and the corresponding sample plain-scan images.
The image registration network model is used to acquire the deformation field of an input de-enhanced image relative to the corresponding plain-scan image.
Compared with existing models trained directly on the enhanced and plain-scan images, this effectively improves the accuracy of the registration network model, which in turn improves the accuracy of the deformation field and ensures the efficiency of image registration.
Specifically, step S303 includes:
S3031, preprocessing the sample de-enhanced image and the sample plain-scan image respectively;
specifically, the sample de-enhanced image and the sample plain-scan image are each subjected to resampling, normalization and histogram matching;
S3032, randomly traversing and selecting a plurality of de-enhanced image blocks in the preprocessed sample de-enhanced image and a plurality of plain-scan image blocks in the preprocessed sample plain-scan image, wherein each de-enhanced image block corresponds to a plain-scan image block at the same position;
S3033, inputting the de-enhanced image blocks in the sample de-enhanced image and the plain-scan image blocks in the sample plain-scan image into a deep convolutional neural network for training to obtain the image registration network model.
The de-enhanced image and the corresponding plain-scan image are subjected in turn to resampling, normalization, histogram matching and random selection of image blocks of a set size from the complete image; preprocessing the images before model training ensures a better training effect and improves the speed of model training.
The deep convolutional neural network includes, but is not limited to, a two-dimensional convolutional neural network and a three-dimensional convolutional neural network. When a three-dimensional convolutional neural network is employed, it includes, but is not limited to, a V-Net network or a U-Net network. Preferably, a U-Net network is used for training, so that a better image registration network model can be obtained, further ensuring the accuracy of the image registration result.
In addition, the image registration network model of this embodiment is trained using an NCC (normalized cross-correlation) loss function and/or a mutual-information (MI) loss function. The regularization terms in the training process include the global mean first-order gradient and/or the mean Jacobian of the deformation field. Using the NCC loss function and the MI loss function, with the global mean first-order gradient, the mean Jacobian of the deformation field and the like as model regularization terms, ensures the smoothness of the deformation field; that is, the model is jointly optimized by the loss function and the regularization term, the optimal model parameters are obtained when the iterations converge, and an accurate deformation field can be obtained in the data testing stage using the optimal trained parameters.
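A minimal sketch of this training objective, assuming PyTorch: a global normalized cross-correlation similarity term and a first-order-gradient smoothness regularizer on the deformation field (a local-window NCC or an additional Jacobian-based term would fit the description equally well; the weighting is an assumption):

```python
import torch

def ncc_loss(warped, fixed, eps=1e-8):
    """Negative global NCC between the warped de-enhanced image and the plain-scan image."""
    w = warped - warped.mean()
    f = fixed - fixed.mean()
    ncc = (w * f).sum() / (torch.sqrt((w * w).sum() * (f * f).sum()) + eps)
    return -ncc

def gradient_regularizer(field):
    """Global mean first-order spatial gradient of a (B, 3, D, H, W) deformation field."""
    dz = (field[:, :, 1:, :, :] - field[:, :, :-1, :, :]).abs().mean()
    dy = (field[:, :, :, 1:, :] - field[:, :, :, :-1, :]).abs().mean()
    dx = (field[:, :, :, :, 1:] - field[:, :, :, :, :-1]).abs().mean()
    return (dz + dy + dx) / 3.0

def registration_loss(warped, fixed, field, reg_weight=1.0):
    """Joint objective: similarity term plus smoothness regularization term."""
    return ncc_loss(warped, fixed) + reg_weight * gradient_regularizer(field)
```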
The training process of the image registration network model of this embodiment is described below with an example:
(1) Preprocessing of the DCE-MRI images in the training set:
Resampling, i.e. bringing the images to a specified resolution. Specifically, the images are resampled to [2 mm, 2 mm, 2 mm] according to the resolution distribution of the training set data. Whole-image normalization: the pixel values of the three-dimensional de-enhanced image and of the plain-scan image are each normalized to the range 0-255; the pixel value at the 99.9th percentile of the image is defined as Max, pixels greater than or equal to Max are mapped to 255, and pixels smaller than Max are first divided by Max and then multiplied by 255. Histogram matching: the de-enhanced image in each pair of training images is matched to the plain-scan image so that the pixel distributions of the de-enhanced image and the plain-scan image are consistent, which accelerates the convergence of the model and improves the accuracy of the registration network model. Image blocks of a set size are randomly selected from the complete images by traversal; specifically, the image block size in the registration network model is taken as [112, 112, 112].
(2) Training the image registration network model:
Training is performed in an unsupervised manner. Specifically, the image blocks corresponding to the de-enhanced image and the plain-scan image are input into a three-dimensional convolutional neural network (for example, a U-Net network) for training; after multiple iterations, the training model file corresponding to the image registration network model is saved when the training loss is low. The image blocks are input into the U-Net network with a batch size of 1, normalized cross-correlation is used as the loss function, and the global mean first-order gradient of the deformation field is used as the model regularization term to ensure the smoothness of the deformation field. The model is jointly optimized by the loss function and the regularization term, the optimal model parameters are obtained when the iterations converge, and an accurate deformation field can be obtained in the data testing stage using the optimal trained parameters.
(3) Data testing stage
The de-enhanced image and the plain-scan image are preprocessed. Specifically, the plain-scan image and the de-enhanced image are down-sampled to the resolution [2 mm, 2 mm, 2 mm] specified in the training stage, and both are normalized in the same way as in the model training stage. The deformation field is then calculated: the preprocessed plain-scan image and de-enhanced image are both input into the image registration network model to obtain the deformation field of the de-enhanced image relative to the plain-scan image. The registered image is then acquired: an STN (spatial transformer network) is used to apply the deformation field to the de-enhanced image to obtain the registered image. Because pre-trained model parameters are loaded and only a forward pass of the network is needed (the main operations are multiplications and additions), the computation runs fast on a GPU, so liver DCE-MRI registration can be achieved accurately and effectively, the image registration time is shortened, and the image registration efficiency is improved. Of course, the above parameters can be set as desired according to the characteristics of the medical images.
In addition, a registration network model established on the basis of a deep convolutional neural network has strong general adaptability: for imaging devices from different manufacturers and MR images acquired with different imaging sequences, only the matching training data needs to be changed.
In this embodiment, a medical image enhancement component network model is established based on a deep convolutional neural network to quantitatively acquire the enhancement component in an enhanced image, which improves the precision and speed of model training, ensures the accuracy of the acquired enhancement component and, in turn, the accuracy of the subsequent image registration result; the de-enhanced image corresponding to each enhanced image is obtained with the enhancement component network model, and an image registration network model is then established with a deep convolutional neural network based on the de-enhanced images and the plain-scan images to obtain the deformation field of each de-enhanced image relative to the plain-scan image, which improves the precision and speed of model training, improves the accuracy with which the deformation field is determined and further ensures the accuracy of the image registration result.
Example 3
As shown in fig. 7, the medical image registration system of this embodiment includes a target plain-scan image acquisition module 1, a target enhancement component acquisition module 2, a target de-enhanced image acquisition module 3, a target deformation field acquisition module 4 and a target registration image acquisition module 5.
The target plain-scan image acquisition module 1 is used to acquire the enhanced image to be processed and the corresponding target plain-scan image;
the target enhancement component acquisition module 2 is used to input the enhanced image to be processed into the medical image enhancement component network model to obtain the target enhancement component;
the target de-enhanced image acquisition module 3 is used to acquire the target de-enhanced image according to the enhanced image to be processed and the target enhancement component;
specifically, the target enhancement component is subtracted from the enhanced image to be processed to obtain an intermediate de-enhanced image;
the target deformation field acquisition module 4 is used to input the target de-enhanced image and the target plain-scan image into the image registration network model to obtain the target deformation field of the target de-enhanced image relative to the target plain-scan image;
the target registration image acquisition module 5 is used to perform deformation processing on the target de-enhanced image according to the target deformation field to obtain the registered target registration image. For the specific implementation process, principle and effect of the medical image registration system of this embodiment, reference may be made to embodiment 1, which is not repeated here.
In this embodiment, a medical image enhancement component network model is established to quantitatively acquire the enhancement component in the enhanced image, which ensures the accuracy of the acquired enhancement component and, in turn, the accuracy of the subsequent image registration result; a registration network model is established based on the de-enhanced images and plain-scan images to obtain the deformation field of each de-enhanced image relative to the plain-scan image, and the de-enhanced image is registered according to the deformation field to obtain the registered target registration image. This improves the accuracy with which the deformation field is determined, ensures the accuracy of image registration, effectively reduces the amount of computation in the registration process, shortens the registration time and greatly improves image registration efficiency.
Example 4
The medical image registration system of this embodiment is a further improvement of embodiment 3, specifically:
as shown in fig. 8, the medical image registration system further comprises a sample enhanced image acquisition module 6, a sample enhancement component acquisition module 7 and a component network model acquisition module 8.
The sample enhanced image acquisition module 6 is used to acquire a plurality of sample plain-scan images in the training set and the corresponding sample enhanced images; the sample enhancement component acquisition module 7 is used to acquire the sample enhancement component corresponding to each sample enhanced image based on the sample plain-scan image; and the component network model acquisition module 8 is used to train the medical image enhancement component network model according to the different sample enhanced images and the corresponding sample enhancement components.
The sample enhancement component acquisition module 7 comprises a reference enhanced image acquisition unit 9 and a sample enhancement component acquisition unit 10.
The reference enhanced image acquisition unit 9 is used to register the sample plain-scan image to the sample enhanced image by adopting a preset registration method to obtain the reference enhanced image;
the sample enhancement component acquisition unit 10 is used to subtract the reference enhanced image from the sample enhanced image to obtain the sample enhancement component corresponding to the sample enhanced image.
Specifically, the sample enhancement component acquisition unit 10 comprises a first preprocessing subunit, a first image block selection subunit, an intermediate enhancement component acquisition subunit and a sample enhancement component acquisition subunit.
The first preprocessing subunit is used to preprocess the sample enhanced image and the registered sample plain-scan image respectively;
the preprocessing includes resampling and normalizing the sample enhanced image and the registered sample plain-scan image respectively;
the first image block selection subunit is used to randomly traverse and select a plurality of first enhanced image blocks in the preprocessed sample enhanced image and a plurality of second enhanced image blocks in the preprocessed reference enhanced image, wherein each first enhanced image block corresponds to a second enhanced image block at the same position;
the intermediate enhancement component acquisition subunit is used to subtract the second enhanced image block at the corresponding position from each first enhanced image block to obtain a corresponding intermediate enhancement component;
the sample enhancement component acquisition subunit is used to sum the plurality of intermediate enhancement components to obtain the sample enhancement component corresponding to the sample enhanced image.
In addition, the component network model acquisition module 8 is used to take the plurality of first enhanced image blocks corresponding to each sample enhanced image as input and the corresponding sample enhancement components as output, and to train a deep convolutional neural network to obtain the medical image enhancement component network model.
In addition, the medical image enhancement component network model of this embodiment is trained using an L1-norm loss function, an SSIM loss function and the like.
The medical image registration system of this embodiment further comprises a first enhancement component acquisition module 11, a sample de-enhanced image acquisition module 12 and a registration network model acquisition module 13.
The first enhancement component acquisition module 11 is used to acquire the first enhancement component of each sample enhanced image in the training set using the medical image enhancement component network model;
the sample de-enhanced image acquisition module 12 is used to subtract the first enhancement component from the sample enhanced image to obtain the sample de-enhanced image corresponding to the sample enhanced image;
the registration network model acquisition module 13 is used to train the image registration network model based on the sample de-enhanced images and the corresponding sample plain-scan images.
The image registration network model is used to acquire the deformation field of an input de-enhanced image relative to the corresponding plain-scan image.
Specifically, the registration network model acquisition module 13 comprises a preprocessing unit 14, a second image block selection unit 15 and a registration network model acquisition unit 16.
The preprocessing unit 14 is used to preprocess the sample de-enhanced image and the sample plain-scan image respectively;
specifically, the sample de-enhanced image and the sample plain-scan image are each subjected to resampling, normalization and histogram matching;
the second image block selection unit 15 is used to randomly traverse and select a plurality of de-enhanced image blocks in the preprocessed sample de-enhanced image and a plurality of plain-scan image blocks in the preprocessed sample plain-scan image, wherein each de-enhanced image block corresponds to a plain-scan image block at the same position;
the registration network model acquisition unit 16 is used to input the de-enhanced image blocks in the sample de-enhanced image and the plain-scan image blocks in the sample plain-scan image into a deep convolutional neural network for training to obtain the image registration network model.
In addition, the image registration network model of this embodiment is trained using an NCC loss function and/or a mutual-information loss function.
The regularization terms in the training process include the global mean first-order gradient and/or the mean Jacobian of the deformation field.
For the training and testing processes of the medical image enhancement component network model and the image registration network model of this embodiment, reference may be made to embodiment 2, which is not repeated here.
In addition, a registration network model established on the basis of a deep convolutional neural network has strong general adaptability: for imaging devices from different manufacturers and MR images acquired with different imaging sequences, only the matching training data needs to be changed.
In this embodiment, a medical image enhancement component network model is established based on a deep convolutional neural network to quantitatively acquire the enhancement component in an enhanced image, which improves the precision and speed of model training, ensures the accuracy of the acquired enhancement component and, in turn, the accuracy of the subsequent image registration result; the de-enhanced image corresponding to each enhanced image is obtained with the enhancement component network model, and an image registration network model is then established with a deep convolutional neural network based on the de-enhanced images and the plain-scan images to obtain the deformation field of each de-enhanced image relative to the plain-scan image, which improves the precision and speed of model training, improves the accuracy with which the deformation field is determined and further ensures the accuracy of the image registration result.
Example 5
Fig. 9 is a schematic structural diagram of an electronic device according to embodiment 5 of the present invention. The electronic device comprises a memory, a processor and a computer program stored on the memory and executable on the processor, and the processor implements the medical image registration method of embodiment 1 or 2 when executing the program. The electronic device 30 shown in fig. 9 is only an example and does not limit the functionality or scope of use of the embodiments of the present invention.
As shown in fig. 9, the electronic device 30 may be embodied as a general-purpose computing device, for example a server device. The components of the electronic device 30 may include, but are not limited to: at least one processor 31, at least one memory 32, and a bus 33 connecting the various system components (including the memory 32 and the processor 31).
The bus 33 includes a data bus, an address bus, and a control bus.
The memory 32 may include volatile memory, such as random access memory (RAM) 321 and/or cache memory 322, and may further include read-only memory (ROM) 323.
The memory 32 may also include a program/utility 325 having a set (at least one) of program modules 324, such program modules 324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these, or some combination thereof, may include an implementation of a network environment.
The processor 31 executes the computer program stored in the memory 32 to perform various functional applications and data processing, such as the medical image registration method of embodiment 1 or 2 of the present invention.
The electronic device 30 may also communicate with one or more external devices 34 (e.g., a keyboard, a pointing device, etc.). Such communication may take place through input/output (I/O) interfaces 35. The electronic device 30 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter 36. As shown in fig. 9, the network adapter 36 communicates with the other modules of the electronic device 30 via the bus 33. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used with the electronic device 30, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, data backup storage systems, and the like.
It should be noted that although several units/modules or sub-units/modules of the electronic device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, according to embodiments of the present invention, the features and functions of two or more of the units/modules described above may be embodied in one unit/module; conversely, the features and functions of one unit/module described above may be further divided among a plurality of units/modules.
Example 6
The present embodiment provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the medical image registration method of embodiment 1 or 2.
More specific examples of the readable storage medium include, but are not limited to: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a possible embodiment, the present invention may also be implemented as a program product comprising program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps of the medical image registration method of embodiment 1 or 2.
The program code for carrying out the present invention may be written in any combination of one or more programming languages, and may be executed entirely on the user device, partly on the user device, as a stand-alone software package, partly on the user device and partly on a remote device, or entirely on the remote device.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that this is by way of example only, and that the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the spirit and scope of the invention, and these changes and modifications are within the scope of the invention.

Claims (10)

1. A medical image registration method, characterized in that the medical image registration method comprises:
acquiring an enhanced image to be processed and a corresponding target plain scan image;
inputting the enhanced image to be processed into a medical image enhanced component network model to obtain a target enhanced component;
acquiring a target de-enhanced image according to the enhanced image to be processed and the target enhanced component;
inputting the target de-enhanced image and the target plain scan image into an image registration network model to obtain a target deformation field of the target de-enhanced image relative to the target plain scan image;
and performing deformation processing on the target de-enhanced image according to the target deformation field to obtain a registered target registration image.
2. The medical image registration method of claim 1, wherein the step of obtaining the medical image enhanced component network model comprises:
acquiring a plurality of sample plain scan images in a training set and the corresponding sample enhanced images;
acquiring a sample enhanced component corresponding to the sample enhanced image based on the sample plain scan image;
and training to obtain the medical image enhanced component network model according to different sample enhanced images and the corresponding sample enhanced components.
3. The medical image registration method of claim 2, wherein the step of acquiring a sample enhanced component corresponding to the sample enhanced image based on the sample plain scan image comprises:
registering the sample plain scan image to the sample enhanced image using a preset registration method to obtain a reference enhanced image;
and subtracting the reference enhanced image from the sample enhanced image to obtain the sample enhanced component corresponding to the sample enhanced image.
4. The medical image registration method of claim 3, wherein the step of subtracting the reference enhanced image from the sample enhanced image to obtain the sample enhanced component corresponding to the sample enhanced image comprises:
randomly traversing and selecting a plurality of first enhanced image blocks in the sample enhanced image and a plurality of second enhanced image blocks in the reference enhanced image;
wherein each first enhanced image block corresponds to one second enhanced image block at the same position;
subtracting the second enhanced image block at the corresponding position from each first enhanced image block to obtain a corresponding intermediate enhanced component;
and summing the plurality of intermediate enhanced components to obtain the sample enhanced component corresponding to the sample enhanced image.
5. The medical image registration method of claim 4, wherein the step of training a medical image enhanced component network model based on different sample enhanced images and the corresponding sample enhanced components comprises:
taking the plurality of first enhanced image blocks corresponding to the sample enhanced image as input and the corresponding sample enhanced component as output, and training a deep convolutional neural network to obtain the medical image enhanced component network model.
6. The medical image registration method of claim 2, wherein the step of obtaining the image registration network model comprises:
acquiring a first enhanced component of the sample enhanced image in the training set using the medical image enhanced component network model;
subtracting the first enhanced component from the sample enhanced image to obtain a sample de-enhanced image corresponding to the sample enhanced image;
and training to obtain the image registration network model based on the sample de-enhanced image and the corresponding sample plain scan image.
7. The medical image registration method of claim 6, wherein the step of training to obtain the image registration network model based on the sample de-enhanced image and the corresponding sample plain scan image comprises:
randomly traversing and selecting a plurality of de-enhanced image blocks in the sample de-enhanced image and a plurality of plain scan image blocks in the sample plain scan image;
wherein each de-enhanced image block corresponds to one plain scan image block at the same position;
and inputting the de-enhanced image blocks in the sample de-enhanced image and the plain scan image blocks in the sample plain scan image into a deep convolutional neural network for training to obtain the image registration network model.
8. The medical image registration method according to any one of claims 1 to 7, wherein the enhanced image to be processed comprises DCE-MRI enhanced images or CT enhanced images corresponding to different organs; or
the target enhanced component is used for representing the amount of interference of the contrast agent in the arteries and veins of different organs; or
the target registration image is used for reflecting the pathological condition of primary liver cancer and/or liver metastasis of a patient.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the medical image registration method of any one of claims 1-8.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the medical image registration method of any one of claims 1 to 8.
CN202011313238.1A 2020-11-20 2020-11-20 Medical image registration method, electronic device and storage medium Active CN112419378B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011313238.1A CN112419378B (en) 2020-11-20 2020-11-20 Medical image registration method, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011313238.1A CN112419378B (en) 2020-11-20 2020-11-20 Medical image registration method, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN112419378A true CN112419378A (en) 2021-02-26
CN112419378B CN112419378B (en) 2024-04-09

Family

ID=74777112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011313238.1A Active CN112419378B (en) 2020-11-20 2020-11-20 Medical image registration method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112419378B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2149121A1 (en) * 2007-05-03 2010-02-03 UCL Business PLC Image registration method
US20140003690A1 (en) * 2012-07-02 2014-01-02 Marco Razeto Motion correction apparatus and method
CN103400376A (en) * 2013-07-19 2013-11-20 南方医科大学 Registering method of breast dynamic contrast-enhanced magnetic resonance image (DCE-MRI) sequence
US20160217576A1 (en) * 2013-10-18 2016-07-28 Koninklijke Philips N.V. Registration of medical images
CN109767460A (en) * 2018-12-27 2019-05-17 上海商汤智能科技有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN111145229A (en) * 2019-12-25 2020-05-12 东软医疗系统股份有限公司 Imaging method, device and scanning system
CN111798410A (en) * 2020-06-01 2020-10-20 深圳市第二人民医院(深圳市转化医学研究院) Cancer cell pathological grading method, device, equipment and medium based on deep learning model

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Balakrishnan, G.; Zhao, A.; Sabuncu, M. R.; Guttag, J. et al.: "VoxelMorph: A Learning Framework for Deformable Medical Image Registration", IEEE Transactions on Medical Imaging, vol. 38, no. 8, pp. 1788-1800, XP011736963, DOI: 10.1109/TMI.2019.2897538 *
余丽玲, 阳维, 卢振泰, 冯前进, 陈武凡: "Joint estimation of the enhancement field time series and the tissue deformation field in breast DCE-MRI" (in Chinese), Acta Electronica Sinica, vol. 42, no. 08, pp. 1509-1514 *
刘月亮: "Research on prediction-based lung 4D-CT image registration" (in Chinese), China Master's Theses Full-text Database, pp. 1-47 *
刘月亮: "Research on prediction-based lung 4D-CT image registration" (in Chinese), Southern Medical University *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222852A (en) * 2021-05-26 2021-08-06 深圳高性能医疗器械国家研究院有限公司 Reconstruction method for enhancing CT image
WO2022246677A1 (en) * 2021-05-26 2022-12-01 深圳高性能医疗器械国家研究院有限公司 Method for reconstructing enhanced ct image
CN113506331A (en) * 2021-06-29 2021-10-15 武汉联影智融医疗科技有限公司 Method, apparatus, computer device and storage medium for registering tissue and organ
CN113989338A (en) * 2021-09-06 2022-01-28 北京东软医疗设备有限公司 Image registration method and device, storage medium and computer equipment

Also Published As

Publication number Publication date
CN112419378B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
CN112419378B (en) Medical image registration method, electronic device and storage medium
CN109993726B (en) Medical image detection method, device, equipment and storage medium
Wang et al. 3D conditional generative adversarial networks for high-quality PET image estimation at low dose
CN111161216A (en) Intravascular ultrasound image processing method, device, equipment and storage medium based on deep learning
JP2023540910A (en) Connected Machine Learning Model with Collaborative Training for Lesion Detection
JP2021530331A (en) Methods and systems for automatically generating and analyzing fully quantitative pixel-based myocardial blood flow and myocardial blood flow reserve maps to detect ischemic heart disease using cardiac perfusion magnetic resonance imaging
Ni et al. Segmentation of ultrasound image sequences by combing a novel deep siamese network with a deformable contour model
CN114037626A (en) Blood vessel imaging method, device, equipment and storage medium
Yancheng et al. RED-MAM: A residual encoder-decoder network based on multi-attention fusion for ultrasound image denoising
CN113888566B (en) Target contour curve determination method and device, electronic equipment and storage medium
CN110197472B (en) Method and system for stable quantitative analysis of ultrasound contrast image
CN116997928A (en) Method and apparatus for generating anatomical model using diagnostic image
US8805122B1 (en) System, method, and computer-readable medium for interpolating spatially transformed volumetric medical image data
CN110852993B (en) Imaging method and device under action of contrast agent
Wu et al. Image-based motion artifact reduction on liver dynamic contrast enhanced MRI
CN116485813A (en) Zero-sample brain lesion segmentation method, system, equipment and medium based on prompt learning
Lei et al. Generative adversarial networks for medical image synthesis
Gu et al. Cross-modality image translation: CT image synthesis of MR brain images using multi generative network with perceptual supervision
Arega et al. Using polynomial loss and uncertainty information for robust left atrial and scar quantification and segmentation
WO2009019535A1 (en) A method, apparatus, computer-readable medium and use for pharmacokinetic modeling
Qiu et al. A despeckling method for ultrasound images utilizing content-aware prior and attention-driven techniques
Xu A Robust and Efficient Framework for Slice-to-Volume Reconstruction: Application to Fetal MRI
US20240206907A1 (en) System and Method for Device Tracking in Magnetic Resonance Imaging Guided Inerventions
JP6692001B2 (en) System and method for reconstructing physiological signals of an organ's arterial / tissue / venous dynamic system in superficial space
CN115965567A (en) Image generation model training and image generation method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant