CN116071401B - Virtual CT image generation method and device based on deep learning - Google Patents


Info

Publication number
CN116071401B
CN116071401B
Authority
CN
China
Prior art keywords
image
mri
data
network
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310042282.0A
Other languages
Chinese (zh)
Other versions
CN116071401A (en)
Inventor
夏启胜
邹尧
王海
熊强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ruishi Wisdom Beijing Medical Technology Co ltd
China Japan Friendship Hospital
Original Assignee
Ruishi Wisdom Beijing Medical Technology Co ltd
China Japan Friendship Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ruishi Wisdom Beijing Medical Technology Co ltd and China Japan Friendship Hospital
Priority to CN202310042282.0A
Publication of CN116071401A
Application granted
Publication of CN116071401B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06N 3/04: Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08: Neural networks; learning methods
    • G06T 11/005: Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/10088: Magnetic resonance imaging [MRI]
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • Y02T 10/40: Engine management systems


Abstract

The invention relates to a method and a device for generating a virtual CT image based on deep learning. The method comprises: acquiring paired MRI image data and CT image data of the same part of a patient to be examined within a preset time range; registering and resampling the MRI image data and the CT image data to obtain a registration data set paired at the pixel level; inputting the registration data set into a pre-constructed residual dense block generative adversarial network (RDN-GAN) for training to obtain a CT image virtual synthesis model; and inputting an MRI image into the CT image virtual synthesis model to obtain a virtually synthesized CT image. By training a model on the collected MRI-CT paired data, the invention can virtually generate CT data from MRI data, so that a patient can obtain virtual CT image data without an additional CT examination.

Description

Virtual CT image generation method and device based on deep learning
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to a method and a device for generating a virtual CT image based on deep learning.
Background
Medical imaging is an important means of obtaining high-quality images of internal organs and plays an important role in the diagnosis and treatment of disease. In radiation therapy, the goal is to precisely irradiate the Planning Target Volume (PTV) to eliminate the tumor while minimizing irradiation of surrounding areas to prevent adverse reactions. Accurate delineation of the target volume and normal tissues is therefore essential, and current radiotherapy planning mainly uses CT for target localization, organ delineation, and dose calculation. Magnetic Resonance Imaging (MRI) involves zero radiation and offers better soft-tissue contrast than CT, facilitating delineation of tumors and Organs At Risk (OAR), and is therefore increasingly used in radiation therapy.
MRI-guided radiation therapy (MRI-g-RT) eliminates the need for CT imaging, reducing the number of scans and the associated medical costs, and also reduces the additional radiation dose to the patient, especially for patients requiring multiple scans during treatment. A difficulty with MRI-guided radiotherapy, however, is that MRI signals reflect tissue proton density and relaxation properties rather than tissue attenuation coefficients, so MRI alone cannot support radiotherapy dose calculation. Existing solutions acquire additional CT imaging data and perform multi-modal registration with the MRI for patient target delineation and irradiation dose calculation. With this approach, however, MRI/CT registration errors introduce considerable uncertainty into organ delineation and dose calculation, especially for small tumors or anatomically complex Organs At Risk (OAR).
In the related art, many teams are developing methods for generating virtual CT images from MRI images to address this problem. These fall into three categories: tissue-segmentation-based methods, atlas-based methods, and Artificial Intelligence (AI)-based methods. The first has difficulty distinguishing bone structure from air, because both produce very weak and similar signals in conventional magnetic resonance imaging. The second depends heavily on the atlas dataset containing similar anatomical and/or pathological variations. In recent years, AI techniques typified by deep learning have shown high performance in image segmentation, denoising, reconstruction, image synthesis, and related tasks. Compared with traditional methods, deep-learning-based methods generalize better and have attracted great research and clinical interest in radiotherapy.
Although the advantages of deep-learning-based approaches have been demonstrated, they require large training data sets compared with traditional methods, whereas studies on virtual generation of medical images tend to involve few patients (often tens, rarely hundreds). The existing small and medium-sized patient data sets are therefore only suitable for feasibility studies and are insufficient for evaluating clinical application. In addition, most model training requires paired data sets, and the shorter the interval between the patient's two scans, the better; the paired multi-modal data must also be registered with high precision so that the source and target images reach a pixel-to-pixel correspondence, which requires dedicated algorithm support and adds further difficulty to developing virtual image generation models.
In summary, existing algorithms have poor robustness, and the images they generate lack definition and suffer severe distortion, so that subsequent target delineation and dose calculation cannot be completed effectively; this greatly limits the clinical deployment of such algorithms in radiotherapy. Existing methods for generating virtual CT images from MRI images therefore cannot achieve a clinically satisfactory generation result.
Disclosure of Invention
Accordingly, the present invention aims to overcome the shortcomings of the prior art by providing a method and apparatus for generating a virtual CT image based on deep learning, so as to solve the problem that prior-art methods for generating virtual CT images from MRI images cannot achieve a clinically satisfactory generation result.
To achieve the above purpose, the invention adopts the following technical scheme. A method for generating a virtual CT image based on deep learning comprises the following steps:
acquiring MRI image data and CT image data of the same part of a patient to be examined within a preset time range;
registering and resampling the MRI image data and the CT image data using a multi-modal registration algorithm to obtain a registration data set paired at the pixel level;
inputting the registration data set into a pre-constructed residual dense block generative adversarial network (RDN-GAN) for training to obtain a CT image virtual synthesis model;
and inputting an MRI image into the CT image virtual synthesis model and outputting a virtually synthesized CT image.
Further, registering and resampling the MRI image data and the CT image data using a multi-modal registration algorithm to obtain a registration data set paired at the pixel level includes:
registering the MRI image data and the CT image data with the multi-modal registration algorithm, and checking the registration quality of the registered data so as to exclude poorly registered data;
unifying the resampled data to 512 pixels by 512 pixels with a slice thickness of 5 mm to obtain MRI-CT paired data;
the MRI-CT paired data place the MRI image data and the CT image data in one-to-one correspondence at the pixel level.
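The pixel-level pairing described above can be illustrated with a minimal sketch (illustrative only, not part of the claimed method; a real pipeline would use a registration toolkit, while this uses a simple nearest-neighbour resampler):

```python
import numpy as np

def resample_slice(img: np.ndarray, out_shape=(512, 512)) -> np.ndarray:
    """Nearest-neighbour resample of a 2-D slice to a fixed grid."""
    h, w = img.shape
    rows = (np.arange(out_shape[0]) * h / out_shape[0]).astype(int)
    cols = (np.arange(out_shape[1]) * w / out_shape[1]).astype(int)
    return img[np.ix_(rows, cols)]

def make_pair(mri: np.ndarray, ct: np.ndarray):
    """Resample an already-registered MRI/CT slice pair so pixels correspond 1:1."""
    m, c = resample_slice(mri), resample_slice(ct)
    assert m.shape == c.shape  # pixel-level correspondence
    return m, c

mri = np.random.rand(256, 256)
ct = np.random.rand(300, 300)
m, c = make_pair(mri, ct)
print(m.shape, c.shape)  # (512, 512) (512, 512)
```

After this step every MRI pixel has exactly one corresponding CT pixel, which is what the training stage relies on.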
Further, the residual dense block generative adversarial network includes:
a generator network and an adversarial (discriminator) network, the generator comprising convolution layers, basic blocks, and upsampling, and the discriminator comprising convolution layers, activation functions, normalization layers, and a dense network;
wherein the adversarial loss function of the residual dense block generative adversarial network is:

L_adv = −E_y[log D(y)] − E_x[log(1 − D(G(x)))]

where G(x) is the output of the generator for an input MRI image x; D(u) = sigmoid(C(u)), C(u) being the raw discriminator output; and y is the real CT image;
the basic block adopts a residual dense network, which comprises dense connection layers, feature concatenation, and local residual learning, each dense layer being composed of a convolution layer and an activation function;
wherein the input-output relation of the d-th dense layer is:

x_d = σ(W_d · [x_0, x_1, …, x_{d−1}])

where σ is the ReLU activation function, W_d is the weight of the d-th convolution layer, and [x_0, x_1, …, x_{d−1}] denotes the concatenation of all preceding feature maps;
constructing features before the activation function and introducing a perceptual loss function L_percep, so that the loss function of the generator becomes:

L_G = L_percep + λ₁·L_adv + λ₂·L_1

where L_1 is the absolute distance between the virtually generated image and the real image, i.e., between the CT image virtually generated from MRI and the real CT image paired by the registration algorithm; λ₁ and λ₂ are parameters used to balance the different loss terms;
calculation using PSNR modelThe loss function is calculated as follows:
wherein,,for the maximum possible value of the pixel, the data is normalized in the training process, so that the maximum value is 1,/or more>Representing a virtually generated image and a real image, respectively.
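The L1 distance and PSNR quantities in the loss above can be sketched as follows (a toy NumPy illustration under the stated normalization MAX = 1; not the claimed implementation):

```python
import numpy as np

def l1_loss(gen: np.ndarray, real: np.ndarray) -> float:
    """Mean absolute (L1) distance between generated and real CT."""
    return float(np.mean(np.abs(gen - real)))

def psnr(gen: np.ndarray, real: np.ndarray, max_val: float = 1.0) -> float:
    """PSNR with MAX = 1 for data normalized to [0, 1]."""
    mse = float(np.mean((gen - real) ** 2))
    return 10.0 * np.log10(max_val ** 2 / mse)

real = np.full((4, 4), 0.5)
gen = np.full((4, 4), 0.6)
print(round(l1_loss(gen, real), 3))  # 0.1
print(round(psnr(gen, real), 1))     # 20.0
```

A uniform error of 0.1 gives MSE = 0.01 and hence PSNR = 10·log₁₀(1/0.01) = 20 dB, which matches the formula above.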
Further, inputting the registration data set into the pre-constructed residual dense block generative adversarial network for training to obtain the CT image virtual synthesis model includes:
performing data augmentation on the registration data set, and normalizing the augmented registration data set into a 256×256-pixel image set;
dividing the normalized image set into a training set, a validation set, and a test set;
inputting the training set into the generator and the discriminator, obtaining a training model according to the loss function of the generator and the loss function of the discriminator, obtaining a probability output from the training model, determining a cross-entropy loss function from the probability output, and iteratively training the model based on the cross-entropy loss function and a pre-built discriminator loss function until the cross-entropy loss converges, yielding an output model;
optimizing the cross-entropy loss function with the validation set based on the output model to obtain a validated model;
and testing the validated model with the test set: if the test result does not meet the prediction-probability threshold, training the generator and the discriminator again; if it does, obtaining the CT image virtual synthesis model.
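The training/validation/test split in the steps above can be sketched as follows (an illustrative helper; the split ratios are assumptions, not specified by the patent):

```python
import random

def split_dataset(pairs, train=0.7, val=0.15, seed=0):
    """Shuffle registered MRI-CT pairs and split into train/validation/test."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)  # deterministic shuffle for reproducibility
    n = len(pairs)
    n_train, n_val = int(n * train), int(n * val)
    return (pairs[:n_train],
            pairs[n_train:n_train + n_val],
            pairs[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(100))
print(len(train_set), len(val_set), len(test_set))  # 70 15 15
```

Keeping the shuffle seeded makes the same patients land in the same partitions across experiments, which matters when comparing model variants.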
Further, inputting the MRI image into the CT image virtual synthesis model to obtain a virtually synthesized CT image includes:
detecting the input MRI image with a computer vision model and judging whether the MRI image depicts an organ supported by the CT image virtual synthesis model;
if so, virtually generating the corresponding CT image from the MRI image.
Further, the computer vision model is a Vector Boosting model;
the Vector Boosting model extracts MCT features from the input MRI image with a modified census transform algorithm, and extracts LGP features with a local gradient algorithm;
and judges, based on the MCT features and the LGP features, whether the input MRI image depicts an organ supported by the CT image virtual synthesis model.
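A modified census transform of the kind referred to above can be sketched as follows (a common 9-bit MCT formulation in which each 3×3 neighbour, including the centre, is compared to the neighbourhood mean; the patent's exact feature definition may differ):

```python
import numpy as np

def modified_census_transform(img: np.ndarray) -> np.ndarray:
    """9-bit MCT code per pixel: each 3x3 neighbour compared to the
    neighbourhood mean (an illumination-robust local structure descriptor)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            bits = (patch > patch.mean()).astype(np.uint16).ravel()
            out[i, j] = int("".join(map(str, bits)), 2)
    return out

# A bright centre pixel: only bit 4 (the centre) exceeds the mean -> code 16.
img = np.array([[0, 0, 0],
                [0, 9, 0],
                [0, 0, 0]], dtype=float)
print(modified_census_transform(img))  # [[16]]
```

Because each code depends only on orderings relative to the local mean, the descriptor is insensitive to the global intensity shifts common across MRI scanners.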
Further, the computer vision model is a nonlinear support vector machine model based on an RBF kernel function.
Further, the MRI data and CT data are preprocessed.
An embodiment of the present application provides a virtual CT image generating device based on deep learning, comprising:
an acquisition module for acquiring MRI image data and CT image data of the same part of a patient to be examined within a preset time range;
a registration module for registering and resampling the MRI image data and the CT image data using a multi-modal registration algorithm to obtain a registration data set paired at the pixel level;
a training module for inputting the registration data set into a pre-constructed residual dense block generative adversarial network for training to obtain a CT image virtual synthesis model;
and a synthesis module for inputting an MRI image into the CT image virtual synthesis model to obtain a virtually synthesized CT image.
An embodiment of the present application provides a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the above virtual CT image generation method based on deep learning.
By adopting the technical scheme, the invention has the following beneficial effects:
the invention provides a method and a device for generating virtual CT images based on deep learning, which can realize virtual generation of CT data through MRI data by model training of MRI-CT paired data of existing cases, so that a patient can obtain virtual CT image data without additional CT examination. In addition, in the nuclear magnetic resonance guided radiotherapy process, the CT imaging requirement of a patient is reduced, the scanning times and the related medical cost are reduced, and the additional irradiation dose of the patient is also reduced.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required for the embodiments or the description of the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the invention, and that a person skilled in the art could obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of steps of a method for generating a virtual CT image based on deep learning according to the present invention;
FIG. 2 is a flow chart of a method for generating a virtual CT image based on deep learning according to the present invention;
FIG. 3 is a schematic view of the results of the MRI-CT data before and after multi-modality image registration in accordance with the present invention;
FIG. 4 is a schematic diagram of the residual dense block generative adversarial network architecture according to the present invention;
FIG. 5 is a schematic diagram of a basic block network architecture of the present invention;
FIG. 6 is a graph of RELU functions provided by the present invention;
FIG. 7 is a schematic view of a virtual CT synthesis result provided by the present invention, wherein the first row is the input patient MRI image, the second row is the CT image virtually synthesized by the present method, and the third row is the ground-truth CT image of the patient; columns 1 and 2 are CT bone-window images, and columns 3 and 4 are CT brain-window images;
FIG. 8 is a schematic structural diagram of a virtual CT image generating device based on deep learning according to the present invention;
fig. 9 is a schematic structural diagram of a computer device involved in a method for generating a virtual CT image based on deep learning according to the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, the technical solutions of the present invention are described in detail below. It will be apparent that the described embodiments are only some, not all, embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort fall within the scope of the invention as defined by the claims.
The following describes a specific method and apparatus for generating a virtual CT image based on deep learning according to the embodiments of the present application with reference to the accompanying drawings.
As shown in fig. 1, the method for generating a virtual CT image based on deep learning provided in the embodiment of the present application includes:
s101, acquiring MRI image data and CT image data of the same part of a patient to be inspected within a preset time range;
it will be appreciated that if the interval between acquiring the MRI image data and the CT image data of the same part of the patient is too long, the registration quality may suffer; MRI-CT paired data of the same part of the patient therefore need to be acquired within a short interval. The preset time range in this application is 24 hours, i.e., the MRI-CT paired data must be collected within 24 hours.
In some embodiments, further comprising:
preprocessing the MRI data and CT data.
Specifically, the preprocessing rejects image data with significant artifacts, poor imaging quality, or an examination interval exceeding 24 hours, so that data with poor registration quality are excluded from subsequent model training.
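The preprocessing rule above can be sketched as follows (illustrative only; the record fields and the artifact flag are assumptions, not the patent's data format):

```python
from datetime import datetime, timedelta

def filter_pairs(records, max_hours=24):
    """Keep only MRI-CT pairs scanned within max_hours and flagged artifact-free."""
    kept = []
    for r in records:
        gap = abs(r["mri_time"] - r["ct_time"])
        if gap <= timedelta(hours=max_hours) and not r["has_artifacts"]:
            kept.append(r)
    return kept

records = [
    # 12-hour gap, clean scan: kept
    {"mri_time": datetime(2023, 1, 1, 8), "ct_time": datetime(2023, 1, 1, 20),
     "has_artifacts": False},
    # 48-hour gap: rejected
    {"mri_time": datetime(2023, 1, 1, 8), "ct_time": datetime(2023, 1, 3, 8),
     "has_artifacts": False},
]
print(len(filter_pairs(records)))  # 1
```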
S102, registering and resampling the MRI image data and the CT image data using a multi-modal registration algorithm to obtain a registration data set paired at the pixel level;
in the method, a multi-modal registration algorithm registers and resamples the MRI image data and the corresponding CT image data to obtain an MRI-CT paired data set with pixel-level correspondence.
S103, inputting the registration data set into a pre-constructed residual dense block generative adversarial network for training to obtain a CT image virtual synthesis model;
first, a residual dense block generative adversarial network (RDN-GAN) is constructed, in which an RDN convolutional neural network serves as the generator of the GAN and an enhanced discriminator judges whether the generator's output is real or fake, yielding the CT image virtual synthesis model.
S104, inputting the MRI image into the CT image virtual synthesis model and outputting a virtually synthesized CT image.
The CT image virtual synthesis model is invoked, the MRI image of the patient under examination is input, and a virtual CT image is generated, providing the patient's virtual CT image data for subsequent procedures such as target delineation, dose calculation, and image registration.
The working principle of the deep-learning-based virtual CT image generation method is as follows. As shown in fig. 2, the medical images in this application are mainly MRI images: when a patient's MRI image is input, a CT image is virtually generated by the trained CT image virtual synthesis model, yielding the CT image corresponding to the MRI image. It should be noted that the present application can also generate virtual CT data from other MRI sequences; the patient dataset collected here includes other sequences such as T2-weighted imaging and proton-density imaging, but the embodiments of the invention mainly exemplify virtual CT image generation from T1-weighted imaging sequences. It should also be noted that the present application can be applied to many different organs, as long as a sufficiently large, high-quality MRI-CT paired data set can be collected for the other site. For example, the database may contain the brain, head, neck, chest, abdomen, and limbs; as the collection of paired data sets proceeds, MRI-based virtual CT image generation for more organs can be realized.
It should additionally be noted that the pre-trained CT image virtual synthesis model in the present application may be applied directly to generating virtual CT images from MRI images of various equipment sources. In some cases, if the quality of the virtually generated CT images is insufficient or there is noticeable distortion, the robustness and accuracy of the model can be improved by continually adding training data, especially data from the current MRI equipment source.
According to the deep-learning-based virtual CT image generation method provided by the embodiment of the present application, training a model on the MRI-CT paired data of existing cases enables CT data to be virtually generated from MRI data, so that a patient can obtain virtual CT image data without an additional CT examination; the generated virtual CT data have high precision and fidelity and can be applied in scenarios such as radiotherapy target delineation and MRI-guided radiotherapy. The method can improve the radiotherapy workflow, particularly MRI-guided radiotherapy, by reducing the patient's CT imaging requirements, the number of scans and the associated medical costs, as well as the patient's additional radiation dose.
In some embodiments, registering and resampling the MRI image data and the CT image data using a multi-modal registration algorithm to obtain a registration data set paired at the pixel level includes:
registering the MRI image data and the CT image data with the multi-modal registration algorithm, and checking the registration quality of the registered data so as to exclude poorly registered data;
unifying the resampled data to 512 pixels by 512 pixels with a slice thickness of 5 mm to obtain MRI-CT paired data;
the MRI-CT paired data place the MRI image data and the CT image data in one-to-one correspondence at the pixel level.
As shown in fig. 3, registration and resampling of the CT and MRI images are completed with the multi-modal registration algorithm of the present application, wherein the first column is a CT bone-window image, the third column a CT brain-window image, and the second and fourth columns MRI T1-weighted images. A self-developed multi-modal registration algorithm based on the Nifty-reg algorithm is adopted, which can automatically register large-scale data in batches; medical professionals then verify the registration quality and eliminate poorly registered data. The resampled data are unified to 512 pixels by 512 pixels with a slice thickness of 5 mm, and the registered and resampled MRI-CT paired data form a pixel-level one-to-one correspondence.
In some embodiments, as shown in fig. 4, the residual dense block generative adversarial network includes:
a generator network and an adversarial (discriminator) network, the generator comprising convolution layers, basic blocks, and upsampling, and the discriminator comprising convolution layers, activation functions, normalization layers, and a dense network;
the basic block adopts a residual dense network, which comprises dense connection layers, feature concatenation, and local residual learning, each dense layer being composed of a convolution layer and an activation function.
As a specific embodiment, inputting the registration data set into the pre-constructed residual dense block generative adversarial network for training to obtain the CT image virtual synthesis model includes:
performing data augmentation on the registration data set, and normalizing the augmented registration data set into a 256×256-pixel image set;
dividing the normalized image set into a training set, a validation set, and a test set;
inputting the training set into the generator and the discriminator, obtaining a training model according to the loss function of the generator and the loss function of the discriminator, obtaining a probability output from the training model, determining a cross-entropy loss function from the probability output, and iteratively training the model based on the cross-entropy loss function and a pre-built discriminator loss function until the cross-entropy loss converges, yielding an output model;
optimizing the cross-entropy loss function with the validation set based on the output model to obtain a validated model;
and testing the validated model with the test set: if the test result does not meet the prediction-probability threshold, training the generator and the discriminator again; if it does, obtaining the CT image virtual synthesis model.
It can be appreciated that the steps for obtaining the CT image virtual synthesis model based on the RDN-GAN network structure are as follows.
(1) Training set construction: the registered and resampled CT and MRI images are normalized, and the open-source N4 algorithm is then used to complete signal-intensity correction and normalization of the MRI data, yielding corresponding matched MRI-CT image pairs.
(2) Training phase: the invention uses a deep learning network to construct the CT image virtual synthesis model. The network is an improved residual dense block generative adversarial network (RDN-GAN) structure: a CT image is virtually generated from the MRI image by the generator of the RDN-GAN, and the adversarial (discriminator) network of the RDN-GAN judges whether the generated virtual image is a CT image.
As shown in FIG. 4, in a particular embodiment of the present application, the trained RDN-GAN structure comprises a generator network and an adversarial network. The generator consists of convolution layers, basic blocks, and upsampling; the convolution kernel of each layer has size k×k and each layer has C channels. The adversarial network, or discriminator, determines whether the input picture is real or fake through convolution layers with different parameters, normalization layers, and activation functions.
The generation network adopts the residual density network as its basic block. As shown in fig. 5, the residual density network architecture includes a density network, feature fusion, and local residual learning. The density networks are formed from convolution layers and activation functions, and the input-output relation of each density network is:

F_d = σ(W_d [F_0, F_1, …, F_(d-1)])

where σ is the RELU activation function, W_d is the weight of the d-th convolutional layer (bias omitted), and [F_0, …, F_(d-1)] denotes concatenation of the preceding feature maps.
The RELU activation function is:

f(x) = max(0, x)
the shape of the RELU activation function is shown in fig. 6, and as can be seen from fig. 6, the RELU activation function is a linear function, compared with a general activation function based on Sigmoid or Tanh, when in training, the RELU activation function has no problem of derivative disappearance or derivative explosion, so that the whole training process is more stable, the calculation of the RELU activation function is simpler, floating point operation is not needed, and the processing time is greatly shortened during the calculation.
The cascade network concatenates all the features of the density networks and adaptively controls the output information through a 1 × 1 convolutional layer. Finally, the output of the whole local residual density network is obtained through the residual learning branch.
The loss function of the overall generation countermeasure network is:

L_GAN = E[log D(y)] + E[log(1 - D(G(x)))]

where G(x) is the output of the generation network for the input MRI image x, i.e. the virtually generated CT image; D(·) is the discriminator output passed through the sigmoid activation function; and y is a true CT image.
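Assuming the standard sigmoid-discriminator form of the adversarial loss described above, a toy NumPy sketch is (the logit values are hypothetical):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gan_losses(d_real_logits, d_fake_logits, eps=1e-12):
    """Standard adversarial losses with a sigmoid discriminator.

    d_real_logits: discriminator raw outputs on real CT images y
    d_fake_logits: discriminator raw outputs on generated images G(x)
    """
    d_real = sigmoid(np.asarray(d_real_logits, dtype=float))
    d_fake = sigmoid(np.asarray(d_fake_logits, dtype=float))
    # Discriminator: push D(y) -> 1 and D(G(x)) -> 0
    loss_d = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    # Generator (non-saturating form): push D(G(x)) -> 1
    loss_g = -np.mean(np.log(d_fake + eps))
    return loss_d, loss_g

# A discriminator that already separates real from fake fairly well:
ld, lg = gan_losses([4.0, 3.0], [-3.0, -4.0])
```

Here the discriminator loss is small (it classifies well) while the generator loss is large (its outputs are confidently rejected), which is the gradient signal that drives the generator forward.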
At the same time, to enhance network capacity, features are constructed and a perceptual loss function L_p is introduced before the RELU activation function. The loss function of the entire generation network is updated as:

L_G = L_GAN + λ · L_p

where L_p is the absolute distance between the virtually generated image and the real image, in the embodiment of the application the distance between the CT image virtually obtained from the MRI image and the real CT image matched by the registration algorithm, and λ is a parameter used to balance the different loss factors. At the same time, L_p is calculated with a PSNR model, with the specific formula:

PSNR = 10 · log10(MAX^2 / MSE)

where MAX is the maximum possible pixel value; the data are normalized in the training process, so MAX = 1. MSE is the mean squared error between I_gen and I_real, which represent the virtually generated image and the real image, respectively.
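The PSNR computation described above, with MAX = 1 for normalized data, can be sketched in NumPy (the sample images are hypothetical):

```python
import numpy as np

def psnr(generated, real, max_val=1.0):
    """Peak signal-to-noise ratio between a generated and a real image.

    With data normalized to [0, 1] during training, max_val is 1.
    """
    g = np.asarray(generated, dtype=float)
    r = np.asarray(real, dtype=float)
    mse = np.mean((g - r) ** 2)
    if mse == 0:
        return float("inf")           # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

real = np.array([[0.0, 0.5], [0.5, 1.0]])
gen = np.array([[0.1, 0.5], [0.5, 0.9]])
score = psnr(gen, real)
```

Higher PSNR means the virtual CT is closer to the real CT; identical images give infinite PSNR.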
The loss function of the countermeasure network can correspondingly be defined as:

L_D = -E[log D(y)] - E[log(1 - D(G(x)))]
it should be noted that, in the present application, the RDN-GAN network is a convolutional neural network based on a residual neural network and a full connection, and constructs a virtual generation model of the organ according to MRI-CT paired data stored in a database, specifically, the constructed virtual synthesis model is trained according to a cross entropy loss function and a pre-constructed discriminable loss function based on the residual neural network and the full connection convolutional neural network;
the cross entropy loss function is:
where M is the number of organs in the medical MRI picture, M is the medical CT picture of a different organ, yo, M represents the pixels of the medical picture M.
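A minimal NumPy sketch of such a per-pixel cross entropy (the class probabilities, labels and pixel/class layout here are hypothetical illustrations, not values from the patent):

```python
import numpy as np

def cross_entropy(probs, labels, eps=1e-12):
    """Mean per-pixel cross entropy.

    probs:  (num_pixels, M) predicted class probabilities per pixel
    labels: (num_pixels,) integer organ labels in [0, M)
    """
    probs = np.asarray(probs, dtype=float)
    picked = probs[np.arange(len(labels)), labels]  # probability of the true class
    return -np.mean(np.log(picked + eps))

probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
labels = np.array([0, 1])
loss = cross_entropy(probs, labels)
```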
The identifiable loss function includes a variance loss function L_var and a maximized edge loss function L_dist. The variance loss function is:

L_var = (1/M) Σ_(m=1..M) (1/N_m) Σ_i ||x_i - μ_m||^2

The maximized edge loss function is:

L_dist = (1/(M(M-1))) Σ_(mA) Σ_(mB ≠ mA) max(0, δ - ||μ_mA - μ_mB||)^2

where M is the number of organs in the medical MRI picture, x_i is the vector representing each pixel, N_m is the number of such pixel vectors for organ m, μ_m is the average of the vectors of all pixels of all training samples of the medical MRI picture m, and δ is the margin between class centers. The constructed virtual generation model is then trained according to the preset weights of the cross entropy loss function and the pre-constructed discriminable loss function.
Preferably, the preset weight of the cross entropy loss function is 0.8, and the preset weight of the discriminable loss function is 0.2.
It will be appreciated that the present application loads MRI images, normalizes the pictures to 256 pixel by 256 pixel images by linear interpolation and affine transformation, and then inputs the whole pictures into the generation countermeasure network (GAN) to obtain a virtual composite CT image. The CT image virtual synthesis model can also support brain images, and it can further support automatic cross-modal generation of more organ images according to requirements. Since most computer vision models need to be trained, and training data sets of different composition and quality can affect the final model effect, the training data details are as follows: MRI-CT paired data of approximately 1000 patient brains are used; each patient's MRI-CT examination interval is within 24 hours, and each patient contributes approximately 20-30 slices.
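The linear-interpolation resizing step mentioned above can be sketched in NumPy as a bilinear resize (illustrative only; a production pipeline would normally call a library routine, and the sample image is hypothetical):

```python
import numpy as np

def resize_bilinear(img, out_h=256, out_w=256):
    """Resize a 2-D image with bilinear (linear in each axis) interpolation."""
    img = np.asarray(img, dtype=float)
    in_h, in_w = img.shape
    # Map each output pixel back into input coordinates
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

small = np.array([[0.0, 1.0], [1.0, 2.0]])
big = resize_bilinear(small, 3, 3)   # corners preserved, interior interpolated
```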
After training is completed, all modules may retain only the test program and the trained model. In addition, the implementation adopts fixed-point arithmetic to avoid floating-point operations, which greatly increases the running speed of the whole system.
(3) Detection: the output model learned in the training phase, which generates virtual CT images from MRI images by deep learning, is used to virtually generate CT images for all loaded MRI images. The loss functions of the generation network and the countermeasure network and the cross entropy loss function are then optimized, thereby obtaining a verification model.
(4) Testing: and testing the verification model by using the test set, if the test result does not meet the prediction probability threshold, training the generation network and the countermeasure network again, and if the test result meets the prediction probability threshold, obtaining the CT image virtual synthesis model.
In some embodiments, the inputting the MRI image into the CT image virtual synthesis model to obtain a virtual synthesized CT image includes:
detecting an input MRI image by using a computer vision model, and judging whether the input MRI image is an organ picture supported by the CT image virtual synthesis model;
if yes, the input MRI image is virtually generated to obtain a corresponding CT image.
As a preferred embodiment, the computer vision model is a Vector Boosting model;
The Vector Boosting model extracts MCT (modified census transform) features of the MRI image with a modified census transform algorithm, and extracts LGP (local gradient pattern) features with a local gradient algorithm;
and judging whether the input MRI image is an organ picture supported by the CT image virtual synthesis model or not based on the MCT features and the LGP features.
Specifically, in the present application, before an MRI image is input into the CT image virtual synthesis model, the loaded medical MRI image is detected with a pre-trained Vector Boosting model to determine that the medical image shows an organ supported by the model.
Wherein, the following modification center transformation algorithm is adopted to extract the characteristics of the medical picture to obtain the MCT characteristics of the medical picture,
wherein,,is a plurality of adjacent pixel points of the pixel point x in the medical picture, I (x) is the gray value of the pixel point x, and +.>Is the average gray value of all neighboring pixels of pixel x, < >>Definition is equivalent to->
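A per-patch sketch of the modified census transform in NumPy (illustrative; here the 3 × 3 neighborhood mean is used as the reference, all nine pixels contribute one comparison bit each, and the bit ordering is an assumption):

```python
import numpy as np

def mct_code(patch):
    """MCT code of a 3x3 patch: one bit per pixel, set when that
    pixel's gray value exceeds the neighborhood mean."""
    patch = np.asarray(patch, dtype=float)
    assert patch.shape == (3, 3)
    mean = patch.mean()
    bits = (patch.flatten() > mean).astype(int)
    code = 0
    for b in bits:                 # pack the 9 comparison bits, MSB first
        code = (code << 1) | int(b)
    return code

patch = np.array([[10, 10, 10],
                  [10, 90, 10],
                  [10, 10, 10]])
code = mct_code(patch)   # only the bright centre pixel exceeds the mean
```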
The LGP features of the medical picture are obtained by extracting features with the following local gradient algorithm:

LGP(x_c) = Σ_(i=1..8) s(g_i - ḡ) · 2^(i-1), with s(u) = 1 if u ≥ 0 and 0 otherwise

where x_c is the pixel center point, g_i = |I(x_i) - I(x_c)| is the gray value difference between the center point x_c and the adjacent point x_i, and ḡ is the average gray level difference over the eight adjacent points.
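A matching sketch of the local gradient pattern for a single 3 × 3 patch (illustrative NumPy code; the bit ordering and the strict-inequality threshold are assumptions):

```python
import numpy as np

def lgp_code(patch):
    """LGP code of a 3x3 patch: one bit per neighbor, set when the
    absolute gray difference to the centre exceeds the average difference."""
    patch = np.asarray(patch, dtype=float)
    centre = patch[1, 1]
    neighbors = np.delete(patch.flatten(), 4)   # the 8 surrounding pixels
    diffs = np.abs(neighbors - centre)
    bits = (diffs > diffs.mean()).astype(int)
    code = 0
    for b in bits:                 # pack the 8 comparison bits, MSB first
        code = (code << 1) | int(b)
    return code

patch = np.array([[50, 50, 50],
                  [50, 50, 50],
                  [50, 50, 250]])
code = lgp_code(patch)   # only the bottom-right neighbor deviates strongly
```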
As a preferred embodiment, the computer vision model is a nonlinear support vector machine model based on RBF kernel functions.
The nonlinear support vector machine model with the RBF kernel function solves:

max_α Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j), subject to 0 ≤ α_i ≤ C and Σ_i α_i y_i = 0

where α_i is a hidden variable for the i-th medical picture; y_i is the class label of the picture, taking the value +1 or -1 for positive and negative samples respectively; n is the number of training samples; x_i is the MCT and LGP feature vector of the i-th medical picture; K(x_i, x_j) is the similarity between the i-th and j-th medical pictures; and C is a penalty parameter used to penalize wrongly classified training samples, a manually set real number greater than 0.
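The RBF kernel similarity and the resulting decision rule can be sketched as follows (illustrative NumPy code; the support vectors, multipliers, bias and γ here are hypothetical toy values, not trained ones):

```python
import numpy as np

def rbf_kernel(xi, xj, gamma=0.5):
    """RBF similarity K(xi, xj) = exp(-gamma * ||xi - xj||^2)."""
    xi, xj = np.asarray(xi, dtype=float), np.asarray(xj, dtype=float)
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

def decision(x, support_vecs, labels, alphas, b, gamma=0.5):
    """Sign of the kernel expansion sum_i alpha_i * y_i * K(x_i, x) + b."""
    s = sum(a * y * rbf_kernel(sv, x, gamma)
            for sv, y, a in zip(support_vecs, labels, alphas))
    return 1 if s + b >= 0 else -1

# Toy example: one support vector per class
svs = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
ys, alphas, b = [1, -1], [1.0, 1.0], 0.0
label = decision(np.array([0.5, 0.5]), svs, ys, alphas, b)
```

Points near the positive support vector get label +1, points near the negative one get -1; the kernel similarity of any vector to itself is 1.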
As shown in FIG. 7, the first row in FIG. 7 is the input MRI image of a patient, the second row is the CT image obtained by virtual synthesis with the method, and the third row is the ground truth, i.e. the real CT image of the patient; the first, second and third columns are bone-window CT images, and the fourth and fifth columns are brain-window CT images.
On an Nvidia 3070 Ti GPU, the deep learning algorithm of the present application can process at least 5 frames of images per second. In summary, the present application combines computer vision technology with the latest artificial intelligence deep learning technology to realize the virtual generation of CT images from MRI images, reducing the CT examination requirements of radiotherapy patients. In particular, in the nuclear magnetic induction radiotherapy process, the generated CT data can effectively assist radiotherapy workers in multi-modal fusion delineation and dose calculation of target areas and organs at risk, greatly improving the efficiency and accuracy of doctors' target-area delineation and dose calculation. Overall, the method has a more accurate algorithm, stronger robustness, and can handle more extreme cases.
The method for generating the virtual CT image based on the deep learning has the following beneficial effects:
in the data collection stage, a matched data set of CT and MRI of patients' brains has been established on a relatively large scale; the data set currently contains approximately 500 cases, and follow-up data are still being collected, so that approximately 1000 cases are expected;
in terms of data set quality, the interval between the CT and MRI examinations of each patient in the paired data set is extremely short: most examinations are completed consecutively with an interval within 1 h, and the maximum interval does not exceed 24 h. Image data of this type are extremely scarce. The short interval ensures, as far as possible, the consistency of the anatomy and physiological state of the patient's internal organs during the two examinations, which facilitates the subsequent high-precision multi-modal image registration;
in the data multi-modal registration step, the embodiment of the application adopts an automatic batch multi-modal registration algorithm based on the open-source NiftyReg algorithm. The algorithm can accurately and efficiently complete the registration and resampling of MRI and CT data in batches; after registration, medical professionals further verify the accuracy of the registration results, and only image data that pass verification enter the next training stage;
in the model training stage, the application adopts a recently improved deep learning model based on the residual density block-generation countermeasure joint network (RDN-GAN) convolutional neural network architecture. This architecture has shown effects exceeding existing deep learning algorithms in the traditional computer vision field; the application modifies and perfects it for the field of medical image synthesis. Model training was completed using the collected high-quality paired data set of nearly 500 patients, which is of relatively large scale globally, and the imaging effect of the algorithm was verified, obtaining a better image virtual generation effect than existing CNN deep learning models.
In summary, addressing the shortcomings of conventional deep learning algorithms in image synthesis, the invention provides a method for virtually generating CT images based on a deep learning model with a residual density block-generation countermeasure convolutional neural network (RDN-GAN) architecture. With a larger amount of training data, a more accurate batch registration algorithm and a more advanced deep learning model, the method achieves higher image generation precision, accuracy and robustness, so that patients can obtain virtual CT data from MRI without a CT examination. It has good application prospects in conventional radiotherapy procedures such as target region positioning, target region delineation and dose calculation, as well as in nuclear magnetic induction radiotherapy.
As shown in fig. 8, an embodiment of the present application provides a virtual CT image generating apparatus based on deep learning, including:
an acquisition module 201, configured to acquire MRI image data and CT image data for checking the same part of a patient within a preset time range;
the registration module 202 is configured to register and resample the MRI image data and the CT image data by using a multi-modal registration algorithm, so as to obtain a registration dataset of corresponding pairs at a pixel level;
the training module 203 is configured to input the registration data set into a pre-constructed residual density block-generating an countermeasure network for training, so as to obtain a CT image virtual synthesis model;
the synthesis module 204 is configured to input the MRI image into the CT image virtual synthesis model, and output the virtual synthesis CT image.
The working principle of the virtual CT image generating device based on deep learning provided by the present application is that the acquisition module 201 acquires MRI image data and CT image data of the same part of the patient to be inspected within a preset time range; the registration module 202 registers and resamples the MRI image data and the CT image data by using a multi-modal registration algorithm to obtain a registration dataset of corresponding pairs at a pixel level; the training module 203 inputs the registration data set into a pre-constructed residual density block-generates an countermeasure network for training, and a CT image virtual synthesis model is obtained; the synthesis module 204 inputs the MRI image into the CT image virtual synthesis model to obtain a virtual synthesized CT image.
The application provides a computer device comprising: a memory 1 and a processor 2, and optionally a network interface 3. The memory stores a computer program and may include non-volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory forms such as Read Only Memory (ROM) or flash memory (flash RAM). The computer device stores an operating system 4; the memory is an example of a computer readable medium. The computer program, when executed by the processor, causes the processor to perform the method for generating a virtual CT image based on deep learning. The structure shown in fig. 9 is merely a block diagram of part of the structure related to the present application and does not constitute a limitation of the computer device to which the present application is applied; a specific computer device may include more or fewer components than shown in the drawings, combine some components, or have a different arrangement of components.
In one embodiment, the method for generating a virtual CT image based on deep learning provided in the present application may be implemented as a computer program, which may be executed on a computer device as shown in fig. 9.
In some embodiments, the computer program, when executed by the processor, causes the processor to perform the steps of: acquiring MRI image data and CT image data of the same part of a patient to be inspected within a preset time range; registering and resampling the MRI image data and the CT image data by adopting a multi-mode registration algorithm to obtain a registration data set of corresponding pairs of pixel levels; inputting the registration data set into a pre-constructed residual density block-generating an countermeasure network for training, and obtaining a CT image virtual synthesis model; and inputting the MRI image into the CT image virtual synthesis model, and outputting to obtain a virtual synthesis CT image.
The present application also provides a computer storage medium, examples of which include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassette storage or other magnetic storage devices, or any other non-transmission medium, that can be used to store information that can be accessed by a computing device.
In some embodiments, the present invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, acquires MRI image data and CT image data of an examination of the same portion of a patient within a preset time frame; registering and resampling the MRI image data and the CT image data by adopting a multi-mode registration algorithm to obtain a registration data set of corresponding pairs of pixel levels; inputting the registration data set into a pre-constructed residual density block-generating an countermeasure network for training, and obtaining a CT image virtual synthesis model; and inputting the MRI image into the CT image virtual synthesis model, and outputting to obtain a virtual synthesis CT image.
In summary, the present invention provides a method and apparatus for generating a virtual CT image based on deep learning, where the method includes acquiring MRI image data and CT image data for checking the same part of a patient within a preset time range; registering and resampling the MRI image data and the CT image data by adopting a multi-mode registration algorithm to obtain a registration data set of corresponding pairs of pixel levels; inputting the registration data set into a pre-constructed residual density block-generating an countermeasure network for training to obtain a CT image virtual synthesis model; and inputting the MRI image into the CT image virtual synthesis model to obtain a virtual synthesis CT image. The invention can realize virtual generation of CT data through MRI data by model training of MRI-CT paired data of the existing cases, so that a patient can obtain virtual CT image data without additional CT examination.
It can be understood that the above-provided method embodiments correspond to the above-described apparatus embodiments, and corresponding specific details may be referred to each other and will not be described herein.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A method for generating a virtual CT image based on deep learning, comprising:
acquiring MRI image data and CT image data of the same part of a patient to be inspected within a preset time range;
registering and resampling the MRI image data and the CT image data by adopting a multi-mode registration algorithm to obtain a registration data set of corresponding pairs of pixel levels;
inputting the registration data set into a pre-constructed residual density block-generating an countermeasure network for training, and obtaining a CT image virtual synthesis model;
inputting the MRI image into the CT image virtual synthesis model, and outputting to obtain a virtual synthesis CT image;
the registering and resampling the MRI image data and the CT image data by using a multi-modal registration algorithm to obtain a registration dataset of corresponding pairs at a pixel level, including:
registering MRI image data and CT image data based on a multi-mode registration algorithm, and checking registration effects of the registered data to eliminate data with poor registration effects;
the resampled data are unified into 512 pixels by 512 pixels, and the MRI-CT paired data with the layer thickness of 5mm are obtained;
the MRI-CT pairing data are in pixel-level one-to-one correspondence with the MRI image data and the CT image data;
The residual density block-generating an countermeasure network, comprising:
a generating network and a countermeasure network, the generating network comprising a convolution layer, a base block and upsampling, and the countermeasure network comprising a convolution layer, an activation function, a normalization layer and a density network;
wherein the loss function of the residual density block-generating countermeasure network is:
L_GAN = E[log D(y)] + E[log(1 - D(G(x)))]
wherein G(x) is the output of the generating network for the input MRI image x, i.e. the virtually generated CT image; D(·) is the discriminator output passed through the sigmoid activation function; and y is a true CT image;
the basic block adopts a residual error density network, the residual error density network comprises a density network, a cascade network and a local residual error network, and the density network is composed of a convolution layer and an activation function;
wherein the density network is formed by convolution layers and activation functions, and the input and output relation of each density network is:
F_d = σ(W_d [F_0, F_1, …, F_(d-1)])
wherein σ is the RELU activation function and W_d is the weight of each convolutional layer;
constructing features and introducing a perceptual loss function L_p prior to the activation function, the loss function of the generating network becoming:
L_G = L_GAN + λ · L_p
wherein L_p is the absolute distance between the virtually generated image and the real image, namely between the CT image obtained virtually through MRI and the real CT image matched by the registration algorithm, and λ is a parameter used to balance the different loss factors;
L_p is calculated using a PSNR model, with the calculation formula:
PSNR = 10 · log10(MAX^2 / MSE)
wherein MAX is the maximum possible pixel value; the data are normalized in the training process, so the maximum possible pixel value is 1; and MSE is the mean squared error between I_gen and I_real, which represent the virtually generated image and the real image, respectively.
2. The method of claim 1, wherein inputting the registration dataset into a pre-constructed residual density block-generating a countermeasure network for training to obtain a CT image virtual composite model, comprising:
performing data expansion on the registration data set, and normalizing the expanded registration data set into a 256×256-pixel image set;
dividing the normalized image set into a training set, a verification set and a test set;
inputting the training set into a generating network and an countermeasure network, obtaining a training model according to the loss function of the generating network and the loss function of the countermeasure network, obtaining a probability output result based on the training model, determining a cross entropy loss function according to the probability output result, and carrying out iterative training on the training model based on the cross entropy loss function and a pre-built identifiable loss function until the cross entropy loss function converges to obtain an output model;
Optimizing the cross entropy loss function by using the verification set based on the output model, and obtaining a verification model after optimization;
and testing the verification model by using the test set, if the test result does not meet the prediction probability threshold, training the generation network and the countermeasure network again, and if the test result meets the prediction probability threshold, obtaining the CT image virtual synthesis model.
3. The method of claim 1, wherein inputting the MRI image into the CT image virtual synthesis model and outputting the resulting virtual synthetic CT image comprises:
detecting an input MRI image by using a computer vision model, and judging whether the MRI image is an organ picture supported by the CT image virtual synthesis model;
if yes, the input MRI image is virtually generated to obtain a corresponding CT image.
4. The method of claim 3, wherein the computer vision model is a Vector Boosting model;
the Vector Boosting model extracts MCT features of an input MRI image by modifying a central transformation algorithm, and extracts LGP features by a local gradient algorithm;
and judging whether the input MRI image is an organ picture supported by the CT image virtual synthesis model or not based on the MCT features and the LGP features.
5. A method according to claim 3, wherein the computer vision model is
A nonlinear support vector machine model based on RBF kernel functions.
6. The method as recited in claim 1, further comprising:
preprocessing the MRI data and CT data.
7. A computer device, comprising: a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the generation method of any of claims 1 to 6.
CN202310042282.0A 2023-01-28 2023-01-28 Virtual CT image generation method and device based on deep learning Active CN116071401B (en)

Publications (2)

Publication Number Publication Date
CN116071401A CN116071401A (en) 2023-05-05
CN116071401B true CN116071401B (en) 2023-08-01

Family

ID=86183323

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310042282.0A Active CN116071401B (en) 2023-01-28 2023-01-28 Virtual CT image generation method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN116071401B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116385329B (en) * 2023-06-06 2023-08-29 之江实验室 Multilayer knowledge distillation medical image generation method and device based on feature fusion
CN116932798B (en) * 2023-09-15 2023-11-21 星河视效科技(北京)有限公司 Virtual speaker generation method, device, equipment and storage medium
CN117974735B (en) * 2024-04-02 2024-06-14 西北工业大学 Cross-modal medical image registration method, system and equipment for digital person

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592137A (en) * 2011-12-27 2012-07-18 中国科学院深圳先进技术研究院 Multi-modality image registration method and operation navigation method based on multi-modality image registration
CN106651875A (en) * 2016-12-08 2017-05-10 温州医科大学 Multimode MRI longitudinal data-based brain tumor space-time coordinative segmentation method
CN110021037A (en) * 2019-04-17 2019-07-16 南昌航空大学 A kind of image non-rigid registration method and system based on generation confrontation network
WO2021061710A1 (en) * 2019-09-25 2021-04-01 Subtle Medical, Inc. Systems and methods for improving low dose volumetric contrast-enhanced mri
CN113554669A (en) * 2021-07-28 2021-10-26 哈尔滨理工大学 Unet network brain tumor MRI image segmentation method for improving attention module

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101959454B (en) * 2008-03-07 2013-11-20 皇家飞利浦电子股份有限公司 CT replacement by automatic segmentation of magnetic resonance images
US10346974B2 (en) * 2017-05-18 2019-07-09 Toshiba Medical Systems Corporation Apparatus and method for medical image processing
WO2019238804A1 (en) * 2018-06-13 2019-12-19 Siemens Healthcare Gmbh Localization and classification of abnormalities in medical images
US10835761B2 (en) * 2018-10-25 2020-11-17 Elekta, Inc. Real-time patient motion monitoring using a magnetic resonance linear accelerator (MR-LINAC)
US11083913B2 (en) * 2018-10-25 2021-08-10 Elekta, Inc. Machine learning approach to real-time patient motion monitoring
WO2020246996A1 (en) * 2019-06-06 2020-12-10 Elekta, Inc. Sct image generation using cyclegan with deformable layers
CN110288641A (en) * 2019-07-03 2019-09-27 武汉瑞福宁科技有限公司 PET/CT and the different machine method for registering of MRI brain image, device, computer equipment and storage medium
JP2022550688A (en) * 2019-09-25 2022-12-05 サトゥル メディカル,インコーポレイテッド Systems and methods for improving low-dose volume-enhanced MRI
CN111242959B (en) * 2020-01-15 2023-06-16 Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences Target region extraction method for multimodal medical images based on a convolutional neural network
US11077320B1 (en) * 2020-02-07 2021-08-03 Elekta, Inc. Adversarial prediction of radiotherapy treatment plans
US11348259B2 (en) * 2020-05-23 2022-05-31 Ping An Technology (Shenzhen) Co., Ltd. Device and method for alignment of multi-modal clinical images using joint synthesis, segmentation, and registration
CN112102294B (en) * 2020-09-16 2024-03-01 推想医疗科技股份有限公司 Training method and device for generating countermeasure network, and image registration method and device
US11995810B2 (en) * 2021-04-27 2024-05-28 City University Of Hong Kong System and method for generating a stained image
CN113763442B (en) * 2021-09-07 2023-06-13 Nanchang Hangkong University Deformable medical image registration method and system
CN114387317B (en) * 2022-03-24 2022-06-17 True Health (Beijing) Medical Technology Co., Ltd. Registration method and device for CT images and three-dimensional MRI images

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592137A (en) * 2011-12-27 2012-07-18 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences Multi-modality image registration method and surgical navigation method based on multi-modality image registration
CN106651875A (en) * 2016-12-08 2017-05-10 Wenzhou Medical University Brain tumor spatio-temporal cooperative segmentation method based on multimodal longitudinal MRI data
CN110021037A (en) * 2019-04-17 2019-07-16 Nanchang Hangkong University Non-rigid image registration method and system based on a generative adversarial network
WO2021061710A1 (en) * 2019-09-25 2021-04-01 Subtle Medical, Inc. Systems and methods for improving low dose volumetric contrast-enhanced mri
CN113554669A (en) * 2021-07-28 2021-10-26 Harbin University of Science and Technology Brain tumor MRI image segmentation method based on a U-Net with an improved attention module

Also Published As

Publication number Publication date
CN116071401A (en) 2023-05-05

Similar Documents

Publication Publication Date Title
CN116071401B (en) Virtual CT image generation method and device based on deep learning
CN111008984B (en) Automatic contour delineation method for normal organs in medical images
Wang et al. RETRACTED: ADVIAN: Alzheimer's Disease VGG-Inspired Attention Network Based on Convolutional Block Attention Module and Multiple Way Data Augmentation
CN108257134B (en) Automatic nasopharyngeal carcinoma lesion segmentation method and system based on deep learning
CN107563434B (en) Brain MRI image classification method and device based on three-dimensional convolutional neural network
EP3362146A1 (en) Pseudo-ct generation from mr data using tissue parameter estimation
CN113826143A (en) Feature point detection
CN108629785B (en) Three-dimensional magnetic resonance pancreas image segmentation method based on self-learning
CN113239755A (en) Medical hyperspectral image classification method based on space-spectrum fusion deep learning
Wu et al. Ultrasound image segmentation method for thyroid nodules using ASPP fusion features
CN117218453A (en) Incomplete multi-mode medical image learning method
CN116051545B (en) Brain age prediction method for bimodal images
CN114048806A (en) Alzheimer disease auxiliary diagnosis model classification method based on fine-grained deep learning
CN115861464A (en) Pseudo-CT synthesis method based on synchronous generation from multimodal MRI
Al-Khasawneh et al. Alzheimer’s Disease Diagnosis Using MRI Images
Wang et al. Deep transfer learning-based multi-modal digital twins for enhancement and diagnostic analysis of brain mri image
CN113538209A (en) Multi-modal medical image registration method, registration system, computing device and storage medium
Fan et al. Graph Reasoning Module for Alzheimer’s Disease Diagnosis: A Plug-and-Play Method
Lei et al. Generative adversarial network for image synthesis
CN112767403A (en) Medical image segmentation model training method, medical image segmentation method and device
CN108596900B (en) Thyroid-associated ophthalmopathy medical image data processing device and method, computer-readable storage medium and terminal equipment
Lei et al. Generative adversarial networks for medical image synthesis
CN116309754A (en) Brain medical image registration method and system based on local-global information collaboration
Garcia-Cabrera et al. Semi-supervised learning of cardiac MRI using image registration
CN113052840B (en) Processing method for low signal-to-noise ratio PET images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant