WO2020198854A1 - Method and system for producing medical images - Google Patents

Method and system for producing medical images

Info

Publication number
WO2020198854A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
modality
image
medical image
medical
Prior art date
Application number
PCT/CA2020/050404
Other languages
French (fr)
Inventor
Reda OULBACHA
Samuel KADOURY
Original Assignee
Polyvalor, Limited Partnership
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Polyvalor, Limited Partnership filed Critical Polyvalor, Limited Partnership
Publication of WO2020198854A1 publication Critical patent/WO2020198854A1/en


Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T11/006 Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2211/00 Image generation
    • G06T2211/40 Computed tomography
    • G06T2211/464 Dual or multimodal imaging, i.e. combining two or more imaging modalities

Abstract

Methods and systems for producing medical images are provided. A first medical image of a first modality is obtained. A second medical image of a second modality is generated, based on the first medical image and using an artificial intelligence, wherein the second medical image is mappable to a third modality of medical image. The second medical image is mapped to a third medical image of the third modality.

Description

METHOD AND SYSTEM FOR PRODUCING MEDICAL IMAGES
TECHNICAL FIELD
[0001] The present disclosure relates to medical imaging, and more specifically to the production of synthetic medical imagery.
BACKGROUND OF THE ART
[0002] Advances in medical imaging techniques have allowed for improved diagnostics and additional precision when performing operations and other medical procedures. Imaging techniques including magnetic resonance imaging (MRI), computed tomography (CT) imaging, positron emission tomography (PET) imaging, and ultrasound imaging are used in different scenarios, including prior to medical operations, during medical operations, and following medical operations.
[0003] Different modalities of medical imaging present different advantages and risks. Certain modalities of radiology-based imagery are used sparingly with patients, because of the danger associated with intense and/or repeated exposure to radiation. Other modalities of medical imagery are used sparingly due to cost considerations and equipment availability. In addition, certain modalities of imagery are not suited for use during medical procedures and operations.
[0004] Therefore, there is a need for improvement.
SUMMARY
[0005] In accordance with a broad aspect, there is provided a method for producing medical images, comprising: obtaining a first medical image of a first modality; generating a second medical image of a second modality based on the first medical image using an artificial intelligence, wherein the second medical image is mappable to a third modality of medical image; and mapping the second medical image to a third medical image of the third modality.
[0006] In some embodiments, generating the second medical image comprises segmenting the first medical image.
[0007] In some embodiments, the method further comprises augmenting a resolution of the first medical image using the artificial intelligence.
[0008] In some embodiments, generating the second medical image comprises performing histogram normalization on the first medical image.
[0009] In some embodiments, generating the third medical image comprises applying a ray-casting procedure to the second medical image.
[0010] In some embodiments, the artificial intelligence is a cycle generative adversarial network.
[0011] In some embodiments, the artificial intelligence is a conditional generative adversarial network.
[0012] In some embodiments, the first modality corresponds to magnetic-resonance imaging images, the second modality corresponds to computed tomography images, and the third modality corresponds to C-arm images.
[0013] In accordance with a further broad aspect, there is provided a method for training an artificial intelligence, comprising: obtaining a plurality of unpaired images comprising first images of a first modality and second images of a second modality; generating, for each of the first images, a corresponding first synthetic image of the second modality using a first artificial intelligence; generating, for each of the second images, a corresponding second synthetic image of the first modality using the first artificial intelligence; and training a second artificial intelligence using a training image set comprising the first images, the first synthetic images, the second images, and the second synthetic images.
[0014] In some embodiments, the first artificial intelligence is a cycle generative adversarial network (GAN), and the second artificial intelligence is a conditional GAN.
[0015] In some embodiments, the cycle GAN and/or the conditional GAN comprises two synthesis convolutional neural networks (CNNs) and two discriminator CNNs.
[0016] In some embodiments, the training image set further comprises a paired image set comprising paired third images of the first and the second modalities resulting from medical imaging.
[0017] In some embodiments, the method further comprises: obtaining a subsequent image of the first modality; and using the trained second artificial intelligence, generating a subsequent image of the second modality based on the subsequent image of the first modality.
[0018] In some embodiments, the method further comprises segmenting the subsequent image.
[0019] In some embodiments, the method further comprises augmenting a resolution of the subsequent image using a third artificial intelligence.
[0020] In some embodiments, the method further comprises performing histogram normalization on the subsequent image.
[0021 ] In some embodiments, the first modality corresponds to magnetic-resonance imaging images, and the second modality corresponds to computed tomography images.
[0022] In accordance with a further broad embodiment, there is provided a system for producing medical images. The system comprises a processing unit and a non-transitory computer-readable medium having stored thereon program instructions. The program instructions are executable by the processing unit for: obtaining a first medical image of a first modality; generating a second medical image of a second modality based on the first medical image using an artificial intelligence, wherein the second medical image is mappable to a third modality of medical image; and generating a third medical image of the third modality based on the second medical image.
[0023] In some embodiments, the artificial intelligence is a conditional generative adversarial network, wherein the first modality corresponds to magnetic-resonance imaging images, the second modality corresponds to computed tomography images, and the third modality corresponds to C-arm images, and wherein obtaining the first medical image comprises performing an MRI procedure of a lumbar region of a patient.
[0024] In some embodiments, the artificial intelligence is a cycle generative adversarial network.
[0025] Features of the systems, devices, and methods described herein may be used in various combinations and may also be used for the system and computer-readable storage medium in various combinations.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] Further features and advantages of embodiments described herein may become apparent from the following detailed description, taken in combination with the appended drawings, in which:
[0027] FIG. 1 illustrates a schematic diagram of an example medical imaging framework, in accordance with at least one embodiment;
[0028] FIGS. 2A-B illustrate schematic diagrams of example artificial intelligence (AI) architectures for converting between modalities of medical images, in accordance with at least one embodiment;
[0029] FIG. 3 illustrates a block diagram of an example AI training architecture, in accordance with at least one embodiment;
[0030] FIG. 4 is a schematic diagram of an example computing system, in accordance with at least one embodiment;
[0031] FIG. 5A is a flowchart illustrating an example method for producing medical images, in accordance with at least one embodiment;
[0032] FIG. 5B is a flowchart illustrating an example implementation of an AI training step of the example method of FIG. 5A, in accordance with at least one embodiment; and
[0033] FIGS. 6A-C are examples of acquired and synthesized medical images.
[0034] It will be noted that throughout the appended drawings, like features are identified by like reference numerals.
DETAILED DESCRIPTION
[0035] There are provided herein methods and systems for producing medical images, including synthetic images, using various artificial intelligence (AI) techniques and approaches. The following discussion will focus on the use of particular types of AIs, though it should be noted that the use of alternative types of AIs, combinations of different types of AIs, and the use of other machine learning approaches more generally are also considered.
[0036] As used herein, the term "artificial intelligence" refers to intelligence displayed by machines, for instance by a computing system. Embodiments of artificial intelligences may exhibit analytical or cognitive intelligence, human-inspired intelligence, including emotional intelligence, or humanized artificial intelligence, including self-awareness. An artificial intelligence can be instructed to perform one or more tasks by recognizing patterns in datasets, by observing human actions, or using specific programming. In addition, an artificial intelligence can be configured for learning additional behaviour, or improving existing behaviour, by repetition of particular tasks or based on the provisioning of new data or observations.
[0037] An artificial intelligence can be implemented using any suitable type of computing system, which can be programmed, taught, or otherwise instructed to perform one or more tasks. In some embodiments, an artificial intelligence consists of computer-readable instructions or program code which can be interpreted by a processing unit to perform the tasks. It should be noted that numerous practical implementations of artificial intelligence are known, including neural networks, deep learning, Bayesian networks, and the like. References to a particular implementation of an artificial intelligence in the following paragraphs should be understood to represent example implementations only, and other implementations are also considered.
[0038] In many medical interventions, physicians or other medical professionals make use of so-called "intra-operative" medical imaging procedures, which allow for the collection of medical images concurrently with the carrying out of the medical intervention. One non-limiting example of an intra-operative imaging procedure is fluoroscopy. Intra-operative medical images may be compared with pre-operative medical images (acquired prior to the medical intervention), or synthetic images derived therefrom, in order to assist medical professionals during the medical intervention. For instance, medical professionals can use pre- and intra-operative medical images to confirm the position of a medical implement or tool, assess the completeness of an ablation procedure, evaluate the alignment of bones, and the like. However, it may occur that the modality of pre-operative image available to a medical professional is not suitably mappable to the modality of intra-operative images obtained when carrying out a medical procedure. Put differently, comparisons and alignment between pre- and intra-operative images can be difficult. This can occur, for example, if the pre- and intra-operative medical imaging procedures operate on different principles, illustrate different types of tissues or bodily structures, and the like. This can also occur when the pre-operative image is a three-dimensional (3D) image, and the intra-operative image is a two-dimensional (2D) image.
[0039] For example, in the case of image-guided spinal interventions, an association between pre-operative MRI imagery and intra-operative C-arm imagery can assist a medical professional to visualize the patient's anatomy in 3D and to obtain an axial view of the vertebrae, which can be applicable for pedicle screw insertion procedures, and to avoid critical structures and soft tissues, including veins and nerves, during instrumentation insertion. However, because MRI and C-arm imaging rely on different physical principles (water excitation vs. bone attenuation), and because of differences in patient posing when acquiring the imagery (prone vs. supine), it can be difficult to perform any kind of registration between MRI and C-arm imagery. CT imaging, however, relies on principles similar to C-arm imaging: techniques for translating MRI imagery into CT imagery could mitigate some of these issues, as MRI imaging is often performed on patients, and CT imagery can be more easily mapped to C-arm imagery.
[0040] The present disclosure provides, inter alia, methods and systems for performing registration between 3D and 2D medical images, including images of a lumbar spine, using CT images which are synthesized from T2-weighted MRI images acquired for various diagnostic purposes. In some embodiments, the methods and systems described herein incorporate a trainable pre-processing pipeline which uses a fully-convolutional residual network (FC-ResNet) or other similar neural network, combined with a low-capacity fully-convolutional network, to normalize input MRI data and to segment vertebral bodies and pedicles within the MRI data. In some embodiments, the methods and systems described herein also incorporate a pseudo-3D cycle generative adversarial network (GAN) architecture, using a synthesis process which includes neighboring slices and a cyclic loss function which can reduce inconsistencies between MRI and CT synthesis by matching differential histograms to multimodal distributions. In some embodiments, the methods and systems described herein additionally incorporate a multi-planar digitally-reconstructed radiograph (DRR) registration approach which aligns the 3D and 2D modalities.
[0041] With reference to FIG. 1, there is shown a framework 100 for producing medical images. The framework 100 can be used in situations where a modality mismatch between pre-operative and intra-operative medical images exists. By synthesizing an intermediary medical image, which is based on the pre-operative medical image and which maps to the intra-operative medical image, medical professionals can be provided with medical images which are mappable to the available intra-operative images. The framework 100 is composed of an image preprocessing module 110, an image synthesis module 120, and a transformation module 130. The framework 100 obtains a pre-operative image of a first modality ("Modality A"), synthesizes an intermediary image of a second modality ("Modality B") based on the pre-operative image, and then uses the intermediary image to produce a synthetic intra-operative image ("Modality C"), which can be compared with actual intra-operative imagery, for instance acquired via imaging instruments during a medical intervention, to assist medical professionals with the medical intervention. Although the following discussion will focus on embodiments operating in the context of lumbar-related medical interventions and images, it should be understood that the techniques disclosed herein can be applied to medical interventions and images employed in different contexts.
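By way of illustration, the overall flow of the framework 100 might be wired together as in the following sketch; the function names are illustrative assumptions, the synthesis step is an identity placeholder for the trained model of the image synthesis module 120, and the projection is a toy parallel-ray approximation rather than a full DRR.

```python
# Illustrative sketch of the framework-100 flow: preprocess a Modality A
# volume, synthesize a Modality B intermediary, and project it to a 2D
# Modality C view. Function names and the toy projection are assumptions.
import numpy as np
from scipy.ndimage import rotate

def preprocess(volume: np.ndarray) -> np.ndarray:
    """Module 110 stand-in: zero-mean, unit-variance intensity normalization."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)

def synthesize_modality_b(volume: np.ndarray) -> np.ndarray:
    """Module 120 stand-in: identity placeholder for the trained generator."""
    return volume.copy()

def project_to_modality_c(volume: np.ndarray, angle_deg: float) -> np.ndarray:
    """Module 130 stand-in: a toy parallel-ray DRR (rotate, then sum rays)."""
    rotated = rotate(volume, angle_deg, axes=(0, 2), reshape=False, order=1)
    return rotated.sum(axis=0)  # 2D projection of the 3D volume

pre_op = np.random.rand(64, 64, 64)            # stand-in pre-operative volume
intra_op_synthetic = project_to_modality_c(
    synthesize_modality_b(preprocess(pre_op)), angle_deg=45.0)
print(intra_op_synthetic.shape)                # (64, 64)
```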
[0042] The image preprocessing module 110 is configured for obtaining a pre-operative image of Modality A and performing one or more preprocessing steps on the pre-operative image. For example, the pre-operative image can be segmented to identify different vertebrae or similar structures present in the image. Segmentation of the pre-operative image can be performed in any suitable fashion, for example using edge- or gradient-detection techniques. Other preprocessing steps, including noise reduction, contrast adjustment, and the like, are also considered. For example, the image preprocessing module 110 can enhance the resolution of the pre-operative image using one or more AI-based resolution enhancing techniques. In another example, the image preprocessing module 110 performs image normalization and/or image bias field correction. In yet another example, the image preprocessing module 110 performs histogram normalization, for instance via a fully-convolutional neural network (FCN).
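By way of illustration, classical intensity standardization and CDF-based histogram matching can stand in for these preprocessing steps; the learned FCN-based normalization described above would replace the matching function, and all names here are illustrative.

```python
import numpy as np

def standardize(img: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance standardization of voxel intensities."""
    return (img - img.mean()) / (img.std() + 1e-8)

def histogram_match(img: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map the intensity distribution of `img` onto that of `reference` by
    matching empirical CDFs: a classical, non-learned stand-in for the
    FCN-based histogram normalization described above."""
    src_vals, src_idx, src_counts = np.unique(
        img.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / img.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    matched = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched[src_idx].reshape(img.shape)

mri = standardize(np.random.rand(32, 32, 32))
ref = standardize(np.random.rand(32, 32, 32))
print(histogram_match(mri, ref).shape)  # (32, 32, 32)
```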
[0043] The image synthesis module 120 acquires the preprocessed pre-operative image and produces, based thereon, an intermediary image of Modality B. The intermediary image can be produced using any suitable technique, including one or more AIs. In some embodiments, the image segmentation performed by the image preprocessing module 110 produces a plurality of segmented images which are each provided to the image synthesis module 120. The image synthesis module 120 can then produce a plurality of intermediary images based on each of the segmented images. Other embodiments are also considered.
[0044] The transformation module 130 obtains the intermediary image and applies one or more transformation techniques to produce a synthetic intra-operative image of Modality C. The transformation module 130 then outputs the intra-operative image, for instance to a computer system or other device, which can use the synthetic intra-operative image in a comparison with actual intra-operative imagery, as appropriate.
[0045] The transformation techniques employed by the transformation module 130 can include digitally-reconstructed radiograph (DRR) techniques, 2D slice extraction techniques, and the like. DRR techniques rely on the fact that images of Modality B and C, despite being of differing dimensionality (i.e., 3D vs. 2D), are quasi-intramodal, since both Modality B and Modality C images rely on similar imaging principles. For example, in the case of CT images and C-arm images, both rely on X-ray absorption. DRR techniques can use ray casting or other approaches, as appropriate. For example, the transformation module 130 can employ an optimization procedure which determines an optimal rigid transformation to apply to a CT image that best matches a target C-arm image. 2D slice extraction techniques rely on the fact that 2D images can be extracted from 3D volumetric images. For example, in the case of ultrasound images, a "2D slice" of a 3D volumetric ultrasound image can be extracted, which can then be registered against a 2D image. Other approaches are also considered. In some cases, the transformation module 130 performs image transformation between 3D and 2D image modalities. In some embodiments, the transformation techniques employed by the transformation module 130 can be performed at a plurality of angles, for instance including an anteroposterior angle, a lateral angle, and/or angles at -45° and 45°. Other angles are also considered.
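As an illustration of the 2D slice extraction technique, the following sketch pulls an oblique 2D slice out of a 3D volume; the plane parameterization, sampling size, and use of SciPy trilinear interpolation are illustrative assumptions rather than a prescribed implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_slice(volume, center, normal, size=64, spacing=1.0):
    """Extract an oblique 2D slice from a 3D volume, e.g. pulling a 2D view
    out of a volumetric ultrasound image for registration against a 2D scan."""
    normal = np.asarray(normal, dtype=float)
    normal /= np.linalg.norm(normal)
    # Build two in-plane axes orthogonal to the slice normal.
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:            # normal parallel to z: use y instead
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    offsets = (np.arange(size) - size / 2) * spacing
    gu, gv = np.meshgrid(offsets, offsets, indexing="ij")
    pts = (np.asarray(center, dtype=float)[:, None, None]
           + u[:, None, None] * gu + v[:, None, None] * gv)
    return map_coordinates(volume, pts, order=1, mode="nearest")

vol = np.random.rand(64, 64, 64)
print(extract_slice(vol, center=(32, 32, 32), normal=(0, 1, 1)).shape)  # (64, 64)
```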
[0046] The framework 100 described hereinabove can be used for a variety of modalities of images. Table 1 below provides a non-exhaustive list of the modalities of images considered, and of the particular corresponding modalities (A, B, C):
[Table 1 appears as an image in the original document; it lists corresponding image modality triplets (Modality A, Modality B, Modality C), one example being MRI, CT, and C-arm imagery.]
Table 1: Listing of corresponding image modalities
[0047] For example, a medical professional is performing a cardio-vascular intervention during which 2D intra-operative images will be acquired with a C-arm device (Modality C). The patient was required to obtain an MRI scan prior to the medical intervention, and thus a pre-operative MRI was obtained (Modality A). The framework 100 can be used to produce, based on the pre-operative MRI image, a synthetic CT image, which can then be used to produce a synthetic 2D C-arm image for comparison with 2D C-arm imagery acquired during the medical intervention. For instance, an optimization algorithm compares the output of the transformation module 130 with medical imagery obtained through fluoroscopy or similar techniques. The optimization algorithm can then adjust the parameters of the transformation module 130. In some cases, validation by a human operator, for instance a medical professional, can assist in the operation of the framework 100. Other examples, using other combinations of corresponding images (as shown in Table 1), are also considered.
[0048] With reference to FIG. 2A, there is shown an example AI architecture 200 which forms part of the image synthesis module 120. The embodiment illustrated in FIG. 2A relates particularly to conversion between MRI and CT images, though it should be understood that other embodiments of the AI architecture 200 could be used to perform conversions between other sorts of medical images. For instance, the AI architecture can be doubled, as illustrated in FIG. 2B, to form a cycle-consistent GAN, which will be described in greater detail hereinbelow. In some embodiments, the AI architecture of FIG. 2A serves to convert between images of Modality A and Modality B.
[0049] The AI architecture 200 consists of four deep neural networks: two synthesis convolutional neural networks (CNNs) and two discriminator CNNs. In the illustrated embodiment, the AI architecture 200 includes an MRI discriminator 210, an MRI-to-CT generator 220, a CT discriminator 230, and a CT-to-MRI generator 240. In this fashion, the AI architecture 200 forms a generative adversarial network (GAN). GANs consist of networks that are adversarial in nature: a generator to produce images from noise and a discriminator which is trained to detect synthetic images. The generator and discriminator are jointly trained to solve for an optimal Nash equilibrium, where the generator generates images that the discriminator cannot distinguish from real imagery.
[0050] The MRI discriminator 210 and the MRI-to-CT generator 220 are provided with input MRI images 202 for training purposes. The MRI discriminator 210 and the MRI-to-CT generator 220 can be trained using any suitable training methodology, including both supervised and unsupervised learning techniques. Similarly, the CT discriminator 230 and the CT-to-MRI generator 240 can be provided with synthetic CT images 204 produced by the MRI-to-CT generator 220 for training purposes. Any suitable training methodology can be employed. The CT-to-MRI generator 240 can then produce reconstructed MRI images 206 based on the synthetic CT images 204. In some embodiments, the MRI discriminator 210, the MRI-to-CT generator 220, the CT discriminator 230, and the CT-to-MRI generator 240 are trained for cycle consistency. For example, the CT-to-MRI generator 240 can produce synthetic MRI images, for instance the reconstructed MRI images 206, which can be provided to the MRI discriminator 210 to assist in further training.
[0051] It should be noted that the AI architecture 200 can be implemented using a cycle GAN or a conditional GAN. Cycle GANs do not require paired training datasets, but can be less accurate and more resource-intensive. Conditional GANs, conversely, have higher accuracy but require paired datasets, which can be difficult to collect. The present disclosure provides an approach to training the AI architecture 200 to combine the abilities of both cycle GANs and conditional GANs.
[0052] With reference to FIG. 2B, an alternative AI architecture 250 is illustrated. The AI architecture 250 includes the MRI discriminator 210, the MRI-to-CT generator 220, the CT discriminator 230, and the CT-to-MRI generator 240 of the AI architecture 200 illustrated in FIG. 2A. Input MRI images 202, which can be obtained from a database 201, are provided to the MRI-to-CT generator 220, which produces synthetic CT images 204. The synthetic CT images 204 can be used to train the CT discriminator 230 and the CT-to-MRI generator 240, as described hereinabove. The output of the CT-to-MRI generator 240, namely the reconstructed MRI images 206, can be reused for training the MRI-to-CT generator 220. In some embodiments, the reconstructed MRI images 206 are returned to the database 201, and are then used for training the MRI-to-CT generator 220. Similarly, the reconstructed MRI images 206 can also be provided to the MRI discriminator 210.
[0053] The CT-to-MRI generator 240 can, in turn, be provided with input CT images 252, which can be obtained from a database 251. It should be noted that the databases 201, 251 can be separate databases, as illustrated in FIG. 2B, or can be a single common database, as appropriate. The input CT images 252 are provided to the CT-to-MRI generator 240, which produces synthetic MRI images 254. The synthetic MRI images 254 can be used to train the MRI discriminator 210 and the MRI-to-CT generator 220, much in the same way as the synthetic CT images 204 are used. The MRI-to-CT generator 220 produces reconstructed CT images 256, which can be reused for training the CT-to-MRI generator 240. In some embodiments, the reconstructed CT images 256 are returned to the database 251, and are then used for training the CT-to-MRI generator 240. Similarly, the reconstructed CT images 256 can also be provided to the CT discriminator 230.
[0054] The AI architecture 250 forms a cycle GAN. In some embodiments, the MRI and CT discriminators 210, 230 (collectively, "the discriminators") are implemented by way of a fully-convolutional PatchGAN. The patch size and stride can be selected as any suitable value, for instance a patch size of 64x64 and a stride of 16. In some cases, each discriminator is composed of a succession of 4x4 filters with a stride of 2 and a padding of 1, followed by a final 4x4 kernel with zero padding and unit stride. In some embodiments, the MRI-to-CT and CT-to-MRI generators 220, 240 (collectively, "the generators") are implemented by way of a ResNet neural network. The ResNet neural networks can be composed of a series of downsampling convolutional layers, followed by a number of residual blocks and a series of upsampling transpose convolutional layers to recover the original dimensions of the images. For instance, the ResNet neural networks can employ 9 residual blocks. The forward path for each residual block consists of a unit reflection pad, a 3x3 convolution with zero padding and a unit stride, an InstanceNorm block, a ReLU block, a Dropout block, another unit-stride, zero-padding 3x3 convolution, and a unit reflection pad. The output of the forward path can be summed with the input to the forward path.
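One plausible reading of these building blocks, in PyTorch, is sketched below; the channel counts, dropout rate, and number of downsampling stages are illustrative assumptions rather than the exact configuration.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One generator residual block, per the description above: reflection
    pad -> 3x3 conv -> InstanceNorm -> ReLU -> Dropout -> reflection pad ->
    3x3 conv, with the result summed with the block's input."""
    def __init__(self, channels: int = 256, p_drop: float = 0.5):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=0),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Dropout(p_drop),
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=0),
        )

    def forward(self, x):
        return x + self.body(x)

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator: a stack of 4x4/stride-2/pad-1 convolutions
    followed by a final 4x4 kernel with unit stride, as described above."""
    def __init__(self, in_ch: int = 1, base: int = 64):
        super().__init__()
        layers, ch = [], in_ch
        for out_ch in (base, base * 2, base * 4):
            layers += [nn.Conv2d(ch, out_ch, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch = out_ch
        layers.append(nn.Conv2d(ch, 1, 4, stride=1, padding=0))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```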
[0055] The loss for the generators 220, 240 can be expressed, in the standard least-squares cycle-consistent form, as:

$$\mathcal{L}(G_{CT}, G_{MRI}) = \mathbb{E}_{x}\big[(D_{CT}(G_{CT}(x)) - 1)^2\big] + \mathbb{E}_{y}\big[(D_{MRI}(G_{MRI}(y)) - 1)^2\big] + \lambda_{cycle}\Big(\mathbb{E}_{x}\big[\lVert G_{MRI}(G_{CT}(x)) - x\rVert_1\big] + \mathbb{E}_{y}\big[\lVert G_{CT}(G_{MRI}(y)) - y\rVert_1\big]\Big) \tag{1}$$

where $x$ denotes MRI samples and $y$ denotes CT samples, and the loss for the discriminators 210, 230 can be expressed as:

$$\mathcal{L}(D_{CT}, D_{MRI}) = \mathbb{E}_{y}\big[(D_{CT}(y) - 1)^2\big] + \mathbb{E}_{x}\big[D_{CT}(G_{CT}(x))^2\big] + \mathbb{E}_{x}\big[(D_{MRI}(x) - 1)^2\big] + \mathbb{E}_{y}\big[D_{MRI}(G_{MRI}(y))^2\big] \tag{2}$$
[0056] At each iteration, equation (1) above can be minimized with respect to the generators 220, 240, and then equation (2) above can be minimized with respect to the discriminators 210, 230. In some embodiments, a traditional image domain transfer on single-slice 2D images can be replaced with a pseudo-3D strategy, which, in some instances, is a computationally effective alternative to 3D convolutions. In some embodiments, neighbouring sagittal slices are grouped in quadruplets and stacked along a dimension associated with the channel, thereby forming a thick sagittal slice. In some other embodiments, neighbouring sagittal slices are grouped in triplets, or in smaller groups. Larger groups are also considered.
[0057] In some embodiments, the AI architecture 250 is trained to perform image domain transfer on the thickened slices rather than on 2D images. In this fashion, neighbouring 3D information can be factored in while still maintaining a reasonable memory footprint, contrary to standard 3D convolutions. Under this scheme, a volume of N singleton slices is equivalent to N - 3 thickened slices. The thickened slices can be recombined into a volume of N singleton slices by taking the first three slices of the first thickened slice; for each subsequent thickened slice, only the third slice is added. A final thickened slice contributes the remaining slices, in order to complete the set of N slices, as shown in the sketch below. This recombination approach can, in certain embodiments, ensure that the slices not in the extremities of the images come from thickened slices with at least three overlapping slices, which can provide continuity in the shared 3D information.
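A minimal sketch of the quadruplet thickening and this recombination rule, assuming the sagittal axis is the first array dimension, might read:

```python
import numpy as np

def thicken(volume: np.ndarray) -> np.ndarray:
    """Group neighbouring sagittal slices into overlapping quadruplets stacked
    along the channel dimension: N singleton slices -> N - 3 thick slices."""
    n = volume.shape[0]
    return np.stack([volume[i:i + 4] for i in range(n - 3)])  # (N-3, 4, H, W)

def recombine(thick: np.ndarray, n: int) -> np.ndarray:
    """Rebuild N singleton slices from N - 3 thickened slices: the first
    thick slice contributes its first three slices, each subsequent one
    contributes its third slice, and the last one completes the set."""
    out = [thick[0, 0], thick[0, 1], thick[0, 2]]
    for t in thick[1:]:
        out.append(t[2])
    out.append(thick[-1, 3])
    return np.stack(out)[:n]

vol = np.random.rand(12, 64, 64)        # 12 sagittal slices
thick = thicken(vol)                    # shape (9, 4, 64, 64)
rec = recombine(thick, n=12)
print(thick.shape, rec.shape, np.allclose(rec, vol))  # exact round trip
```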
[0058] With reference to FIG. 3, a training architecture 300 for the AI architecture 200 is illustrated. The training architecture 300 aims to produce a trained conditional GAN 330 for use in the image synthesis module 120 of the framework 100, since conditional GANs are more accurate. In order to provide the conditional GAN 330 with the requisite paired training dataset, a cycle GAN 320 is employed.
[0059] The training architecture 300 uses an initial paired dataset 302 and an initial unpaired dataset 304. Each of the datasets 302, 304 consists of medical images of the modalities relevant for training the AI architecture 200. For the AI architecture 200 of FIG. 2A, the datasets 302, 304 contain MRI and/or CT imagery, for example which was collected from patients, or is otherwise available.
[0060] The initial paired dataset 302 includes a plurality of MRI and CT imagery. The imagery are said to be paired because, for each of the MRI imagery, the dataset 302 includes corresponding CT imagery, which illustrates substantially the same bodily structure. For example, both an MRI scan and a CT scan can be performed for the same portion of a patient's body. The initial unpaired dataset 304 can include MRI and/or CT imagery, but the imagery of one modality is not provided with corresponding imagery of the other modality, for instance on a per-patient basis.
[0061] The imagery in the datasets 302, 304 are provided to the image preprocessing module 310, which can perform various image processing tasks. These can include image segmentation, noise reduction, contrast adjustment, and the like. For example, the image preprocessing module 310 can include one or more FC-ResNet neural networks which employ a low-capacity fully-convolutional neural network (FCN). In some embodiments, the image preprocessing module 310 can enhance the resolution of the imagery in the datasets 302, 304 using one or more AI-based resolution enhancing techniques. In some other embodiments, the image preprocessing module 310 performs image normalization and/or image bias field correction. For example, the image preprocessing module 310 can perform zero-mean and/or unit-variance standardization, voxel value downscaling, and the like. In still further embodiments, the image preprocessing module 310 performs histogram normalization, which can be implemented by way of an AI, for instance an FCN. The images originally part of the dataset 302 are stored in the combined paired dataset 306. The images originally part of the dataset 304 are provided to the cycle GAN 320. In some embodiments, the preprocessing module 310 can employ one or more iterative processes, for instance an iterative process by which a normalized image is subsequently refined by re-segmentation using the FC-ResNet neural network. The preprocessing module 310 can also perform cropping operations, for instance to select particular portions of images associated with a lumbar spine region, or the like.
[0062] In some embodiments, the histogram matching described hereinabove, for instance the matching of intensity-distribution histograms between multi-modal images during the production of synthetic images, can be used to accelerate adversarial learning. Computation of histograms for continuous variables can require sequential operations which are not differentiable, for instance quantization and binning, which can make it difficult to make use of backpropagation. In some embodiments, the quantization and binning are therefore performed using a set of smooth functions, which allows for backpropagation.
[0063] For example, taking two real-valued vectors $X$ and $Y$ of size $n$ and having variables bounded in the interval $(m, M)$, vectors $\hat{X}$ and $\hat{Y}$ can be defined as

$$\hat{X} = \left\lfloor n_{bins}\,\frac{X - m}{M - m} \right\rfloor, \qquad \hat{Y} = \left\lfloor n_{bins}\,\frac{Y - m}{M - m} \right\rfloor \tag{3}$$

where vectors $\hat{X}$ and $\hat{Y}$ have integer values between 0 and $n_{bins} - 1$.
[0064] The frequency count histograms of vectors $\hat{X}$ and $\hat{Y}$ and the joint $(\hat{X}, \hat{Y})$ can be constructed assuming a Dirichlet prior on the joint histograms, where each $(\hat{X}, \hat{Y})$ has $k$ imaginary observations. For elements $i, j \in \{0, \ldots, n_{bins} - 1\}$, and without loss of generality for $H^{k}_{\hat{X}}$, the frequency counts for the histogram vector $H^{k}_{\hat{X}}$ and for the joint histogram matrix $H^{k}_{\hat{X}\hat{Y}}$ can be defined as:

$$H^{k}_{\hat{X}}(i) = k + \sum_{p=1}^{n} \delta_i\big(\hat{X}_p\big) \tag{4}$$

$$H^{k}_{\hat{X}\hat{Y}}(i, j) = k + \sum_{p=1}^{n} \delta_i\big(\hat{X}_p\big)\,\delta_j\big(\hat{Y}_p\big) \tag{5}$$

in which $\delta_i$ is the discrete Dirac delta function, shifted to the integer value $i$. The Dirichlet prior can be verified by marginalizing the joint histogram matrix, either via $\hat{X}$ or $\hat{Y}$, and by validating for equality with the corresponding marginal histogram vector. From a formalism standpoint, two portions of equations (4) and (5) are problematic for differentiability and for backpropagation: the floor function (symbolized as $\lfloor \cdot \rfloor$), and the Dirac delta function.
[0065] In order to mitigate the issues posed by the floor function and the Dirac delta function, alternative functions are considered which are continuous and differentiable, and which allow for network backpropagation through scalar-valued functions using histograms. In this case, functions $g$ and $f$ are defined (for a value of $\alpha > 0$):

$$g: x \mapsto \tfrac{1}{2}\big(1 + \tanh(\alpha x)\big), \qquad f: x \mapsto \sum_{i=0}^{n_{bins}-1} g(x - i) \tag{6}$$
[0066] Function $f$ can be used to substitute the floor function. The Dirac delta function, when composed with the floor function, serves as an indicator function for values in the interval $(i, i + 1)$. Since $f$ maps any value in the interval $(i, i + 1)$ to a neighbourhood of that same interval, a smoothed Dirac delta function can serve as a standalone smooth indicator for values within this interval. Via this substitution, the composition of both smooth substitute functions performs as a soft count of values within the interval $(i, i + 1)$:

$$u_i: x \mapsto g(x - i) - g\big(x - (i + 1)\big) \tag{7}$$

$$u_{ij}: x, y \mapsto u_i(x)\,u_j(y) \tag{8}$$
[0067] In equations (7) and (8), $u_i$ is a univariate indicator function, and $u_{ij}$ is a bivariate indicator function; $u_{ij}$ is used to compute the joint histogram of $(\hat{X}, \hat{Y})$. In addition, the functions $u_i$ and $u_{ij}$ serve to produce histograms which naturally integrate to the value 1, because the sum

$$\sum_{i=0}^{n_{bins}-1} u_i\big(\hat{X}_p\big) = g\big(\hat{X}_p\big) - g\big(\hat{X}_p - n_{bins}\big)$$

is a telescopic sum that evaluates very close to the value 1 for $\hat{X}_p \in [0, n_{bins} - 1]$ if $\alpha$ is not too small. As a result, the loss term for the proposed histogram approach, denoted $\mathcal{L}_{hist}(G_{CT}, G_{MRI})$ in equation (9) and used in equation (10) hereinbelow, penalizes the mismatch between these smooth differential histograms. [Equation (9) is reproduced as an image in the original document.]
[0068] This proposed approach, using histogram vectors and matrices, can allow for the optimization of functions, including mutual-information functions, KL-divergence functions, or other similar metrics requiring marginal or joint histograms.
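A minimal PyTorch sketch of this differentiable histogram construction follows, using equations (6) and (7); the default bin count, prior count $k$, and $\alpha$ mirror the hyperparameter values quoted in paragraph [0076] hereinbelow, and all function names are illustrative.

```python
import torch

def g(x, alpha=20.0):
    """Smooth step of equation (6): g(x) = 0.5 * (1 + tanh(alpha * x))."""
    return 0.5 * (1.0 + torch.tanh(alpha * x))

def soft_histogram(x, n_bins=32, m=0.0, M=1.0, k=1.0, alpha=20.0):
    """Differentiable frequency-count histogram of a 1D tensor, using the
    smooth interval indicators u_i of equation (7) in place of quantization
    and binning, plus k imaginary observations per bin (the Dirichlet prior).
    The joint histogram of equation (5) would follow from
    u_ij(x, y) = u_i(x) * u_j(y)."""
    x_hat = n_bins * (x - m) / (M - m)            # soft analogue of eq. (3)
    i = torch.arange(n_bins, dtype=x.dtype)       # bin indices 0..n_bins-1
    u = (g(x_hat[:, None] - i[None, :], alpha)
         - g(x_hat[:, None] - (i[None, :] + 1.0), alpha))
    return k + u.sum(dim=0)                       # shape: (n_bins,)

x = torch.rand(1000, requires_grad=True)
h = soft_histogram(x)
h.sum().backward()                                # gradients flow back to x
print(h.shape, x.grad is not None)                # torch.Size([32]) True
```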
[0069] Cycle GAN 320 is trained to perform MRI-to-CT conversion, as well as CT-to-MRI conversion. In some embodiments, the cycle GAN 320 consists of an implementation of the AI architecture 200 of FIG. 2A. The cycle GAN 320 can be trained in any suitable fashion, including using the imagery of the dataset 304. The cycle GAN 320 takes the unpaired imagery of the dataset 304 and produces, for each of the imagery, synthetic corresponding images, which can be associated with the original imagery. In this fashion, the cycle GAN 320 serves to augment the availability of paired images which can be used to train the conditional GAN 330.
[0070] The synthetically-produced paired imagery provided by the cycle GAN 320 can be added to the combined paired dataset 306, thereby creating an augmented training dataset. The combined paired dataset 306 can then be used to train the conditional GAN 330. In some embodiments, the images produced by the cycle GAN 320 are provided to the image preprocessing module 310 for preprocessing prior to being associated with their respective pairing imagery and provided to the combined paired dataset 306. Once the conditional GAN 330 is trained, it can be used as part of the image synthesis module 120 of the framework 100.
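The resulting augmentation step can be summarized by the following sketch, in which `mri_to_ct` and `ct_to_mri` stand in for the trained cycle GAN generators and all names are illustrative:

```python
# Pool real (MRI, CT) pairs with synthetic pairs produced by the cycle GAN,
# yielding the combined paired dataset used to train the conditional GAN.
def build_combined_paired_dataset(paired, unpaired_mri, unpaired_ct,
                                  mri_to_ct, ct_to_mri):
    combined = list(paired)                       # real (MRI, CT) pairs
    for mri in unpaired_mri:
        combined.append((mri, mri_to_ct(mri)))    # (real MRI, synthetic CT)
    for ct in unpaired_ct:
        combined.append((ct_to_mri(ct), ct))      # (synthetic MRI, real CT)
    return combined
```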
[0071] The objective of the cycle GAN 320 is to estimate a joint distribution across both image domains (e.g. MRI and CT) based on samples from their marginal distributions. In contrast, the conditional GAN 330 learns to estimate a conditional distribution from one of the domains (e.g. MRI) to the other (e.g. CT), based on samples from their joint distribution. Because a training approach based on unpaired images relies on marginal distributions thereof, the estimation problem for the cycle GAN 320 is under-constrained, and there are many possible joint distributions the network can learn. It is the cycle-consistency loss that adds an extra constraint to the problem of solving for the target joint distribution, thus making the optimization problem better constrained.
[0072] The conditional GAN 330, on the other hand, does not attempt to solve an under-constrained problem, as the joint distribution is already embedded in the sample data by virtue of being paired. The cycle-consistency constraint implies that mappings across domains are one-to-one, allowing for translation from one image modality to another and then reconstruction back to the original image modality. However, in practice, the conditional distribution is what is of interest, because the goal is to estimate a posterior distribution (e.g., a CT image) given an observation (e.g. an MRI image).
[0073] By using the combination of the cycle GAN 320 and the conditional GAN 330, estimation of a joint distribution and a conditional distribution can be performed concurrently. The estimation of the conditional distribution can then be refined using the joint distribution. In this fashion, the estimation of the joint distribution, based on the marginal distribution, can be constrained through cycle-consistency and through the conditional estimate of the conditional GAN 330, made on a smaller joint sample distribution. In some embodiments, upsampling procedures can be applied to the synthetic CT images produced by the cycle GAN 320. For instance, after generating the synthetic corresponding images, the cycle GAN 320 can provide the images to the preprocessing module 310 to improve the quality of the images.
[0074] In some embodiments, a Wasserstein GAN variant optimization function can be used for the conditional GAN 330. Traditional GANs experience weak generator gradients in early training stages. The Wasserstein GAN variant is based on estimating an Earth-mover (EM) distance metric between distributions, and optimizes based on the resulting objective function. For example, the estimation function can be a 1-Lipschitz function. In some embodiments, a discriminator gradient penalty can be applied to enforce a unit gradient norm.
[0075] In some other embodiments, the training of the conditional GAN 330 can start with a Wasserstein GAN objective function until convergence has been achieved. At that point, the objective function can be replaced with a classical loss function, since the GAN will already have partially learned the objective task. For example, the loss function can be a binary cross-entropy function.
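A sketch of this two-phase objective follows; the phase flag, the choice of binary cross-entropy, and the omission of the gradient penalty term mentioned above are simplifying assumptions:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor,
                       wasserstein_phase: bool) -> torch.Tensor:
    """Critic/discriminator loss for the two-phase training schedule."""
    if wasserstein_phase:
        # Wasserstein critic objective: score real samples above fakes.
        return d_fake.mean() - d_real.mean()
    # After convergence of the Wasserstein phase, switch to a classical
    # binary cross-entropy loss on the partially-learned task.
    return (bce(d_real, torch.ones_like(d_real))
            + bce(d_fake, torch.zeros_like(d_fake)))
```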
[0076] In some further embodiments, the training of the conditional GAN 330 can begin with reorienting volumes in the images to be aligned along anatomical axes and reformatted in the sagittal plane, for instance to place the back of a patient or subject on a right side of the image, and the spine in an upright position. In some instances, data augmentation procedures can be performed. In some other instances, random flipping of the images is done in one or more dimensions (width, height, and/or depth). In some cases, training of the discriminators 210, 230 can be performed using random images obtained from a history of the most recent 50 images produced by the generators 220, 240, rather than strictly the most recent images, for regularization purposes. Training can be done with any suitable learning rate, for any suitable duration (for instance, as measured in epochs), using any suitable linear learning rate decay, and the like. For instance, using hyperparameters $\lambda_{cycle} = 10$, $\lambda_{hist} = 0.1$, $k = 1$, $n_{bins} = 32$, and $\alpha = 20$, the overall loss function $\mathcal{L}$ can be defined as:
$$\mathcal{L} = \mathcal{L}(G_{CT}, G_{MRI}) + \mathcal{L}(D_{CT}, D_{MRI}) + \mathcal{L}_{hist}(G_{CT}, G_{MRI}) \tag{10}$$
It should be noted, however, that these values are solely provided for the purpose of illustration, and other values are considered.
[0077] With reference to FIG. 4, one or more of the framework 100, the AI architecture 200, and/or one or more elements of the training architecture 300 can be implemented by a computing device 410, comprising a processing unit 412 and a memory 414 which has stored therein computer-executable instructions 416. The processing unit 412 may comprise any suitable devices configured to cause a series of steps to be performed such that instructions 416, when executed by the computing device 410 or other programmable apparatus, may cause the functions/acts/steps specified in the techniques described herein to be executed. The processing unit 412 may comprise, for example, any type of general-purpose microprocessor or microcontroller, a digital signal processing (DSP) processor, a central processing unit (CPU), an integrated circuit, a field programmable gate array (FPGA), a reconfigurable processor, other suitably programmed or programmable logic circuits, or any combination thereof.
[0078] The memory 414 may comprise any suitable known or other machine-readable storage medium. The memory 414 may comprise non-transitory computer readable storage medium, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. The memory 414 may include a suitable combination of any type of computer memory that is located either internally or externally to the device, for example random-access memory (RAM), read-only memory (ROM), compact disc read-only memory (CDROM), electro-optical memory, magneto-optical memory, erasable programmable read-only memory (EPROM), electrically-erasable programmable read-only memory (EEPROM), Ferroelectric RAM (FRAM), or the like. Memory 414 may comprise any storage means (e.g., devices) suitable for retrievably storing machine-readable instructions 416 executable by processing unit 412.
[0079] With reference to FIG. 5A, there is shown a method 500 for producing medical images. At step 502, a medical image of a first modality is obtained, for instance MRI imagery. Optionally, at step 504, a GAN is trained, for instance using the training architecture 300 described hereinabove. The end result of step 504 can be a trained GAN, for instance the cycle GAN 320 or the conditional GAN 330. It is considered, however, that in certain embodiments of the method 500, the GAN is already trained, and further training is not required.
[0080] At step 506, a second medical image of a second modality is generated based on the first medical image using the trained GAN. The second medical image is a synthetic image, and is, for example, a CT image. The CT image can be produced by the cycle GAN 320 or the conditional GAN 330. In some embodiments, the first medical image is subjected to one or more preprocessing steps prior to being used as a basis for the second medical image. At step 508, a third medical image of a third modality is generated based on the second medical image using a DRR procedure. The third medical image is a synthetic image, and is, for example, a 2D C-arm image.
[0081] In some embodiments, the DRR procedure employs a Siddon-Jacobs algorithm to compute a radiological path through the synthesized CT images or CT volumes. In some instances, the Siddon-Jacobs algorithm is implemented using CUDA via one or more graphics processing units (GPUs), using linear absorption coefficients for an equivalent energy rating of 30 keV. C-arm images can be modeled as pinhole cameras, in which each view has an intrinsic and an extrinsic camera matrix. The pinhole cameras can be set up with any suitable number of views. In some cases, the relative camera views remain fixed, and a rigid transformation is used to move the CT images or volumes to enable the DRR procedure. Region-of-interest (ROI) selection can be performed after a vertebral level is selected, defining an ROI around a vertebral body and associated pedicles to initiate the optimization process. In some embodiments, the optimization process minimizes an objective function by summing negative normalized cross-correlations (NCCs) over the aforementioned relative camera views.
[0082] Once the third medical image has been generated, it can be used for comparison purposes with acquired intra-operative imagery, for instance to assist with instrument insertion, ablation, and the like. Other uses are also considered.
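A minimal sketch of the multi-view objective described in paragraph [0081] is given below; `render_drr` stands in for the GPU-based Siddon-Jacobs projector and is an assumed callable, and `transform_params` would encode the six rigid-transformation parameters.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two images of equal shape."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def registration_objective(transform_params, ct_volume, carm_views, render_drr):
    """Objective for the rigid 3D/2D registration: the sum of negative NCCs
    between each acquired C-arm view and the DRR rendered from the rigidly
    transformed CT volume. `render_drr` is an assumed projector callable."""
    return sum(-ncc(view, render_drr(ct_volume, transform_params, idx))
               for idx, view in enumerate(carm_views))
```

The returned scalar can then be handed to a generic optimizer, for instance scipy.optimize.minimize, over the rigid-transformation parameters.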
[0083] With reference to FIG. 5B, there is shown an embodiment of step 504 of the method 500. It should be understood that the embodiment of step 504 illustrated in FIG. 5B is one example embodiment, and others are considered.
[0084] At step 542, a plurality of unpaired images, comprising first images of a first modality and second images of a second modality, are obtained. For instance, the unpaired images are the images in the initial unpaired dataset 304 of FIG. 3, which include MRI images of Modality A and CT images of Modality B.
[0085] At step 544, a corresponding first synthetic image is generated for each of the first images using a first GAN, for example the cycle GAN 320. The first images and the first synthetic images together form a first paired image set. For example, the first images are MRI imagery, and the cycle GAN 320 is used to generate synthetic CT images which correspond to the MRI imagery.
[0086] At step 546, a corresponding second synthetic image is generated for each of the second images using the cycle GAN 320. The second images and the second synthetic images together form a second paired image set. For example, the second images are CT imagery, and the cycle GAN 320 is used to generate synthetic MRI images which correspond to the CT imagery.
[0087] The first and second paired image sets can then be combined to form a training image set. In certain cases, for instance when considering conditional GANs, training procedures require the use of a paired image set. For example, in order to train the conditional GAN 330 to perform MRI-to-CT image synthesis, the conditional GAN 330 can require access to the paired image set, which is composed of pairs of MRI and CT images which correspond to one another.
[0088] At step 548, a second GAN, for instance the conditional GAN 330, is trained using the training image set, which includes the first and the second paired image sets produced at steps 544 and 546, respectively. Training the conditional GAN 330 can be accomplished using any suitable approaches and techniques, including those described hereinabove.
[0089] With reference to FIGS. 6A-C, there are shown different medical images. FIG. 6A is an acquired MRI image of a spinal area of a patient. FIG. 6C is an acquired CT image of the same spinal area of the same patient. FIG. 6B is a synthetically-generated CT image based on the MRI image of FIG. 6A, for instance generated by the trained conditional GAN 330. Similarities between the synthetic CT image of FIG. 6B and the acquired CT image of FIG. 6C are notable.
[0090] In some embodiments, some elements of the image preprocessing can be manually tuned, for instance by an operator of the systems disclosed herein. Input tools can be provided to the operator to adjust the contrast levels and the like. In some embodiments, the use of the pseudo-3D GAN, as described hereinabove, can improve noise levels in certain views, and/or can result in improved colorization of certain image features, for instance when combined with the use of the histogram loss term. In some embodiments, the use of the cycle GAN, which may not require voxel-to-voxel or image-to-image correspondence, can be used to produce plausible synthetic images from which original images can be reconstructed. For example, synthetic CT images produced from MRI images may ignore surgical clamps and/or screws which would normally be present in actual CT images.
[0091] The methods and systems for producing medical images described herein may be implemented in a high-level procedural or object-oriented programming or scripting language, or a combination thereof, to communicate with or assist in the operation of a computer system, for example the computing device 410. Alternatively, the methods and systems described herein may be implemented in assembly or machine language. The language may be a compiled or interpreted language. Program code for implementing the methods and systems described herein may be stored on a storage media or a device, for example a ROM, a magnetic disk, an optical disc, a flash drive, or any other suitable storage media or device. The program code may be readable by a general or special-purpose programmable computer for configuring and operating the computer when the storage media or device is read by the computer to perform the procedures described herein. Embodiments of the methods and systems described herein may also be considered to be implemented by way of a non-transitory computer-readable storage medium having a computer program stored thereon. The computer program may comprise computer-readable instructions which cause a computer, or more specifically the processing unit 412 of the computing device 410, to operate in a specific and predefined manner to perform the functions described herein.
[0092] Computer-executable instructions may be in many forms, including program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Typically the functionality of the program modules may be combined or distributed as desired in various embodiments.
[0093] The above description is meant to be exemplary only, and one skilled in the art will recognize that changes may be made to the embodiments described without departing from the scope of the invention disclosed. Still other modifications which fall within the scope of the present invention will be apparent to those skilled in the art, in light of a review of this disclosure.
[0094] Various aspects of the method and system disclosed herein may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing and are therefore not limited in their application to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings. For example, aspects described in one embodiment may be combined in any manner with aspects described in other embodiments. Although particular embodiments have been shown and described, it will be obvious to those skilled in the art that changes and modifications may be made without departing from this invention in its broader aspects. The scope of the following claims should not be limited by the preferred embodiments set forth in the examples, but should be given the broadest reasonable interpretation consistent with the description as a whole.

Claims

CLAIMS:
1. A method for producing medical images, comprising:
obtaining a first medical image of a first modality;
generating a second medical image of a second modality based on the first medical image using an artificial intelligence, wherein the second medical image is mappable to a third modality of medical image; and
mapping the second medical image to a third medical image of the third modality.
2. The method of claim 1, wherein generating the second medical image comprises segmenting the first medical image.
3. The method of claim 1, further comprising augmenting a resolution of the first medical image using the artificial intelligence.
4. The method of claim 1, wherein generating the second medical image comprises performing histogram normalization on the first medical image.
5. The method of claim 1, wherein generating the third medical image comprises applying a ray-casting procedure to the second medical image.
6. The method of claim 1, wherein the artificial intelligence is a cycle generative adversarial network.
7. The method of claim 1, wherein the artificial intelligence is a conditional generative adversarial network.
8. The method of claim 1, wherein the first modality corresponds to magnetic-resonance imaging images, the second modality corresponds to computed tomography images, and the third modality corresponds to C-arm images.
9. A method for training an artificial intelligence, comprising:
obtaining a plurality of unpaired images comprising first images of a first modality and second images of a second modality;
generating, for each of the first images, a corresponding first synthetic image of the second modality using a first artificial intelligence;
generating, for each of the second images, a corresponding second synthetic image of the first modality using the first artificial intelligence; and
training a second artificial intelligence using a training image set comprising the first images, the first synthetic images, the second images, and the second synthetic images.
10. The method of claim 9, wherein the first artificial intelligence is a cycle generative adversarial network (GAN), and wherein the second artificial intelligence is a conditional GAN.
11. The method of claim 10, wherein the cycle GAN and/or the conditional GAN comprises two synthesis convolutional neural networks (CNNs) and two discriminator CNNs.
12. The method of claim 9, wherein the training image set further comprises a paired image set comprising paired third images of the first and the second modalities resulting from medical imaging.
13. The method of claim 9, further comprising:
obtaining a subsequent image of the first modality; and
using the trained second artificial intelligence, generating a subsequent image of the second modality based on the subsequent image of the first modality.
14. The method of claim 13, further comprising segmenting the subsequent image.
15. The method of claim 13, further comprising augmenting a resolution of the subsequent image using a third artificial intelligence.
16. The method of claim 13, further comprising performing histogram normalization on the subsequent image.
17. The method of claim 9, wherein the first modality corresponds to magnetic-resonance imaging images, and the second modality corresponds to computed tomography images.
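[Again purely as an editorial sketch (hypothetical names, losses reduced to their simplest pix2pix-style form), the training method of claims 9 to 17 can be read as two stages: a trained cycle GAN first manufactures a synthetic counterpart for every unpaired image, and the resulting pseudo-pairs then supervise a conditional GAN.]

    import torch
    import torch.nn.functional as F

    def build_pseudo_pairs(G_mr2ct, G_ct2mr, mr_images, ct_images):
        # Claim 9 (illustrative): synthesize the missing modality for each
        # unpaired image using the first artificial intelligence (cycle GAN).
        pairs = []
        with torch.no_grad():
            for mr in mr_images:
                pairs.append((mr, G_mr2ct(mr)))   # (real MR, first synthetic CT)
            for ct in ct_images:
                pairs.append((G_ct2mr(ct), ct))   # (second synthetic MR, real CT)
        return pairs

    def conditional_gan_step(G, D, opt_G, opt_D, mr, ct, lam=100.0):
        # Claim 10 (illustrative): one update of the second artificial
        # intelligence, a conditional GAN whose discriminator scores
        # (condition, image) pairs.
        fake_ct = G(mr)
        # Discriminator update: real pairs labelled 1, synthetic pairs 0.
        opt_D.zero_grad()
        pred_real = D(mr, ct)
        pred_fake = D(mr, fake_ct.detach())
        loss_D = 0.5 * (
            F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real))
            + F.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake)))
        loss_D.backward()
        opt_D.step()
        # Generator update: fool the discriminator and stay close to the target in L1.
        opt_G.zero_grad()
        pred_fake = D(mr, fake_ct)
        loss_G = (F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
                  + lam * F.l1_loss(fake_ct, ct))
        loss_G.backward()
        opt_G.step()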
18. A system for producing medical images, comprising:
a processing unit; and
a non-transitory computer-readable medium having stored thereon program instructions executable by the processing unit for:
obtaining a first medical image of a first modality;
generating a second medical image of a second modality based on the first medical image using an artificial intelligence, wherein the second medical image is mappable to a third modality of medical image; and
generating a third medical image of the third modality based on the second medical image.
19. The system of claim 18, wherein the artificial intelligence is a conditional generative adversarial network, wherein the first modality corresponds to magnetic-resonance imaging images, the second modality corresponds to computed tomography images, and the third modality corresponds to C-arm images, and wherein obtaining the first medical image comprises performing an MRI procedure of a lumbar region of a patient.
20. The system of claim 18, wherein the artificial intelligence comprises a cycle generative adversarial network.
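[For the cycle generative adversarial network of claims 6 and 20, the defining training constraint is cycle consistency: translating to the other modality and back should recover the input. A one-function sketch under the usual L1 formulation follows; the function and generator names are hypothetical.]

    import torch.nn.functional as F

    def cycle_consistency_loss(G_mr2ct, G_ct2mr, mr, ct):
        # Illustrative CycleGAN-style constraint:
        # ||F(G(mr)) - mr||_1 + ||G(F(ct)) - ct||_1, added to the adversarial
        # losses of the two synthesis networks (cf. claim 11).
        return (F.l1_loss(G_ct2mr(G_mr2ct(mr)), mr)
                + F.l1_loss(G_mr2ct(G_ct2mr(ct)), ct))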

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date
US201962826305P | 2019-03-29 | 2019-03-29
US 62/826,305 | 2019-03-29

Publications (1)

Publication Number | Publication Date
WO2020198854A1 (en) | 2020-10-08

Family

Family ID: 72667545

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
PCT/CA2020/050404, WO2020198854A1 (en) | 2019-03-29 | 2020-03-27 | Method and system for producing medical images

Country Status (1)

Country: WO; Link: WO2020198854A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170337682A1 (en) * 2016-05-18 2017-11-23 Siemens Healthcare Gmbh Method and System for Image Registration Using an Intelligent Artificial Agent
WO2018048507A1 (en) * 2016-09-06 2018-03-15 Han Xiao Neural network for generating synthetic medical images
US10127659B2 (en) * 2016-11-23 2018-11-13 General Electric Company Deep learning medical systems and methods for image acquisition
US20180225823A1 (en) * 2017-02-09 2018-08-09 Siemens Healthcare Gmbh Adversarial and Dual Inverse Deep Learning Networks for Medical Image Analysis
US20180260997A1 (en) * 2017-03-10 2018-09-13 Siemens Healthcare Gmbh Consistent 3d rendering in medical imaging
US20180330511A1 (en) * 2017-05-11 2018-11-15 Kla-Tencor Corporation Learning based approach for aligning images acquired with different modalities

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MAHMOOD F ET AL.: "Unsupervised Reverse Domain Adaptation for Synthetic Medical Images via Adversarial Training", IEEE TRANS MED IMAGING, vol. 37, no. 12, 2018, pages 2572 - 2581, XP011696683, DOI: 10.1109/TMI.2018.2842767 *
NIE D ET AL.: "Medical Image Synthesis with Deep Convolutional Adversarial Networks", IEEE TRANS BIOMED ENG., vol. 65, no. 12, 2018, pages 2720 - 2730, XP011697408, DOI: 10.1109/TBME.2018.2814538 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220292742A1 (en) * 2021-03-11 2022-09-15 Siemens Healthcare Gmbh Generating Synthetic X-ray Images and Object Annotations from CT Scans for Augmenting X-ray Abnormality Assessment Systems
US11908047B2 (en) * 2021-03-11 2024-02-20 Siemens Healthineers AG Generating synthetic x-ray images and object annotations from CT scans for augmenting x-ray abnormality assessment systems
CN113850763A (en) * 2021-09-06 2021-12-28 中山大学附属第一医院 Method, device, equipment and medium for measuring vertebral column Cobb angle
CN113850763B (en) * 2021-09-06 2024-01-23 脊客医疗科技(广州)有限公司 Image processing method, device, equipment and medium based on spine medical image
CN116129235A (en) * 2023-04-14 2023-05-16 英瑞云医疗科技(烟台)有限公司 Cross-modal synthesis method for medical images from cerebral infarction CT to MRI conventional sequence

Similar Documents

Publication Publication Date Title
EP3470006B1 (en) Automated segmentation of three dimensional bony structure images
US20220245400A1 (en) Autonomous segmentation of three-dimensional nervous system structures from medical images
EP3751516B1 (en) Autonomous multidimensional segmentation of anatomical structures on three-dimensional medical imaging
EP3525171B1 (en) Method and system for 3d reconstruction of x-ray ct volume and segmentation mask from a few x-ray radiographs
US11257259B2 (en) Topogram prediction from surface data in medical imaging
US10970829B2 (en) Synthesizing and segmenting cross-domain medical images
Oulbacha et al. MRI to CT synthesis of the lumbar spine from a pseudo-3D cycle GAN
WO2020198854A1 (en) Method and system for producing medical images
US20210393229A1 (en) Single or a few views computed tomography imaging with deep neural network
CN117813055A (en) Multi-modality and multi-scale feature aggregation for synthesis of SPECT images from fast SPECT scans and CT images
Amirkolaee et al. Development of a GAN architecture based on integrating global and local information for paired and unpaired medical image translation
Pradhan et al. Conditional generative adversarial network model for conversion of 2 dimensional radiographs into 3 dimensional views
Poonkodi et al. 3d-medtrancsgan: 3d medical image transformation using csgan
Daly et al. Multimodal image registration using multiresolution genetic optimization
Gholizadeh-Ansari et al. Low-dose CT denoising using edge detection layer and perceptual loss
Al Abboodi et al. Supervised Transfer Learning for Multi Organs 3D Segmentation With Registration Tools for Metal Artifact Reduction in CT Images
Ragte et al. A novel approach for fast generation of digitally reconstructed radiographs to increase the automation of 2D-3D registration system
Zhou et al. Robust single-view cone-beam x-ray pose estimation with neural tuned tomography (nett) and masked neural radiance fields (mnerf)
Chen Synthesizing a complete tomographic study with nodules from multiple radiograph views via deep generative models
Xiang Deep learning-based reconstruction of volumetric CT images of vertebrae from a single view X-ray image
Gao et al. X-CTCANet: 3D spinal CT reconstruction directly from 2D X-ray images
Kumar et al. 3D Volumetric Computed Tomography from 2D X-Rays: A Deep Learning Perspective
Tianrungroj et al. Isometric Convolutional Neural Networks for Bone Suppression of Multi-Planar Dual Energy Chest Radiograph
Poonkodi et al. UCS-MedTranGAN: Unsupervised Medical Image Transformation Using CSGAN

Legal Events

Code | Description
121 | EP: the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 20783787; Country of ref document: EP; Kind code of ref document: A1.
NENP | Non-entry into the national phase. Ref country code: DE.
122 | EP: PCT application non-entry into the European phase. Ref document number: 20783787; Country of ref document: EP; Kind code of ref document: A1.