CN112419173A - Deep learning framework and method for generating CT image from PET image

Info

Publication number
CN112419173A
Authority
CN
China
Prior art keywords
image, attenuation correction, PET, map, correction coefficient
Prior art date
Legal status
Granted
Application number
CN202011215657.1A
Other languages
Chinese (zh)
Other versions
CN112419173B (en)
Inventor
Dong Liang (梁栋)
Qingneng Li (李庆能)
Zhanli Hu (胡战利)
Hairong Zheng (郑海荣)
Xin Liu (刘新)
Yongfeng Yang (杨永峰)
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority to CN202011215657.1A
Publication of CN112419173A
Application granted
Publication of CN112419173B
Legal status: Active
Anticipated expiration

Classifications

    • G06T5/70 — Image enhancement or restoration: denoising; smoothing
    • G06N3/045 — Neural networks: combinations of networks
    • G06N3/08 — Neural networks: learning methods
    • G06T11/008 — 2D image generation: specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06T2207/10081 — Image acquisition modality: computed x-ray tomography [CT]
    • G06T2207/10104 — Image acquisition modality: positron emission tomography [PET]
    • G06T2207/20081 — Special algorithmic details: training; learning
    • G06T2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06T2207/30004 — Subject of image: biomedical image processing
    • Y02T10/40 — Engine management systems (cross-sectional technology tag)


Abstract

The invention discloses a deep learning framework and method for generating a CT image from a PET image. The method comprises the following steps: obtaining an attenuation correction coefficient map by inverse calculation of the attenuation correction mechanism, using the first PET image without attenuation correction and the corresponding attenuation-corrected second PET image; and using the obtained attenuation correction coefficient map for fitting learning with an image-to-image generative adversarial network, obtaining the mapping relation between the attenuation correction coefficient map and the CT modality image and thereby realizing the generation process from the PET image to the CT modality image. In the image-to-image generative adversarial network, the generator takes the attenuation correction coefficient map as input and the CT modality image as output, and the generator input image serves as the discrimination condition of the discriminator for distinguishing real from generated CT modality images. The invention can realize PET attenuation correction and CT image reconstruction at the same time, and by means of the attenuation correction coefficient map can effectively reduce the difficulty of CT image reconstruction and improve CT image quality.

Description

Deep learning framework and method for generating CT image from PET image
Technical Field
The invention relates to the technical field of medical image reconstruction, in particular to a deep learning framework and a method for generating a CT image from a PET image.
Background
Computed Tomography (CT) provides abundant anatomical structure information and significantly alleviates the problem of the low resolution of PET (Positron Emission Tomography) images. PET is a functional imaging modality that directly reflects information about diseased tissue, and combining PET images with CT images enables accurate localization and detection of lesion regions. In addition, CT images can provide spatial constraint information to aid attenuation correction and artifact removal in the original PET images. The PET/CT imaging system is therefore currently the most widely used PET imaging system worldwide. However, the CT scan introduces additional X-ray radiation; the accumulated radiation dose increases the risk of various diseases, can impair physiological function, damage tissues and organs, and may even endanger the life of the patient.
In the prior art, the following technical solutions exist:
dong et al published in 2019 the article "Synthetic CT generation from non-attenuated corrected PET images for white-body PET imaging" in Physics in Medicine & Biology, and used cyclic consistency to generate a countermeasure network (CycleGAN) to successfully convert the first PET image without attenuation correction to the CT modality. The generated pseudo-CT image may not only provide anatomical information to assist in locating and diagnosing the lesion region, but may also perform attenuation correction on the first PET image.
The main disadvantages of the prior art are: because the scanning parameters of PET and CT differ, a significant registration error exists between PET and CT; and PET and CT belong to two very different image domains, so it is difficult to directly generate a CT image from a PET image with low resolution and little spatial structure information, and the quality of the generated images leaves room for improvement.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a deep learning framework and method for generating a CT image from a PET image, which realize attenuation correction of the first PET image through end-to-end deep learning and generate a pseudo-CT image that assists accurate localization and detection of lesions.
According to a first aspect of the invention, a deep learning method for generating a CT image from a PET image is provided. The method comprises the following steps:
obtaining an attenuation correction coefficient map by inverse calculation of the attenuation correction mechanism, using the first PET image without attenuation correction and the corresponding attenuation-corrected second PET image;
constructing an image-to-image generative adversarial network comprising a generator and a discriminator, wherein the generator takes the attenuation correction coefficient map as input and the CT modality image as output, and the generator input image serves as the discrimination condition of the discriminator for distinguishing real from generated CT modality images;
and optimally training the image-to-image generative adversarial network with the set loss function as the objective, obtaining the mapping relation between the first PET image without attenuation correction and the CT modality image.
According to a second aspect of the invention, a deep learning framework for generating CT images from PET images is provided. The framework comprises an attenuation correction coefficient map calculation module and an image-to-image generative adversarial network, wherein:
the attenuation correction coefficient map calculation module is used for obtaining an attenuation correction coefficient map by inverse calculation, using the first PET image without attenuation correction and the corresponding attenuation-corrected second PET image;
the image-to-image generative adversarial network comprises a generator and a discriminator, wherein the generator takes the attenuation correction coefficient map as input and the CT modality image as output, and the generator input image serves as the discrimination condition of the discriminator for distinguishing real from generated CT modality images.
Compared with the prior art, the invention has the advantage of providing a deep learning framework that replaces the functions of the CT modality in a PET/CT imaging system. Noise reduction and artifact removal for the first PET image without attenuation correction are realized by end-to-end mapping learning, thereby implementing the attenuation correction process of the first PET image and obtaining an attenuation correction coefficient map with richer structural features. Furthermore, the CT image is reconstructed from the obtained attenuation correction coefficient map, which markedly reduces the difficulty of CT reconstruction and improves the quality of the reconstructed CT image. In addition, a joint multi-term loss function is designed for the image-to-image generative adversarial network model used for CT reconstruction, further ensuring the quality of the output image.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic flow chart of a process for generating CT images based on a deep learning framework according to an embodiment of the present invention;
FIG. 2 is an overall block diagram of a residual U-Net network according to one embodiment of the present invention;
FIG. 3 is a diagram of an encoder network architecture according to one embodiment of the present invention;
FIG. 4 is a diagram of a residual module network architecture according to one embodiment of the present invention;
FIG. 5 is a block diagram of a decoder network according to one embodiment of the present invention;
FIG. 6 is a block diagram illustrating the discriminator in the image-to-image generative adversarial network, according to one embodiment of the present invention;
FIG. 7 is a diagram illustrating the result of PET-CT image synthesis according to one embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
To eliminate the CT radiation present in PET/CT imaging systems, the present invention designs a deep learning framework to replace the role of the CT modality in the imaging system. The deep learning framework comprises three parts overall: a first part, an end-to-end attenuation correction process, for realizing the mapping from a first PET image that is not attenuation corrected to an attenuation-corrected second PET image; a second part, obtaining an attenuation correction coefficient map through inverse calculation of the attenuation correction mechanism; and a third part, generating a pseudo-CT image from the attenuation correction coefficient map. The generated pseudo-CT image can be used to assist a doctor in diagnosing a lesion region.
Specifically, the deep learning framework for generating a CT image from a PET image provided by the present invention includes a residual U-Net network for attenuation correction of the PET image (other deep learning models may also be used), and an image-to-image generative adversarial network for generating the CT image corresponding to the PET image.
Referring to fig. 1, the deep learning method for generating a CT image according to this embodiment includes the following steps.
And step S110, constructing a residual U-Net network for attenuation correction of the first PET image without attenuation correction.
The attenuation correction process essentially denoises and removes artifacts from the first PET image that is not attenuation corrected, so as to improve image quality.
For example, noise reduction and artifact removal of the first PET image without attenuation correction are achieved by establishing a two-dimensional residual U-Net network.
Specifically, referring to fig. 2, a U-Net with several residual modules is designed to directly generate a clean PET image, i.e., an attenuation-corrected second PET image, through end-to-end learning and residual feedback. The residual U-Net comprises an encoder, residual modules and a decoder, with skip connections between the encoder and the decoder; this alleviates the problems of vanishing and exploding gradients during training while promoting information transfer within the network.
Referring to fig. 3, in this embodiment the encoder contains 5 convolution modules. Except for the first, each convolution module consists of one 2 × 2 max-pooling operation and two consecutive, identical 3 × 3 convolution operations, each with stride 1 and activated by the ReLU function. The first convolution module omits the max-pooling operation in order to retain more of the original image information. The encoding result of each convolution module is passed to the decoder through a skip-connection mechanism to better guide the attenuation correction of the first PET image. As the encoding depth increases, the width of the convolution modules grows from the initial 64 channels to 512 channels so as to extract progressively deeper image features. The encoding result of the last convolution module is down-sampled by a factor of 2 and then fed to the residual modules to further extract deep representation information of the image.
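For illustration, the encoder described above can be sketched in PyTorch. This is a minimal sketch under stated assumptions: the patent gives only the endpoint widths (64 and 512 channels), so the intermediate channel progression (64→128→256→512→512), the padding of 1 that preserves spatial size, and the single-channel input are assumptions of this example.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One encoder convolution module: an optional 2x2 max-pooling followed by
    two consecutive, identical 3x3 stride-1 convolutions with ReLU."""
    def __init__(self, in_ch, out_ch, pool=True):
        super().__init__()
        layers = [nn.MaxPool2d(2)] if pool else []
        layers += [
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
        ]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

# Five modules; the first omits max-pooling to retain original image detail.
# The channel progression 64 -> 128 -> 256 -> 512 -> 512 is an assumption.
encoder = nn.ModuleList([
    EncoderBlock(1,   64, pool=False),
    EncoderBlock(64, 128),
    EncoderBlock(128, 256),
    EncoderBlock(256, 512),
    EncoderBlock(512, 512),
])
```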
Fig. 4 is a block diagram of a residual module. In this embodiment, three consecutive, identical residual modules are arranged between the encoder and the decoder. Each residual module contains two 3 × 3 convolution operations with stride 1 and 512 channels, activated by the ReLU function. The output of each residual module is obtained by pixel-wise addition of the input of the first convolution operation and the output of the second convolution operation. The output of the final residual module is passed to the decoder.
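A corresponding sketch of one residual module, under the same assumptions as the encoder sketch above; whether the pixel-wise addition happens before or after the second ReLU is not specified in the text, so the placement below is an assumption.

```python
class ResidualBlock(nn.Module):
    """Residual module: two 3x3, stride-1, 512-channel convolutions with ReLU;
    the module input is added pixel-wise to the second convolution's output."""
    def __init__(self, channels=512):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.relu(self.conv2(out))
        return x + out  # pixel-wise addition of module input and output

# Three consecutive, identical residual modules between encoder and decoder.
residual_stack = nn.Sequential(*[ResidualBlock(512) for _ in range(3)])
```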
The decoder of fig. 5 is symmetric to the encoder of fig. 3, except that the 2 × 2 max-pooling operation in each convolution module is replaced in the decoder by bilinear-interpolation upsampling with stride 2. After one convolution operation, the feature map produced by the residual modules is decoded by 4 consecutive convolution modules. Through the skip connections, each upsampled decoder feature map is concatenated along the channel dimension with the encoder feature map of corresponding resolution before the convolution operations are applied. The result of the final convolution module is fed into a last convolutional layer that produces the single-channel attenuation-corrected second PET image. Because a sigmoid activation function is used in this last layer, the image is normalized to the (0, 1) interval.
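A matching decoder-module sketch; the 1 × 1 kernel of the final output layer is an assumption (the patent specifies only a single convolutional layer with sigmoid activation).

```python
class DecoderBlock(nn.Module):
    """Decoder module: bilinear 2x upsampling in place of max-pooling, channel
    concatenation with the matching encoder feature map (skip connection),
    then two 3x3 stride-1 convolutions with ReLU."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.block = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        x = torch.cat([x, skip], dim=1)  # combine along the channel axis
        return self.block(x)

# Final single-channel output layer; sigmoid normalizes to the (0, 1) interval.
output_head = nn.Sequential(nn.Conv2d(64, 1, kernel_size=1), nn.Sigmoid())
```

These module sketches can be assembled into a full residual U-Net; that assembly (referred to as `ResidualUNet` in the later training sketches) is a hypothetical helper and is omitted here for brevity.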
Step S120 is to obtain an attenuation correction coefficient map by inverse calculation of the attenuation correction mechanism based on the PET images before and after the attenuation correction.
Unlike the conventional way of obtaining the attenuation correction coefficient map, the embodiment of the present invention obtains the attenuation correction coefficient map by performing inverse calculation on the attenuation correction mechanism using the first PET image without attenuation correction and the second PET image generated in step S110 and subjected to attenuation correction.
In the conventional attenuation correction process, the CT data represented by HU values undergo image registration, energy-level conversion and spatial-resolution correction to obtain the corresponding attenuation correction coefficient map. Forward projection of the attenuation correction coefficient map yields the corresponding attenuation correction coefficient sinogram. Point-wise multiplication of the attenuation correction coefficient sinogram with the first PET sinogram gives the attenuation-corrected second PET sinogram, from which a classical reconstruction algorithm produces the final second PET image.
It can be seen that the attenuation correction process for PET is reversible. When the first PET image without attenuation correction and the attenuation-corrected second PET image are both known, the attenuation correction coefficient sinogram can be obtained by inverse calculation and then reconstructed to obtain the attenuation correction coefficient map. In the present invention, the Radon transform is used as the forward projection operation from the image domain to the sinogram, and the filtered back-projection algorithm is used as the reconstruction algorithm from the sinogram back to the image domain.
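The inverse calculation can be illustrated with scikit-image's `radon`/`iradon` (filtered back-projection). This is a sketch rather than the patent's implementation: the angular sampling, the small `eps` guarding against division by zero in low-count sinogram bins, and the ramp filter choice are assumptions, and `filter_name` assumes a recent scikit-image version.

```python
import numpy as np
from skimage.transform import radon, iradon

def attenuation_coefficient_map(pet_nac, pet_ac, eps=1e-6):
    """Recover the attenuation correction coefficient map from a
    non-attenuation-corrected PET image and its attenuation-corrected
    counterpart by inverting the attenuation correction mechanism."""
    theta = np.linspace(0.0, 180.0, max(pet_nac.shape), endpoint=False)
    sino_nac = radon(pet_nac, theta=theta)   # first PET sinogram
    sino_ac = radon(pet_ac, theta=theta)     # second PET sinogram
    # AC sinogram = coefficient sinogram * NAC sinogram (element-wise), so
    # the coefficient sinogram is recovered by element-wise division.
    sino_coeff = sino_ac / (sino_nac + eps)
    # Filtered back-projection from the sinogram back to the image domain.
    return iradon(sino_coeff, theta=theta, filter_name='ramp')
```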
And step S130, constructing an image-to-image generative adversarial network model for generating the CT modality image.
For example, a two-dimensional image-to-image generative adversarial network converts the obtained attenuation correction coefficient map into the CT modality.
Considering that the CT image is far more complex in appearance than the second PET image, the method adds a discriminator on top of the residual U-Net network and introduces an adversarial mechanism, constructing an image-to-image generative adversarial network to further improve the quality of the generated CT images.
In one embodiment, the image-to-image generative adversarial network uses the residual U-Net network of step S110 as the generator to realize the transformation from the attenuation correction coefficient map to the CT image, while the discriminator adopts a fully convolutional network structure and uses the attenuation correction coefficient map as the discrimination condition to distinguish real from generated images. For example, referring to FIG. 6, the discriminator has 4 convolutional layers and a final output layer; each convolutional layer contains one 4 × 4 convolution operation with stride 2, a batch normalization operation, and a LeakyReLU activation with slope 0.2. The convolutional layers have 64, 128, 256, and 256 convolution kernels (channels), respectively. In the final output layer, the single-channel image blocks are normalized to the (0, 1) value range by the sigmoid function.
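A conditional-discriminator sketch consistent with this description; the channel-wise concatenation of the CT image with the condition map and the kernel size and padding of the output layer are assumptions of this example.

```python
class Discriminator(nn.Module):
    """Conditional discriminator sketch: the generated (or real) CT image is
    concatenated with the attenuation correction coefficient map that serves
    as the discrimination condition; four 4x4 stride-2 convolutional layers
    with batch normalization and LeakyReLU(0.2), then a sigmoid output layer
    producing single-channel image blocks in (0, 1)."""
    def __init__(self):
        super().__init__()
        chans = [64, 128, 256, 256]
        layers, in_ch = [], 2  # CT image + condition map, stacked on channels
        for out_ch in chans:
            layers += [
                nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.LeakyReLU(0.2, inplace=True),
            ]
            in_ch = out_ch
        layers += [nn.Conv2d(in_ch, 1, kernel_size=4, padding=1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, ct, condition):
        return self.net(torch.cat([ct, condition], dim=1))
```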
It should be understood that the generator in the image-to-image generative adversarial network may also adopt a network structure different from the residual U-Net of step S110, i.e., the generator may be implemented with other types of deep learning models.
And step S140, designing the loss functions of the residual U-Net network and the image-to-image generative adversarial network.
Preferably, in order to improve the quality of the images generated by the networks, a more elaborate joint loss function is designed, and iterative training of the networks is optimized by combining multiple loss terms, further ensuring that the generated CT images meet the requirements of auxiliary medical diagnosis.
Independent loss functions are designed for the two models: the residual U-Net network that maps the first PET image without attenuation correction to the attenuation-corrected second PET image, and the image-to-image generative adversarial network that maps the attenuation correction coefficient map to the CT image.
In one embodiment, when training the residual U-Net model, noise reduction of the first PET image without attenuation correction is achieved using the Mean Absolute Error (MAE), and the loss function is expressed as:
MAE = (1/N) Σ_{i=1}^{N} | y_i − f(x)_i |   (1)
wherein x represents the first PET image without attenuation correction, y represents the attenuation-corrected second PET image, f(·) denotes the residual U-Net mapping, and N represents the total number of pixels in each image.
In one embodiment, in order for the generated CT images to retain richer texture features and local details while training the image-to-image generative adversarial network, a perceptual loss function (PCP) is introduced in addition to the Mean Absolute Error (MAE). Both the adversarial loss of the image-to-image generative adversarial network and the introduced perceptual loss encourage consistency of feature-level distributions. This not only produces more realistic images but also markedly accelerates the convergence of the network.
Thus, the loss function of the generator in the image-to-image generative adversarial network is expressed as:
Loss_CT = MAE + λ_1 · PCP + λ_2 · cGAN_g   (2)
PCP = (1/n) Σ_{i=1}^{n} 1/(W_i · H_i) · ‖ φ_i(y) − φ_i(G(x)) ‖²   (3)
cGAN_g(x) = log(D(G(x), x))   (4)
wherein PCP represents the perceptual loss function based on a pre-trained model (e.g., VGG19), cGAN_g represents the adversarial loss function, x represents the attenuation correction coefficient map, and y represents the corresponding CT image; φ_i denotes the i-th encoding convolutional layer of the pre-trained model, W_i and H_i denote the length and width of the feature map of the i-th encoding convolutional layer, and n is the number of selected convolutional layers.
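A sketch of equations (2) and (3) in PyTorch, building on the sketches above. The choice of VGG19 layers, the single-channel-to-RGB repetition (ImageNet input normalization is omitted for brevity), and the torchvision weights API are assumptions of this illustration, not specifics from the patent.

```python
import torchvision.models as models

class PerceptualLoss(nn.Module):
    """PCP of equation (3): mean squared distance between feature maps of the
    generated and real CT images over n selected VGG19 encoding layers."""
    def __init__(self, layer_ids=(2, 7, 12, 21)):  # layer choice is an assumption
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg, self.layer_ids = vgg, set(layer_ids)

    def forward(self, fake, real):
        # VGG19 expects 3 channels; repeat the single-channel CT images.
        f, r, loss = fake.repeat(1, 3, 1, 1), real.repeat(1, 3, 1, 1), 0.0
        last = max(self.layer_ids)
        for i, layer in enumerate(self.vgg):
            f, r = layer(f), layer(r)
            if i in self.layer_ids:
                loss = loss + torch.mean((f - r) ** 2)  # mean covers 1/(Wi*Hi)
            if i == last:
                break
        return loss / len(self.layer_ids)

def generator_loss(fake_ct, real_ct, d_fake, pcp, lam1=1.0, lam2=0.01):
    """Loss_CT = MAE + lambda1*PCP + lambda2*cGAN_g (equation (2)); the
    adversarial term log(D(G(x), x)) is negated so it can be minimized."""
    mae = torch.mean(torch.abs(fake_ct - real_ct))
    cgan_g = -torch.mean(torch.log(d_fake + 1e-8))
    return mae + lam1 * pcp(fake_ct, real_ct) + lam2 * cgan_g
```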
In one embodiment, sigmoid-activated cross-entropy functions are used as the adversarial loss functions cGAN_g and cGAN_d of the image-to-image generative adversarial network. The adversarial loss takes the following forms for the generator G and the discriminator D:
cGAN_g(x) = log(D(G(x), x))   (5)
cGAN_d(x, y) = log(D(y, x)) + log(1 − D(G(x), x))   (6)
wherein x represents the input image of the generative adversarial network, i.e., the attenuation correction coefficient map, and x also serves as the discrimination condition of the discriminator; G(x) represents the generated CT modality image, and y represents the real CT image.
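Equations (5) and (6) translate directly into code; the `eps` guard against log(0) is an assumption. Note these are objectives to maximize, so a training loop minimizes their negatives.

```python
def adversarial_losses(D, G, x, y, eps=1e-8):
    """cGAN_g(x) = log(D(G(x), x)) and
    cGAN_d(x, y) = log(D(y, x)) + log(1 - D(G(x), x)), where x is the
    attenuation correction coefficient map (also the discrimination
    condition), y the real CT image, and G(x) the generated CT image."""
    fake = G(x)
    d_fake = D(fake, x)
    d_real = D(y, x)
    cgan_g = torch.mean(torch.log(d_fake + eps))
    cgan_d = torch.mean(torch.log(d_real + eps) + torch.log(1.0 - d_fake + eps))
    return cgan_g, cgan_d
```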
To balance the contribution of each loss term when training the image-to-image generative adversarial network, λ_1 and λ_2 may be set empirically, for example to 1.0 and 0.01, respectively. The loss weights may be adjusted according to experimental results.
And step S150, optimizing the residual U-Net network with the set loss function as the objective.
Specifically, the first PET image without attenuation correction is used as the input of the residual U-Net network and the attenuation-corrected second PET image as its reference; the network is optimized with an RMSprop optimizer and a dynamically decaying learning-rate strategy until it gradually reaches a converged state.
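A training-loop sketch for this step; the learning rate, the exponential decay factor, the number of epochs, and the `ResidualUNet` assembly and `loader` of paired slices are all hypothetical placeholders not specified in the patent.

```python
model = ResidualUNet()  # hypothetical assembly of the modules sketched above
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
criterion = nn.L1Loss()  # MAE of equation (1)
num_epochs = 100         # assumption

for epoch in range(num_epochs):
    for x_nac, y_ac in loader:  # (non-AC PET, AC PET) image pairs
        optimizer.zero_grad()
        loss = criterion(model(x_nac), y_ac)
        loss.backward()
        optimizer.step()
    scheduler.step()  # dynamically decaying learning rate
```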
And step S160, optimizing the image-to-image generative adversarial network with the set loss function as the objective.
Specifically, the calculated attenuation correction coefficient map is used as the input of the image-to-image generative adversarial network and the CT image as its reference; the network is trained with an Adam optimizer until it gradually reaches a converged state.
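A corresponding adversarial training sketch, again building on the pieces above; the Adam hyperparameters are conventional GAN defaults rather than values from the patent, and `loader` stands for a hypothetical iterator over (coefficient map, real CT) pairs.

```python
G = ResidualUNet()   # generator: coefficient map -> CT modality image
D = Discriminator()  # conditional discriminator sketched above
pcp = PerceptualLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

for x_coeff, y_ct in loader:
    # Discriminator step: maximize cGAN_d, i.e. minimize its negative.
    opt_d.zero_grad()
    fake_ct = G(x_coeff).detach()  # detach so only D is updated here
    d_real, d_fake = D(y_ct, x_coeff), D(fake_ct, x_coeff)
    loss_d = -(torch.log(d_real + 1e-8) + torch.log(1 - d_fake + 1e-8)).mean()
    loss_d.backward()
    opt_d.step()

    # Generator step: minimize Loss_CT of equation (2).
    opt_g.zero_grad()
    fake_ct = G(x_coeff)
    loss_g = generator_loss(fake_ct, y_ct, D(fake_ct, x_coeff), pcp)
    loss_g.backward()
    opt_g.step()
```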
It should be noted that, besides the CT modality, the framework proposed by the present invention can also be applied to other image modalities such as MRI.
In order to evaluate the practical effect of the present invention, two clinically experienced physicians were invited to subjectively score the second PET images and the CT images generated by the network models in three respects: degree of noise suppression, degree of detail recovery, and overall generation quality (on a 10-point scale, where 1 indicates unacceptable and 10 indicates perfect restoration). The scoring results are shown in Table 1 below.
TABLE 1 subjective scores of clinicians
[Table 1 is provided as an image in the original publication; its numerical scores are not reproduced here.]
FIG. 7 shows the result of PET-CT image synthesis. The panels are: (a) the generated pseudo-CT image, (b) the real CT image, (c) the generated AC PET image (i.e., the attenuation-corrected PET image), (d) the real AC PET image, (e) the generated PET/CT fusion map, and (f) the real PET/CT fusion map.
As can be seen from FIG. 7, the method of the present invention achieves PET attenuation correction well, producing a clean second PET image. At the same time, the generated pseudo-CT image carries sufficient anatomical structure information to assist a doctor in clinical diagnosis and localization of the lesion region. The experiments demonstrate that the proposed method can, to a large extent, replace the role of the CT modality in a PET imaging system, helping PET imaging systems shed their dependence on anatomical modalities and move toward radiation-free imaging.
In conclusion, the invention realizes PET attenuation correction and CT image reconstruction at the same time, removing the dependence of the PET imaging system on the CT modality; with the conventional attenuation correction process as a reference, the attenuation correction coefficient map is computed inversely to reduce the difficulty of CT reconstruction and improve CT image quality; and the joint loss function further improves the reconstruction quality of the CT images.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk, C++, or the like, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of the computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

1. A deep learning method of generating a CT image from a PET image, comprising:
obtaining an attenuation correction coefficient map by inverse calculation of the attenuation correction mechanism, using a first PET image without attenuation correction and a corresponding attenuation-corrected second PET image;
constructing an image-to-image generative adversarial network comprising a generator and a discriminator, wherein the generator takes the attenuation correction coefficient map as input and the CT modality image as output, and the generator input image serves as the discrimination condition of the discriminator for distinguishing real from generated CT modality images;
and optimally training the image-to-image generative adversarial network with a set loss function as the objective, obtaining the mapping relation between the first PET image without attenuation correction and the CT modality image.
2. The method according to claim 1, wherein the first PET image without attenuation correction is input to a pre-trained residual U-Net network, the corresponding attenuation-corrected second PET image is output, and the generator of the image-to-image generative adversarial network is also set as the residual U-Net network.
3. The method of claim 2, wherein the residual U-Net network comprises, in sequence, an encoder, residual modules and a decoder; the encoder comprises a plurality of convolution modules, the encoding result of each convolution module being transmitted to the decoder through a skip-connection mechanism; a plurality of residual modules are provided, each comprising a first convolution operation and a second convolution operation, the output of each residual module being obtained by pixel-wise addition of the input of the first convolution operation and the output of the second convolution operation; and the decoder comprises a plurality of convolution modules corresponding to those of the encoder.
4. The method of claim 3, wherein the first convolution module of the encoder comprises two consecutive, identical 3 × 3 convolution operations, the other convolution modules of the encoder each comprise a 2 × 2 max-pooling operation and two consecutive, identical 3 × 3 convolution operations, the encoding result of the encoder is down-sampled by a factor of 2 before being passed to the residual modules, and, corresponding to the encoder, the decoder employs bilinear-interpolation upsampling with stride 2.
5. The method of claim 2, wherein the loss function of the residual U-Net network is set to:
MAE = (1/N) Σ_{i=1}^{N} | y_i − f(x)_i |
wherein x represents the first PET image without attenuation correction, y represents the attenuation-corrected second PET image, f(·) denotes the residual U-Net mapping, and N represents the total number of pixels in each image.
6. The method of claim 5, wherein the loss function of the generator in the image-to-image generative adversarial network is set to:
Loss_CT = MAE + λ_1 · PCP + λ_2 · cGAN_g
PCP = (1/n) Σ_{i=1}^{n} 1/(W_i · H_i) · ‖ φ_i(y) − φ_i(G(x)) ‖²
wherein PCP represents the perceptual loss function based on a pre-trained model, cGAN_g represents the adversarial loss function, x represents the attenuation correction coefficient map, and y represents the corresponding CT image; φ_i denotes the i-th encoding convolutional layer of the pre-trained model, W_i and H_i denote the length and width of the feature map of the i-th encoding convolutional layer, and n is the number of selected convolutional layers.
7. The method of claim 6, wherein sigmoid-activated cross-entropy functions are used as the adversarial loss functions cGAN_g and cGAN_d of the image-to-image generative adversarial network, the adversarial loss being represented at the generator G and the discriminator D as:
cGAN_g(x) = log(D(G(x), x))
cGAN_d(x, y) = log(D(y, x)) + log(1 − D(G(x), x))
wherein x represents the attenuation correction coefficient map input to the generative adversarial network and also serves as the discrimination condition of the discriminator, G(x) represents the generated CT modality image, and y represents the real CT image.
8. The method of claim 1, wherein obtaining the attenuation correction coefficient map by inverse calculation of the attenuation correction mechanism, using the first PET image without attenuation correction and the corresponding attenuation-corrected second PET image, comprises:
for the first PET image without attenuation correction and the attenuation-corrected second PET image, obtaining the corresponding first PET sinogram and second PET sinogram through forward projection;
performing a point-wise division of the second PET sinogram by the first PET sinogram to obtain the attenuation correction coefficient sinogram;
and performing filtered back-projection on the attenuation correction coefficient sinogram to obtain the attenuation correction coefficient map.
9. A deep learning framework for generating CT images from PET images, comprising an attenuation correction coefficient map calculation module and an image-to-image generative adversarial network, wherein:
the attenuation correction coefficient map calculation module is used for obtaining an attenuation correction coefficient map by inverse calculation, using the first PET image without attenuation correction and the corresponding attenuation-corrected second PET image;
the image-to-image generative adversarial network comprises a generator and a discriminator, wherein the generator takes the attenuation correction coefficient map as input and the CT modality image as output, and the generator input image serves as the discrimination condition of the discriminator for distinguishing real from generated CT modality images.
10. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202011215657.1A 2020-11-04 2020-11-04 Deep learning framework and method for generating CT image from PET image Active CN112419173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011215657.1A CN112419173B (en) 2020-11-04 2020-11-04 Deep learning framework and method for generating CT image from PET image

Publications (2)

Publication Number Publication Date
CN112419173A 2021-02-26
CN112419173B CN112419173B (en) 2024-07-09

Family

ID=74827686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011215657.1A Active CN112419173B (en) 2020-11-04 2020-11-04 Deep learning framework and method for generating CT image from PET image

Country Status (1)

Country Link
CN (1) CN112419173B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133996A (en) * 2017-03-21 2017-09-05 上海联影医疗科技有限公司 Produce the method and PET/CT systems for the decay pattern rebuild for PET data
US20190130569A1 (en) * 2017-10-26 2019-05-02 Wisconsin Alumni Research Foundation Deep learning based data-driven approach for attenuation correction of pet data
US20190266728A1 (en) * 2018-02-23 2019-08-29 Seoul National University R&Db Foundation Positron emission tomography system and image reconstruction method using the same
KR20200057450A (en) * 2018-11-16 2020-05-26 한국원자력의학원 Method and system for generating virtual CT(Computed Tomography) image and attenuation-corrected PET(Positron Emission Tomography) image based on PET image
CN109697741A (en) * 2018-12-28 2019-04-30 上海联影智能医疗科技有限公司 A kind of PET image reconstruction method, device, equipment and medium
CN111340903A (en) * 2020-02-10 2020-06-26 深圳先进技术研究院 Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image
CN111436958A (en) * 2020-02-27 2020-07-24 之江实验室 CT image generation method for PET image attenuation correction

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113436708A (en) * 2021-07-22 2021-09-24 杭州电子科技大学 Delayed CT image generation method based on deep learning algorithm
CN113436708B (en) * 2021-07-22 2022-10-25 杭州电子科技大学 Delayed CT image generation method based on deep learning algorithm
CN113487657A (en) * 2021-07-29 2021-10-08 广州柏视医疗科技有限公司 Deep learning-based mode conversion method
CN113487657B (en) * 2021-07-29 2022-02-01 广州柏视医疗科技有限公司 Deep learning-based mode conversion method
WO2023005186A1 (en) * 2021-07-29 2023-02-02 广州柏视医疗科技有限公司 Modal transformation method based on deep learning
CN113837961A (en) * 2021-09-22 2021-12-24 中国科学院深圳先进技术研究院 Method and system suitable for long-time endogenous imaging of living body
CN113837961B (en) * 2021-09-22 2023-10-20 中国科学院深圳先进技术研究院 Method and system suitable for long-time endogenous imaging of living body
WO2023134030A1 (en) * 2022-01-11 2023-07-20 浙江大学 Pet system attenuation correction method based on flow model
CN116485937A (en) * 2023-06-21 2023-07-25 吉林大学 CT motion artifact eliminating method and system based on graph neural network
CN116485937B (en) * 2023-06-21 2023-08-29 吉林大学 CT motion artifact eliminating method and system based on graph neural network

Also Published As

Publication number Publication date
CN112419173B (en) 2024-07-09

Similar Documents

Publication Publication Date Title
Liao et al. ADN: artifact disentanglement network for unsupervised metal artifact reduction
US11120582B2 (en) Unified dual-domain network for medical image formation, recovery, and analysis
CN112419173B (en) Deep learning framework and method for generating CT image from PET image
CN111325686B (en) Low-dose PET three-dimensional reconstruction method based on deep learning
Zhou et al. DuDoDR-Net: Dual-domain data consistent recurrent network for simultaneous sparse view and metal artifact reduction in computed tomography
Ko et al. Rigid and non-rigid motion artifact reduction in X-ray CT using attention module
CN115953494B (en) Multi-task high-quality CT image reconstruction method based on low dose and super resolution
Zhou et al. DuDoUFNet: dual-domain under-to-fully-complete progressive restoration network for simultaneous metal artifact reduction and low-dose CT reconstruction
CN111340903B (en) Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image
WO2022246677A1 (en) Method for reconstructing enhanced ct image
CN112669247B (en) Priori guided network for multi-task medical image synthesis
Wang et al. Adaptive convolutional dictionary network for CT metal artifact reduction
Hou et al. CT image quality enhancement via a dual-channel neural network with jointing denoising and super-resolution
WO2022094779A1 (en) Deep learning framework and method for generating ct image from pet image
CN108038840B (en) Image processing method and device, image processing equipment and storage medium
Liu et al. MRCON-Net: Multiscale reweighted convolutional coding neural network for low-dose CT imaging
Chan et al. An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction
CN112991220B (en) Method for correcting image artifact by convolutional neural network based on multiple constraints
Chen et al. Deep learning-based algorithms for low-dose CT imaging: A review
Zhou et al. Limited view tomographic reconstruction using a deep recurrent framework with residual dense spatial-channel attention network and sinogram consistency
Gholizadeh-Ansari et al. Low-dose CT denoising using edge detection layer and perceptual loss
CN112419175A (en) Weight-sharing dual-region generation countermeasure network and image generation method thereof
KR20220071554A (en) Medical Image Fusion System
Sharif et al. Two-Stage Deep Denoising With Self-guided Noise Attention for Multimodal Medical Images
Ikuta et al. A deep recurrent neural network with FISTA optimization for CT metal artifact reduction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant