WO2022094779A1 - Deep learning framework and method for generating a CT image from a PET image - Google Patents

Deep learning framework and method for generating a CT image from a PET image

Info

Publication number
WO2022094779A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
attenuation correction
pet
graph
correction coefficient
Prior art date
Application number
PCT/CN2020/126384
Other languages
English (en)
Chinese (zh)
Inventor
梁栋
李庆能
胡战利
郑海荣
刘新
杨永峰
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院 filed Critical 深圳先进技术研究院
Priority to PCT/CN2020/126384 priority Critical patent/WO2022094779A1/fr
Publication of WO2022094779A1 publication Critical patent/WO2022094779A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation

Definitions

  • the present invention relates to the technical field of medical image reconstruction, and more particularly, to a deep learning framework and method for generating CT images from PET images.
  • Computed Tomography (CT), which provides rich anatomical structure information, significantly alleviates the low image resolution of Positron Emission Tomography (PET).
  • PET is a functional imaging modality that directly reflects information about diseased human tissue.
  • CT images can provide spatial constraint information to aid in attenuation correction and artifact removal of raw PET images. Therefore, the PET/CT imaging system is currently the most widely used PET imaging system in the world.
  • However, the intervention of CT scanning leads to additional X-ray exposure, and the accumulation of radiation dose in the human body increases the likelihood of various diseases, thereby affecting physiological functions, damaging tissues and organs, and even endangering patients' lives.
  • In addition, because PET and CT scans use different parameter settings, there is a significant registration error between the two. PET and CT also belong to two completely different image domains, so directly generating CT images from low-resolution PET images that lack spatial structure information is very difficult, and the quality of the generated images needs improvement.
  • the purpose of the present invention is to overcome the above-mentioned defects of the prior art and provide a deep learning framework and method for generating CT images from PET images. Through end-to-end deep learning, attenuation correction of the first PET image is realized, and the generated pseudo-CT images assist accurate localization and detection of lesions.
  • a deep learning method for generating a CT image from a PET image includes:
  • obtaining an attenuation correction coefficient map by inverse calculation using the first PET image without attenuation correction and the corresponding second PET image with attenuation correction;
  • constructing a graph-to-graph generative adversarial network comprising a generator and a discriminator, where the generator takes the attenuation correction coefficient map as input and the CT modal image as output, and the generator's input image serves as the discriminator's discriminant condition for distinguishing the authenticity of the generated CT modal images;
  • optimizing and training the graph-to-graph generative adversarial network to obtain the mapping relationship between the first PET image without attenuation correction and the CT modal image.
  • a deep learning framework for generating CT images from PET images.
  • the framework includes an attenuation correction coefficient graph calculation module and a graph-to-graph generative adversarial network, where:
  • the attenuation correction coefficient map calculation module is configured to obtain the attenuation correction coefficient map by inverse calculation using the first PET image without attenuation correction and the corresponding second PET image with attenuation correction;
  • the graph-to-graph generative adversarial network includes a generator and a discriminator, wherein the generator takes the attenuation correction coefficient map as input and the CT modal image as output, and the generator's input image serves as the discriminator's discriminant condition to distinguish the authenticity of the generated CT modality images.
  • the present invention has the advantage of providing a deep learning framework to replace the various functions of the CT modality in the PET/CT imaging system. Based on end-to-end mapping learning, it realizes noise reduction and artifact removal of the first PET image without attenuation correction, thereby implementing an attenuation correction process for the first PET image and obtaining an attenuation correction coefficient map with richer structural features.
  • the CT image is reconstructed based on the obtained attenuation correction coefficient map, which significantly reduces the reconstruction difficulty of the CT image and improves the image quality of the CT reconstruction.
  • a joint multiple loss function is designed for the graph-to-graph generative adversarial network model of CT reconstruction, which further ensures the quality of the output image.
  • FIG. 1 is a schematic flow chart of generating a CT image based on a deep learning framework according to an embodiment of the present invention
  • FIG. 2 is an overall structural diagram of a residual Unet network according to an embodiment of the present invention.
  • FIG. 3 is an encoder network structure diagram according to an embodiment of the present invention.
  • FIG. 4 is a structural diagram of a residual module network according to an embodiment of the present invention.
  • FIG. 5 is a decoder network structure diagram according to an embodiment of the present invention.
  • FIG. 6 is a structural diagram of a discriminator in a graph-to-graph generative adversarial network according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a PET-CT image synthesis result according to an embodiment of the present invention.
  • the present invention designs a deep learning framework to replace the role played by the CT modality in the imaging system.
  • the deep learning framework as a whole consists of three parts: the first part, an end-to-end attenuation correction process, realizes the mapping between the first PET image without attenuation correction and the second PET image after attenuation correction; the second part obtains the attenuation correction coefficient map by inverse calculation of the attenuation correction mechanism; the third part generates the pseudo CT image from the attenuation correction coefficient map.
  • the generated pseudo CT image can be used to assist the doctor in diagnosing the lesion area.
  • the deep learning framework for generating CT images from PET images includes a residual Unet network and a graph-to-graph generative adversarial network, wherein the residual Unet network is used for attenuation correction of PET images (other deep learning models can also be used), and the graph-to-graph generative adversarial network is used to generate CT images corresponding to the PET images.
  • the deep learning method for generating a CT image includes the following steps.
  • Step S110, constructing a residual Unet network for performing attenuation correction on the first PET image without attenuation correction.
  • the attenuation correction process essentially performs noise reduction and removal of artifacts on the unattenuated first PET image to improve image quality.
  • To achieve the noise reduction and artifact removal of the first PET image without attenuation correction, a Unet network with several residual modules is designed; through end-to-end learning and residual feedback, it directly generates a clean PET image, that is, the attenuation-corrected second PET image.
  • the residual Unet network includes an encoder, residual modules and a decoder, with skip connections between the encoder and the decoder. In this way, the problems of vanishing and exploding gradients during training can be alleviated, and information transfer through the network is promoted.
  • the encoder part includes 5 convolution modules. Except for the first convolution module, each convolution module consists of a 2×2 max-pooling operation followed by two consecutive, identical 3×3 convolution operations with stride 1 and ReLU activation. The first convolution module omits the max-pooling operation in order to retain more original image information.
  • the encoded result of each convolution module is passed into the decoder through skip connections to better guide the attenuation correction of the first PET image. As the encoding depth increases, the width of the convolution modules also increases, from an initial 64 channels to 512 channels, to gradually extract deeper image features.
  • the final coding result of the convolution module is down-sampled by a factor of 2 and then sent to the residual module to further extract the depth representation information of the image.
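  • As an illustration, the encoder described above can be sketched in PyTorch as follows; the exact channel progression between 64 and 512 and the use of max pooling for the final 2× down-sampling are assumptions, not specifics taken from the embodiment.

```python
# Minimal PyTorch sketch of the encoder (channel widths are assumed).
import torch.nn as nn

class ConvModule(nn.Module):
    """Two consecutive, identical 3x3 stride-1 convolutions with ReLU,
    optionally preceded by 2x2 max pooling (omitted in the first module)."""
    def __init__(self, in_ch, out_ch, pool=True):
        super().__init__()
        layers = [nn.MaxPool2d(2)] if pool else []
        layers += [nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1), nn.ReLU(inplace=True),
                   nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1), nn.ReLU(inplace=True)]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

class Encoder(nn.Module):
    def __init__(self, in_ch=1, widths=(64, 128, 256, 512, 512)):
        super().__init__()
        mods, prev = [], in_ch
        for i, w in enumerate(widths):
            mods.append(ConvModule(prev, w, pool=(i > 0)))  # first module has no pooling
            prev = w
        self.stages = nn.ModuleList(mods)
        self.down = nn.MaxPool2d(2)  # final 2x down-sampling before the residual modules

    def forward(self, x):
        skips = []
        for stage in self.stages:
            x = stage(x)
            skips.append(x)  # kept for the skip connections to the decoder
        return self.down(x), skips
```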
  • FIG. 4 is a structural diagram of a residual module.
  • three consecutive and identical residual modules are arranged between the encoder and the decoder.
  • Each residual module contains two 3×3 convolution operations with stride 1, 512 channels, and ReLU activation.
  • the output of each residual module is obtained by adding the input of the first convolution operation and the output of the second convolution operation pixel-wise.
  • the final residual block output is passed into the decoder.
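  • A minimal sketch of one such residual module, under the assumption that ReLU is applied after each of the two convolutions before the pixel-wise addition:

```python
import torch.nn as nn

class ResidualModule(nn.Module):
    """Two 3x3 stride-1, 512-channel convolutions with ReLU; the module output
    is the pixel-wise sum of the module input and the second conv's output."""
    def __init__(self, channels=512):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.relu(self.conv2(out))
        return x + out  # pixel-wise addition

# Three consecutive, identical residual modules between encoder and decoder:
bottleneck = nn.Sequential(*[ResidualModule(512) for _ in range(3)])
```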
  • the decoder of FIG. 5 has a structure symmetric to the encoder of FIG. 3, except that the 2×2 max-pooling operation in each convolution module is replaced in the decoder by bilinear interpolation upsampling with a factor of 2.
  • after one convolution operation, the feature map produced by the residual modules is decoded by four consecutive convolution modules. With the help of the skip connections, each upsampled decoded feature map is concatenated along the channel dimension with the encoded feature map of the corresponding resolution before the convolution operations are performed.
  • the decoded result of the final convolutional module is fed into the last convolutional layer, which produces a single-channel attenuation-corrected second PET image. Because the sigmoid activation function is used in the final convolutional layer, the image is normalized to the (0,1) interval.
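  • One decoder module can be sketched as follows; the kernel size of the final single-channel convolution and its 64 input channels are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderModule(nn.Module):
    """2x bilinear upsampling (replacing max pooling), channel-wise
    concatenation with the encoder skip of matching resolution, then two
    3x3 stride-1 convolutions with ReLU."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)

    def forward(self, x, skip):
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        x = torch.cat([x, skip], dim=1)  # fuse along the channel dimension
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

# Final layer: single-channel output squashed to (0, 1) by a sigmoid.
final_layer = nn.Sequential(nn.Conv2d(64, 1, kernel_size=1), nn.Sigmoid())
```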
  • Step S120, based on the PET images before and after attenuation correction, obtaining an attenuation correction coefficient map by inverse calculation of the attenuation correction mechanism.
  • Specifically, the embodiment of the present invention uses the first PET image without attenuation correction and the second PET image after attenuation correction generated in step S110 to obtain the attenuation correction coefficient map by inverse calculation of the attenuation correction mechanism.
  • the CT data represented by the HU value undergoes image registration, energy level conversion and spatial resolution correction to obtain the corresponding attenuation correction coefficient map.
  • the attenuation correction coefficient map is forward projected to obtain the corresponding attenuation correction coefficient sinogram.
  • the attenuation correction coefficient sinogram and the first PET sinogram are subjected to a dot product operation to obtain a second PET sinogram after attenuation correction, and then a final second PET image is obtained through a classical reconstruction algorithm.
  • the attenuation correction process of PET is reversible.
  • the sinogram of the attenuation correction coefficient can be obtained by inverse calculation, and then the attenuation correction coefficient map can be obtained by reconstruction.
  • the Radon transform is used as the forward projection operation from the image domain to the sinogram domain.
  • the filtered backprojection algorithm is used as the reconstruction algorithm from the sinogram to the image domain.
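  • Under these choices, the inverse calculation can be sketched with scikit-image (version 0.19 or later for the filter_name argument); the function and variable names are illustrative, and the small eps guards against division by zero in low-count sinogram bins:

```python
import numpy as np
from skimage.transform import radon, iradon

def attenuation_coefficient_map(pet_nac, pet_ac, eps=1e-6):
    """pet_nac: first PET image (no attenuation correction);
    pet_ac:  second PET image (attenuation corrected), same shape."""
    theta = np.linspace(0.0, 180.0, max(pet_nac.shape), endpoint=False)
    sino_nac = radon(pet_nac, theta=theta)  # Radon transform = forward projection
    sino_ac = radon(pet_ac, theta=theta)
    # Forward model: sino_ac = acf_sino * sino_nac (element-wise product), so
    # the attenuation correction coefficient sinogram is recovered as a ratio:
    acf_sino = sino_ac / (sino_nac + eps)
    # Filtered back-projection reconstructs the coefficient map from its sinogram.
    return iradon(acf_sino, theta=theta, filter_name="ramp")
```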
  • Step S130, constructing a graph-to-graph generative adversarial network model for generating CT modal images.
  • a two-dimensional graph-to-graph generative adversarial network is used to convert the obtained attenuation correction coefficient map into a CT modality image.
  • the present invention adds a discriminator based on the residual Unet network, introduces a generative adversarial mechanism, and constructs a graph-to-graph generative adversarial network to further improve the generation quality of CT images.
  • the graph-to-graph generative adversarial network uses the residual Unet network of step S110 as the generator to transform the attenuation correction coefficient map into a CT image, and the discriminator adopts a fully convolutional network structure that takes the attenuation correction coefficient map as the discriminant condition to distinguish whether the generated image is real or fake.
  • the discriminator has 4 convolutional layers and a final output layer; each convolutional layer contains a 4×4 convolution operation with stride 2, a batch normalization operation, and a LeakyReLU activation function with slope 0.2.
  • the numbers of convolution kernels in the four convolutional layers are 64, 128, 256 and 256, respectively.
  • the single-channel image patches are normalized to the (0,1) range by the sigmoid function.
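  • A sketch of this conditional discriminator (the kernel size of the output layer is an assumption; the attenuation correction coefficient map is concatenated with the real or generated CT image as the condition):

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Four 4x4 stride-2 conv layers with batch norm and LeakyReLU(0.2),
    channel widths 64/128/256/256, then a single-channel patch output
    normalized to (0, 1) by a sigmoid."""
    def __init__(self, in_ch=2, widths=(64, 128, 256, 256)):
        super().__init__()
        layers, prev = [], in_ch
        for w in widths:
            layers += [nn.Conv2d(prev, w, 4, stride=2, padding=1),
                       nn.BatchNorm2d(w),
                       nn.LeakyReLU(0.2, inplace=True)]
            prev = w
        layers += [nn.Conv2d(prev, 1, 4, padding=1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, acf_map, ct):
        # Condition on the generator's input image (the coefficient map).
        return self.net(torch.cat([acf_map, ct], dim=1))
```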
  • the generator in the graph-to-graph generative adversarial network can also adopt a residual Unet network structure different from that in step S110, that is, other types of deep learning models can be used to implement the generator.
  • Step S140, designing the loss functions of the residual Unet network and the graph-to-graph generative adversarial network.
  • a more complex joint loss function is designed, and the iterative training of the network is optimized by combining multiple loss functions, so as to further ensure that the generated CT images meet the needs of auxiliary medical diagnosis.
  • the present invention includes both the residual Unet network mapping the first PET image without attenuation correction to the second PET image after attenuation correction, and the graph-to-graph generative adversarial network mapping the attenuation correction coefficient map to the CT image, so an independent loss function needs to be designed for each of the two models.
  • the mean absolute error (MAE) is used to denoise the first PET image without attenuation correction when training the residual Unet model, and the loss function can be expressed as $\mathrm{MAE}=\frac{1}{N}\sum_{i=1}^{N}\left|f(x)_i-y_i\right|$ (1), where $f(\cdot)$ denotes the mapping learned by the residual Unet, and:
  • x represents the first PET image without attenuation correction
  • y represents the second PET image after attenuation correction
  • N represents the total number of pixels in each image.
  • In addition, a perceptual loss function (PCP) is also introduced.
  • $\mathrm{Loss}_{CT}=\mathrm{MAE}+\lambda_1\cdot\mathrm{PCP}+\lambda_2\cdot\mathrm{cGAN}_g$ (2)
  • where PCP represents the perceptual loss function based on a pre-trained model (such as VGG19), cGAN_g represents the adversarial loss function, x represents the attenuation correction coefficient map, and y represents the corresponding CT image. The perceptual loss can be written as $\mathrm{PCP}=\frac{1}{n}\sum_{i=1}^{n}\frac{1}{W_iH_i}\left\|\varphi_i\left(G(x)\right)-\varphi_i\left(y\right)\right\|$, where $\varphi_i$ denotes the i-th encoding convolutional layer of the pre-trained model, $W_i$ and $H_i$ denote the length and width of the feature map of the i-th encoding convolutional layer, and n denotes the number of selected convolutional layers.
  • a sigmoid-activation-based cross-entropy function is used as the adversarial loss functions $\mathrm{cGAN}_g$ and $\mathrm{cGAN}_d$ for the graph-to-graph generative adversarial network.
  • for the generator G and the discriminator D, the generative adversarial loss functions can be represented as $\mathrm{cGAN}_d=-\mathbb{E}_{x,y}\left[\log D(x,y)\right]-\mathbb{E}_{x}\left[\log\left(1-D\left(x,G(x)\right)\right)\right]$ and $\mathrm{cGAN}_g=-\mathbb{E}_{x}\left[\log D\left(x,G(x)\right)\right]$,
  • where x represents the input image of the generative adversarial network, that is, the attenuation correction coefficient map; x also serves as the discriminator's discriminant condition.
  • G(x) represents the generated CT modality image, and y represents the real CT image.
  • ⁇ 1 and ⁇ 2 can be set empirically, for example, set to 1.0 and 0.01, respectively. According to the experimental simulation, the setting of the loss weight can be adjusted.
  • Step S150, optimizing the residual Unet network with the set loss function as the target.
  • Step S160, optimizing the graph-to-graph generative adversarial network with the set loss function as the target.
  • the calculated attenuation correction coefficient map is used as the input of the graph-to-graph generative adversarial network, and the CT image is used as its reference, and the Adam optimizer is used to train the graph-to-graph generative adversarial network to gradually reach a convergence state.
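  • A minimal training-loop sketch under stated assumptions: gen is the generator (residual Unet), disc the conditional discriminator, loader yields (coefficient map, CT) pairs, and pcp is the perceptual loss above; the learning rate and alternating update scheme are illustrative and not specified in the embodiment.

```python
import torch
import torch.nn as nn

def train(gen, disc, loader, pcp, epochs=100, device="cuda"):
    bce = nn.BCELoss()
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    for _ in range(epochs):
        for acf_map, ct in loader:
            acf_map, ct = acf_map.to(device), ct.to(device)
            fake_ct = gen(acf_map)

            # Discriminator step (cGAN_d): real pairs -> 1, generated -> 0.
            d_real = disc(acf_map, ct)
            d_fake = disc(acf_map, fake_ct.detach())
            loss_d = bce(d_real, torch.ones_like(d_real)) + \
                     bce(d_fake, torch.zeros_like(d_fake))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()

            # Generator step: MAE + 1.0 * PCP + 0.01 * cGAN_g (Eq. 2).
            d_fake = disc(acf_map, fake_ct)
            loss_g = torch.abs(fake_ct - ct).mean() \
                     + 1.0 * pcp(fake_ct, ct) \
                     + 0.01 * bce(d_fake, torch.ones_like(d_fake))
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```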
  • the experimental architecture proposed in the present invention can also be applied to other image types such as MRI modalities.
  • FIG. 7 is a schematic diagram of a PET-CT image synthesis result.
  • subfigure (a) is the generated pseudo CT image, (b) is the real CT image, (c) is the generated AC PET image (i.e., attenuation-corrected PET image), (d) is the real AC PET image, (e) is the generated PET/CT fusion map, and (f) is the real PET/CT fusion map.
  • the method of the present invention can well realize the attenuation correction of PET, and obtain a clean second PET image.
  • the generated pseudo-CT images also have sufficient anatomical structure information, which can assist doctors in clinical diagnosis and localization of the lesion area.
  • the method proposed in the present invention can largely replace the role of the CT modality in PET imaging systems, which helps PET imaging systems get rid of the dependence on anatomical modalities and achieve the goal of radiation reduction.
  • the PET attenuation correction and CT image reconstruction can be realized at the same time, which gets rid of the dependence of the PET imaging system on the CT modality.
  • the attenuation correction coefficient map obtained by inverse calculation reduces the difficulty of CT reconstruction and improves CT image quality, and the joint loss function further enhances the quality of the reconstructed CT images.
  • the present invention may be a system, method and/or computer program product.
  • the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present invention.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, mechanically encoded devices such as punch cards or raised structures in grooves on which instructions are stored, and any suitable combination of the above.
  • Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber optic cables), or electrical signals transmitted through wires.
  • the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium in the respective computing/processing device.
  • the computer program instructions for carrying out the operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • In some embodiments, custom electronic circuitry, such as programmable logic circuits, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), may execute the computer readable program instructions to implement various aspects of the present invention.
  • These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer readable program instructions may also be stored in a computer readable storage medium; these instructions cause a computer, programmable data processing apparatus and/or other equipment to operate in a specific manner, so that the computer readable medium on which the instructions are stored comprises an article of manufacture including instructions that implement various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other equipment, causing a series of operational steps to be performed thereon to produce a computer-implemented process, such that the instructions executing on the computer, other programmable data processing apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowcharts or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks therein, can be implemented in dedicated hardware-based systems that perform the specified functions or actions, or in a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation in a combination of software and hardware are all equivalent.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present invention relates to a deep learning framework and method for generating a computed tomography (CT) image from a positron emission tomography (PET) image. The method comprises: obtaining, by inverse calculation of an attenuation correction mechanism, an attenuation correction coefficient map using a first PET image that has not undergone attenuation correction and a corresponding second PET image that has undergone attenuation correction; and obtaining, by fitting a graph-to-graph generative adversarial network, a mapping relationship between the attenuation correction coefficient map and a CT modal image using the obtained attenuation correction coefficient map, so as to realize a generation process from a PET image to the CT modal image. In the graph-to-graph generative adversarial network, the generator takes the attenuation correction coefficient map as input, produces the CT modal image as output, and uses the generator's input image as the discriminator's discriminant condition to distinguish the authenticity of the generated CT modal image. According to the present invention, PET attenuation correction and CT image reconstruction can both be achieved, the difficulty of CT image reconstruction can be effectively reduced by means of the attenuation correction coefficient map, and CT image quality is improved.
PCT/CN2020/126384 2020-11-04 2020-11-04 Deep learning framework and method for generating a CT image from a PET image WO2022094779A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/126384 WO2022094779A1 (fr) 2020-11-04 2020-11-04 Deep learning framework and method for generating a CT image from a PET image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/126384 WO2022094779A1 (fr) 2020-11-04 2020-11-04 Deep learning framework and method for generating a CT image from a PET image

Publications (1)

Publication Number Publication Date
WO2022094779A1 true WO2022094779A1 (fr) 2022-05-12

Family

ID=81458440

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/126384 WO2022094779A1 (fr) 2020-11-04 2020-11-04 Deep learning framework and method for generating a CT image from a PET image

Country Status (1)

Country Link
WO (1) WO2022094779A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133996A (zh) * 2017-03-21 2017-09-05 上海联影医疗科技有限公司 Method for generating an attenuation map for PET data reconstruction, and PET/CT system
US20190130569A1 (en) * 2017-10-26 2019-05-02 Wisconsin Alumni Research Foundation Deep learning based data-driven approach for attenuation correction of pet data
CN109697741A (zh) * 2018-12-28 2019-04-30 上海联影智能医疗科技有限公司 PET image reconstruction method, apparatus, device, and medium
CN111179372A (zh) * 2019-12-31 2020-05-19 上海联影智能医疗科技有限公司 Image attenuation correction method and apparatus, computer device, and storage medium
CN111340903A (zh) * 2020-02-10 2020-06-26 深圳先进技术研究院 Method and system for generating a synthetic PET-CT image based on a non-attenuation-corrected PET image

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116502701A (zh) * 2023-06-29 2023-07-28 合肥锐世数字科技有限公司 Attenuation correction method and apparatus, training method and apparatus, and imaging method and system
CN116502701B (zh) * 2023-06-29 2023-10-20 合肥锐世数字科技有限公司 Attenuation correction method and apparatus, training method and apparatus, and imaging method and system

Similar Documents

Publication Publication Date Title
Kang et al. Cycle‐consistent adversarial denoising network for multiphase coronary CT angiography
CN111325686B (zh) A deep learning-based low-dose PET three-dimensional reconstruction method
Willemink et al. Iterative reconstruction techniques for computed tomography Part 1: technical principles
KR20200044222A (ko) Method and apparatus for processing unmatched low-dose X-ray computed tomography images using a neural network
Huang et al. Learning a deep CNN denoising approach using anatomical prior information implemented with attention mechanism for low-dose CT imaging on clinical patient data from multiple anatomical sites
Ko et al. Rigid and non-rigid motion artifact reduction in X-ray CT using attention module
CN115953494B (zh) Multi-task high-quality CT image reconstruction method based on low dose and super-resolution
CN112435164B (zh) Simultaneous super-resolution and denoising method for low-dose CT lung images based on a multi-scale generative adversarial network
Huang et al. Deep cascade residual networks (DCRNs): optimizing an encoder–decoder convolutional neural network for low-dose CT imaging
Hou et al. CT image quality enhancement via a dual-channel neural network with jointing denoising and super-resolution
CN111340903B (zh) Method and system for generating a synthetic PET-CT image based on a non-attenuation-corrected PET image
Xi et al. High-kVp assisted metal artifact reduction for X-ray computed tomography
Chao et al. Dual-domain attention-guided convolutional neural network for low-dose cone-beam computed tomography reconstruction
Liu et al. MRCON-Net: Multiscale reweighted convolutional coding neural network for low-dose CT imaging
Chan et al. An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction
WO2022094779A1 (fr) Deep learning framework and method for generating a CT image from a PET image
Feng et al. Dual residual convolutional neural network (DRCNN) for low-dose CT imaging
Yin et al. Unpaired low-dose CT denoising via an improved cycle-consistent adversarial network with attention ensemble
WO2022094911A1 (fr) Weight-shared dual-region generative adversarial network and image generation method therefor
CN112419175A (zh) Weight-sharing dual-region generative adversarial network and image generation method thereof
Zhou et al. Limited view tomographic reconstruction using a deep recurrent framework with residual dense spatial-channel attention network and sinogram consistency
Wu et al. Unsharp structure guided filtering for self-supervised low-dose CT imaging
Xia et al. Dynamic controllable residual generative adversarial network for low-dose computed tomography imaging
WO2022120661A1 (fr) Prior-guided network for multi-task medical image synthesis
CN112419173B (en) Deep learning framework and method for generating CT image from PET image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20960242

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20960242

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.12.2023)