CN115423893A - Low-dose PET-CT reconstruction method based on multi-modal structure similarity neural network - Google Patents

Low-dose PET-CT reconstruction method based on multi-modal structure similarity neural network

Info

Publication number
CN115423893A
CN115423893A (application CN202211366435.9A)
Authority
CN
China
Prior art keywords
image
pet
network
modal
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211366435.9A
Other languages
Chinese (zh)
Other versions
CN115423893B (en)
Inventor
Shing-Tung Yau (丘成桐)
Jijun Liu (刘继军)
Dong Wang (王冬)
Liyan Wang (王丽艳)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Applied Mathematics Center
Original Assignee
Nanjing Applied Mathematics Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Applied Mathematics Center filed Critical Nanjing Applied Mathematics Center
Priority to CN202211366435.9A priority Critical patent/CN115423893B/en
Publication of CN115423893A publication Critical patent/CN115423893A/en
Application granted granted Critical
Publication of CN115423893B publication Critical patent/CN115423893B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/003: Reconstruction from projections, e.g. tomography
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761: Proximity, similarity or dissimilarity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a low-dose PET-CT reconstruction method based on a multi-modal image structure similarity neural network. Its core idea is to exploit the structural information of the CT image so that a PET image acquired with a short per-window acquisition time achieves the effect of a long acquisition, which helps reduce motion artifacts and improves patient comfort during scanning. By constructing a multi-modal, multi-branch deep convolutional neural network with a self-attention mechanism and introducing a multi-modal image structure similarity term into the loss function to learn the structural information in the CT image, the peak signal-to-noise ratio of the low-dose PET image is improved by 9%, successfully reaching the effect of the standard-dose PET image, while its structural information is markedly better than that of the standard-dose PET image. In this method, a penalty term describing multi-modal image structure similarity is introduced into the loss function while the CT image information is exploited, yielding a high-precision PET image containing more structural information.

Description

Low-dose PET-CT reconstruction method based on multi-modal structure similarity neural network
Technical Field
The invention belongs to the technical field of medical imaging, and particularly relates to a novel low-dose PET-CT reconstruction method based on a multi-modal structure similarity deep convolutional neural network.
Background
PET-CT (Positron Emission Tomography-Computed Tomography) is a multi-modal nuclear imaging technique. The PET-CT image can quantitatively reflect the glucose metabolism of the human body, improve the specificity and sensitivity of tumor detection, determine the position and boundary of a primary lesion, and detect distant lymphatic and hematogenous metastases. It offers clear advantages in precise cancer diagnosis and treatment and is widely applied.
PET-CT imaging requires two separate data acquisitions. First, a whole-body CT scan of the patient is performed, which provides attenuation correction and structural localization for the subsequent PET imaging and improves its precision. Second, a PET scan is performed: a radioactive radionuclide tracer is injected intravenously before scanning, the gamma-ray pairs emitted by the decaying nuclide are then collected by the detector, and high-precision PET imaging is carried out using the localization information obtained from the CT scan.
Due to objective limitations of the imaging mechanism and clinical practice, current PET-CT imaging faces the following challenges. On the one hand, the nuclide tracer injected before the PET scan is radioactive, which inevitably affects the patient's health; on the other hand, data acquisition for PET imaging is long: each window position typically requires 60 to 90 seconds of scanning, and the whole procedure usually takes 20 to 30 minutes. There is therefore an urgent clinical need to reduce both the dose and the acquisition time of PET imaging. Doing so, however, reduces the measured data (photon counts), increases noise, and lowers the signal-to-noise ratio of the reconstructed PET image. How to reconstruct a high-quality PET image from low-dose data is thus a problem to be solved.
Current low-dose PET-CT reconstruction methods can generally be classified into model-based methods and data-based methods. Although both guarantee the accuracy of the low-dose PET-CT image to some extent, neither fully exploits the information of the CT image in PET-CT. In fact, the CT image contains structural information about organs and lesions that substantially affects the accuracy of the PET-CT image, and making full use of this information during reconstruction can improve the accuracy of the reconstructed image. Based on these facts, this patent proposes a multi-modal, multi-branch deep neural network model with a self-attention mechanism and introduces a multi-modal image similarity error into the loss function to learn the structural information in the CT image, thereby improving the reconstruction accuracy of low-dose PET while fusing the anatomical structure information of the CT image to sharpen the boundaries of organs and lesions in the PET image.
Disclosure of Invention
Purpose of the invention: aiming at the defects and shortcomings of existing low-dose PET-CT reconstruction methods, the invention provides a multi-modal structure similarity deep convolutional neural network model which, by effectively extracting and exploiting the structural information in the CT image, enables a low-dose PET image (10 seconds of scanning per window position) to achieve the effect of a conventional-dose PET image (60 seconds per window position), improving the reconstruction accuracy of the low-dose PET image while sharpening the boundaries of organs and lesions.
The technical scheme is as follows: the invention provides a brand-new low-dose PET-CT reconstruction method comprising the following 5 steps:
(a) Collecting data: collect clinical 60-second PET-CT data and 10-second PET-CT data from the same patients; randomly divide the collected data set into a training set, a validation set and a test set in a given proportion, and preprocess it;
(b) Establishing a model: construct a multi-modal, multi-branch deep convolutional neural network with a self-attention mechanism based on deep learning; the network input is the 10-second PET image and the CT image, and the network output is the 60-second PET image;
(c) Designing a loss function: introduce a multi-modal structural similarity penalty (MMSP) term describing the similarity of the multi-modal images into the loss function, learning the structural information in the CT image from the gradient information of the images;
(d) Training the model: optimize the deep neural network model constructed in step (b) on the training and validation sets with a fully supervised learning method to obtain the optimal network parameters;
(e) Testing the model: after training, test the network obtained in step (d) on the test set to obtain reconstructed PET images, compare them numerically with conventional-dose PET images, and verify the effectiveness of the reconstruction method.
Specifically, the CT scan in step (a) is a clinical routine low-dose non-enhanced three-dimensional whole-body CT scan, each two-dimensional image being 512x512 in size; the acquisition time per window position is 60 seconds for the conventional-dose PET scan and 10 seconds for the low-dose PET scan, and both are reconstructed with the classical 3D-OSEM (3D Ordered Subset Expectation Maximization) algorithm, yielding a 60-second three-dimensional whole-body PET image and a corresponding 10-second three-dimensional whole-body PET image, each image being 144x144 in size.
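The 3D-OSEM algorithm referenced here iterates the multiplicative MLEM update over ordered subsets of the projection data. A rough numpy sketch of the OSEM update, with a small random linear system standing in for the scanner's system matrix (all sizes and names here are illustrative, not the clinical 3D implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model: y = A @ x, where A plays the role of the PET system matrix.
n_pix, n_det = 16, 64
A = rng.uniform(0.0, 1.0, size=(n_det, n_pix))
x_true = rng.uniform(0.5, 2.0, size=n_pix)
y = A @ x_true  # noiseless measurements for this sketch

def osem(A, y, n_subsets=4, n_iter=20):
    """Ordered Subset Expectation Maximization: one multiplicative
    MLEM update per subset of detector rows, cycled n_iter times."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for s in subsets:
            As, ys = A[s], y[s]
            ratio = ys / np.maximum(As @ x, 1e-12)            # measured / estimated projections
            sens = np.maximum(As.T @ np.ones(len(s)), 1e-12)  # subset sensitivity image
            x = x * (As.T @ ratio) / sens                     # multiplicative update keeps x >= 0
    return x

x_rec = osem(A, y)
err0 = np.linalg.norm(A @ np.ones(n_pix) - y)  # data misfit of the initial guess
err = np.linalg.norm(A @ x_rec - y)            # data misfit after OSEM
```

The multiplicative form is what makes the iterate automatically nonnegative, which is why the update needs no explicit positivity constraint.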
Specifically, the data collected in step (a) are randomly divided into a training set, a validation set and a test set in a ratio of 8:1:1. The preprocessing of the data set is implemented as follows: first, the pixel values of all images (10-second PET image, CT image, 60-second PET image) are normalized to [0,1]; second, data diversity is increased by operations such as random horizontal flipping, random vertical flipping and random 45-degree rotation; then, a threshold-segmentation algorithm is used to segment the human-body region in the CT image, eliminating interference from the scanner gantry, background artifacts, noise, etc.; finally, the images in the training set are randomly cropped into small patches, which improves training efficiency and increases the sample size.
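The preprocessing pipeline described above can be sketched in numpy as follows. This is a minimal illustration under stated assumptions: the synthetic images, patch size and threshold value are placeholders, the 45-degree rotation is omitted for brevity, and all image sizes are set to 144x144 for compactness.

```python
import numpy as np

rng = np.random.default_rng(42)

def normalize01(img):
    """Scale pixel values to [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def random_flip(pet10, ct, pet60):
    """Apply the SAME random horizontal/vertical flip to all three modalities,
    so the spatial correspondence between them is preserved."""
    imgs = [pet10, ct, pet60]
    if rng.random() < 0.5:
        imgs = [np.flip(im, axis=1) for im in imgs]  # horizontal flip
    if rng.random() < 0.5:
        imgs = [np.flip(im, axis=0) for im in imgs]  # vertical flip
    return imgs

def body_mask(ct, thresh=0.1):
    """Simple threshold segmentation of the body region in a normalized CT slice
    (thresh is an illustrative value, not the patent's)."""
    return (ct > thresh).astype(ct.dtype)

def random_patch(img, ps=64):
    """Crop a random ps x ps patch: cheaper training samples, larger sample size."""
    h, w = img.shape
    i = rng.integers(0, h - ps + 1)
    j = rng.integers(0, w - ps + 1)
    return img[i:i + ps, j:j + ps]

# Synthetic stand-ins for one aligned (10 s PET, CT, 60 s PET) triple.
ct = normalize01(rng.uniform(-1000, 1000, size=(144, 144)))
pet10 = normalize01(rng.uniform(0, 10, size=(144, 144)))
pet60 = normalize01(rng.uniform(0, 10, size=(144, 144)))

pet10, ct, pet60 = random_flip(pet10, ct, pet60)
ct = ct * body_mask(ct)          # suppress gantry/background outside the body
patch = random_patch(pet10, ps=64)
```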
specifically, the multi-branch deep convolutional neural network with a self-attention mechanism in the step (b) is recorded as
Figure 493244DEST_PATH_IMAGE001
Wherein
Figure 995087DEST_PATH_IMAGE002
Are network parameters. Specifically, the proposed deep convolutional neural network can be divided into two parts, front and back, and the front and back are connected by 2 residual modules. The front end of the network comprises two branches, namely a CT branch and a PET branch. Each branch is formed by sequentially connecting 5 2D convolution modules and 1 self-attention module, wherein each 2D convolution module consists of a convolution layer, a batch normalization layer (BN) and an activation layer, the number of convolution kernels is 96, the size of the convolution kernels is 5x5, the step length is 1, and the activation function is ReLU; the self-attention module extracts global information between the pixels of the image through 3 1 × 1 convolution modules and inner product operations. The network back end splices the extracted CT image features and PET image features and is formed by sequentially connecting 5 deconvolution modules and 2 self-attention modules, wherein each 2D deconvolution module consists of an deconvolution layer, a batch normalization layer and an activation layer, the number of convolution kernels is 96, the size of the convolution kernels is 5x5, the step length is 1, and the activation function is ReLU. The network inputs the PET image and the corresponding CT image for 10 seconds and outputs the PET image after reconstruction.
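The self-attention module described above (three 1x1 convolutions followed by an inner-product attention map) can be sketched in plain numpy. The channel counts, weight initialization, and residual scale gamma below are illustrative assumptions, not the patent's values; a 1x1 convolution over a C x H x W feature map is just a channel-mixing matrix applied at every pixel.

```python
import numpy as np

rng = np.random.default_rng(1)

def self_attention(x, Wf, Wg, Wh, gamma=0.1):
    """Self-attention over a C x H x W feature map.

    Wf, Wg, Wh play the role of the three 1x1 convolutions; the inner
    product f^T g yields an (H*W) x (H*W) attention map that lets every
    pixel aggregate global information from every other pixel."""
    C, H, W = x.shape
    flat = x.reshape(C, H * W)                 # pixels as columns
    f, g, h = Wf @ flat, Wg @ flat, Wh @ flat  # 1x1 convs = channel mixing
    logits = f.T @ g                           # (HW, HW) pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)    # softmax over source pixels
    out = flat + gamma * (h @ attn.T)          # residual connection
    return out.reshape(C, H, W), attn

C, H, W = 8, 6, 6
x = rng.standard_normal((C, H, W))
Wf = rng.standard_normal((C // 2, C)) * 0.1   # query projection
Wg = rng.standard_normal((C // 2, C)) * 0.1   # key projection
Wh = rng.standard_normal((C, C)) * 0.1        # value projection
y, attn = self_attention(x, Wf, Wg, Wh)
```

Row j of `attn` sums to 1 and gives the weights with which pixel j attends to every other pixel, which is exactly the global (non-local) information the convolutional layers alone cannot capture.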
In particular, the loss function in step (c) is

L(Θ) = MSE(f(x, z; Θ), y) + α(1 - SSIM(f(x, z; Θ), y)) + β · MMSP(f(x, z; Θ), z),

where f(x, z; Θ) is the output of the network (i.e. the reconstructed PET image) for the 10-second PET input x and the CT input z, Θ are the network parameters, y is the network label (i.e. the 60-second PET image), z is the CT image, and α and β are weights;
MSE (Mean Square Error) represents the mean square error, defined as:

MSE(u, y) = (1/N) Σ_i (u_i - y_i)²,

where u is the reconstructed image and N is the number of pixels.
SSIM (Structural Similarity Index Measure) represents the structural similarity coefficient, defined as:

SSIM(u, y) = (2 μ_u μ_y + c1)(2 σ_uy + c2) / ((μ_u² + μ_y² + c1)(σ_u² + σ_y² + c2)),

where μ and σ denote the means, variances and covariance of the two images and c1, c2 are small stabilizing constants; it describes the degree of similarity between two images using their statistical information. MMSP (Multi-Modality Structural similarity Penalty) is a multi-modal structural similarity penalty term defined through the gradient information of the images; it depicts the degree of structural similarity between the multi-modal images.
Minimizing L(Θ) requires the reconstructed PET image to be as close as possible to the 60-second PET image in the least-squares sense, while also matching the structural information in the CT image.
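A numpy sketch of the three loss terms: the MSE and the (global, single-scale) SSIM below follow their standard definitions, while the MMSP term is written here as a least-squares penalty on the difference of finite-difference gradient magnitudes, which is one plausible reading of "uses the gradient information of the image". The exact MMSP formula and the weights alpha and beta are assumptions for illustration; the patent gives them only as equation images.

```python
import numpy as np

def mse(u, y):
    """Mean square error between reconstruction u and label y."""
    return float(np.mean((u - y) ** 2))

def ssim_global(u, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-scale global SSIM; images assumed normalized to [0, 1]."""
    mu_u, mu_y = u.mean(), y.mean()
    var_u, var_y = u.var(), y.var()
    cov = ((u - mu_u) * (y - mu_y)).mean()
    return float((2 * mu_u * mu_y + c1) * (2 * cov + c2)
                 / ((mu_u ** 2 + mu_y ** 2 + c1) * (var_u + var_y + c2)))

def grad(img):
    """Forward-difference image gradient (gx, gy), same shape as img."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def mmsp(u, z):
    """ASSUMED form of the multi-modal structure penalty: match the
    gradient magnitudes of the reconstructed PET image u and CT image z."""
    gux, guy = grad(u)
    gzx, gzy = grad(z)
    return float(np.mean((np.hypot(gux, guy) - np.hypot(gzx, gzy)) ** 2))

def loss(u, y, z, alpha=0.5, beta=0.1):
    """Total loss: least-squares data fit + structural terms (weights illustrative)."""
    return mse(u, y) + alpha * (1.0 - ssim_global(u, y)) + beta * mmsp(u, z)

rng = np.random.default_rng(7)
y = rng.uniform(0, 1, size=(32, 32))
```

By construction, every term vanishes when the reconstruction equals the label and shares the CT gradients, so the minimum of the total loss is the desired image.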
Specifically, step (d) adopts a fully supervised training method, taking the 10-second PET image patches and CT image patches as network inputs and the corresponding 60-second PET image patches as labels to train the network. After the network is initialized, the network parameters are optimized with the ADAM algorithm, and the optimal network parameters Θ* are obtained after training.
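The ADAM optimization used in step (d) can be illustrated on a toy fully supervised least-squares problem. The update rule below is the standard Adam algorithm (first- and second-moment estimates with bias correction); the learning rate, betas and step count are illustrative, not the values of Table 1.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy supervised problem: learn w so that X @ w approximates the labels t.
X = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
t = X @ w_true

def adam_fit(X, t, lr=0.02, beta1=0.9, beta2=0.999, eps=1e-8, n_steps=2000):
    """Standard Adam updates on the MSE loss of a linear model."""
    w = np.zeros(X.shape[1])
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for step in range(1, n_steps + 1):
        g = 2.0 * X.T @ (X @ w - t) / len(t)   # gradient of the MSE loss
        m = beta1 * m + (1 - beta1) * g        # 1st-moment (mean) estimate
        v = beta2 * v + (1 - beta2) * g ** 2   # 2nd-moment (uncentered var) estimate
        m_hat = m / (1 - beta1 ** step)        # bias correction
        v_hat = v / (1 - beta2 ** step)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

w_fit = adam_fit(X, t)
final_mse = float(np.mean((X @ w_fit - t) ** 2))
```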
Specifically, the testing process in step (e) is implemented as follows: the data in the test set are fed one by one into the network f(·; Θ*) obtained by training in step (d), with the 10-second PET image and the corresponding CT image as network input; the network output is the reconstructed PET image f(x, z; Θ*). The reconstructed PET image is compared numerically with the 60-second PET image, and objective evaluation indices are computed.
The method provided by the invention is compared in many respects with other leading-edge multi-modal reconstruction methods, demonstrating the advantages of the invention.
Beneficial effects: for low-dose PET-CT reconstruction, the invention provides a brand-new deep convolutional neural network model based on multi-modal image structure similarity, successfully making a 10-second PET image reach the effect of a 60-second PET image. Compared with existing multi-modal reconstruction methods, the method effectively improves the reconstruction accuracy of the low-dose PET image and the clarity of organ and lesion boundaries by means of the structural information in the CT image.
Drawings
FIG. 1 is an overall flow chart of the proposed method of the present invention;
FIG. 2 is a multi-modal structure similarity deep neural network model proposed by the present invention;
FIG. 3 shows the loss curves of the proposed network model during training; left: training error; right: validation error;
FIG. 4 compares the reconstruction results of the proposed network model with other advanced multi-modal models (liver tumor); first row: from left to right, the 10-second PET image, CT image and 60-second PET image; second row: reconstruction results of the different models; third row: error-distribution images of the corresponding reconstruction results;
FIG. 5 compares the edge information of the image reconstructed by the proposed model with the 60-second PET image; first row: from left to right, the 10-second PET image, CT image, 60-second PET image and reconstructed image; second row: edge images extracted with the Sobel operator.
Detailed Description
The objects, technical solutions and advantages of the present invention are described in detail below with reference to the accompanying drawings and specific embodiments.
The invention provides a novel low-dose PET-CT multi-modal reconstruction model and establishes an effective solving algorithm. Specifically, a multi-branch deep convolutional neural network with a self-attention mechanism is proposed, and a multi-modal image similarity penalty term is introduced into the loss function to learn the structural information contained in the CT image, successfully making the 10-second PET image achieve the effect of the 60-second PET image and remarkably improving the reconstruction accuracy and the clarity of organ and lesion boundaries. The overall flow chart of the invention is shown in FIG. 1; the method specifically includes the following steps.
(a) Collecting data: first perform a low-dose non-enhanced CT scan to obtain a whole-body CT image, which is used for attenuation correction and structural correction of the subsequent PET scans; then perform a 10-second PET scan and a 60-second PET scan, both reconstructed with the classical 3D-OSEM algorithm; finally, scan each patient with the above steps to obtain a large number of samples, each comprising a CT image, a 10-second PET image and the corresponding 60-second PET image; randomly divide the obtained data into training, validation and test sets in a ratio of 8:1:1;
(b) Establishing a model: a deep convolutional neural network was constructed according to the structure shown in fig. 2. The network can be divided into two parts, front-end and back-end: the front end comprises two branches, and the features of the CT image and the PET image are respectively extracted through a convolution module and a self-attention module; fusing the extracted CT characteristic and PET characteristic at the rear end and outputting a reconstructed PET image;
(c) Designing a loss function: first, the definitions of the mean square error (MSE), the structural similarity coefficient (SSIM) and the multi-modal structural similarity penalty (MMSP) are given as above; then the weight hyperparameters α and β in the loss function are chosen appropriately to obtain the loss function L(Θ);
(d) Training the model: the network parameters are initialized with the Kaiming method; the 10-second PET image patches and CT image patches are taken as network inputs and the corresponding 60-second PET image patches as labels, and the network parameters are optimized with the ADAM algorithm. The hyperparameters used during training are shown in Table 1, where bs is the batch size, p is the number of patches extracted per image, ps is the patch size, η is the learning rate, and N is the number of training epochs.
TABLE 1 network training hyperparameters
(e) Testing the model: the samples in the test set are fed one by one into the trained network to obtain reconstructed PET images, which are compared numerically with the 60-second PET images; the evaluation indices include the Root Mean Square Error (RMSE), the Peak Signal-to-Noise Ratio (PSNR) and the structural similarity coefficient (SSIM).
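The RMSE and PSNR indices can be computed as follows (numpy sketch; images assumed normalized to [0, 1], SSIM as defined in the loss-function section). The synthetic noise level is illustrative only.

```python
import numpy as np

def rmse(u, y):
    """Root mean square error between reconstruction u and reference y."""
    return float(np.sqrt(np.mean((u - y) ** 2)))

def psnr(u, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB; data_range is the maximum
    possible pixel value (1.0 for images normalized to [0, 1])."""
    err = np.mean((u - y) ** 2)
    if err == 0:
        return float("inf")
    return float(10.0 * np.log10(data_range ** 2 / err))

rng = np.random.default_rng(9)
y = rng.uniform(0, 1, size=(64, 64))                         # reference "60 s" image
u = np.clip(y + 0.01 * rng.standard_normal(y.shape), 0, 1)   # slightly noisy reconstruction
```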
To more clearly illustrate the feasibility and advantages of the invention, the performance of the method on the test set and its numerical comparison with other leading multi-modal reconstruction methods are demonstrated below. Fig. 3 illustrates the convergence of the optimization algorithm through the loss curves of the proposed network model during training. Table 2 quantitatively compares the proposed model with state-of-the-art multi-modal imaging models in terms of the average RMSE, average PSNR and average SSIM over all test data. The results show that, compared with the 10-second PET image, the average PSNR of the image reconstructed by the proposed method is improved by 9%, which is also a clear improvement over the other leading multi-modal methods.
TABLE 2 comparison of the results of the multi-modal low-dose PET-CT reconstruction
The effect of the reconstruction is shown in FIG. 4. It can be seen that, compared with the 10-second PET image and the reconstruction results of some existing methods, the image reconstructed by the invention not only improves significantly in objective indices but is also visually closer to the 60-second PET image and contains less noise and fewer artifacts. The error-distribution images of the different methods in the last row support this conclusion.
In order to illustrate the superiority of the method in preserving structural information, the standard Sobel operator is used to extract edges from the 10-second PET image, the CT image, the 60-second PET image and the reconstructed image for visual comparison; the results are shown in FIG. 5. It can be seen that the PET image obtained by the proposed method contains rich structural information: the boundaries of the organs are essentially all extracted, the image is visually clearer than the 10-second PET image, and interference from the scanner gantry, bones, noise, etc. is reduced compared with the 60-second PET image and the CT image.
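The Sobel edge extraction used for the comparison in FIG. 5 can be sketched as follows (plain numpy, 'valid' convolution, with a synthetic step-edge image standing in for a PET slice):

```python
import numpy as np

# Standard Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d_valid(img, k):
    """Plain 'valid' 2D correlation with a 3x3 kernel (no padding)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def sobel_edges(img):
    """Gradient-magnitude edge map via the standard Sobel operator."""
    gx = conv2d_valid(img, SOBEL_X)
    gy = conv2d_valid(img, SOBEL_Y)
    return np.hypot(gx, gy)

# Synthetic image with a vertical step edge at column 8.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
edges = sobel_edges(img)
```

On this synthetic input, the edge map responds only where the 3x3 window straddles the step, which is exactly the organ-boundary behavior inspected visually in FIG. 5.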

Claims (7)

1. The low-dose PET-CT reconstruction method based on the multi-modal structure similarity neural network is characterized by comprising the following steps of:
(a) Collecting data: collecting clinical 60-second PET-CT image data and 10-second PET-CT image data of the same patient; randomly dividing the collected data set into a training set, a verification set and a test set according to a proportion and preprocessing;
(b) Establishing a model: constructing a multi-modal multi-branch depth convolution neural network with a self-attention mechanism based on a deep learning method, wherein the network input is a 10-second PET image and a CT image, and the network output is a 60-second PET image;
(c) Designing a loss function: introducing a penalty term for describing the structure similarity of the multi-mode image into the loss function, and learning the structure information in the CT image by utilizing the gradient information of the image;
(d) Training a model: optimizing the deep convolutional neural network model constructed in the step (b) on a training set and a verification set by adopting a full-supervised learning method;
(e) Testing the model: and (d) after training is finished, testing the network obtained in the step (d) on a test set to obtain a reconstructed PET image and performing numerical comparison with the 60-second PET image.
2. The low-dose PET-CT reconstruction method based on multi-modal structural similarity neural networks according to claim 1, characterized in that: the network model established in steps (b)-(d) is trained with the data collected in step (a), and multi-modal reconstruction is performed with step (e), wherein the reconstruction algorithm for the PET images in step (a) includes but is not limited to the 3D-OSEM reconstruction algorithm.
3. The low-dose PET-CT reconstruction method based on multi-modal structural similarity neural networks according to claim 1, characterized in that: the proportion in which the data collected in step (a) are divided into training, validation and test sets includes but is not limited to 8:1:1; the preprocessing methods for the collected data in step (a) include normalization, random horizontal flipping, random vertical flipping, random 45-degree rotation, threshold segmentation and image patch cropping.
4. The low-dose PET-CT reconstruction method based on multi-modal structural similarity neural networks according to claim 1, characterized in that: the deep convolutional neural network provided in step (b) is divided into a front end and a back end connected by 2 residual modules; the front end of the network comprises two branches, a CT branch and a PET branch; each branch is formed by sequentially connecting 5 2D convolution modules and 1 self-attention module, wherein each 2D convolution module consists of a convolutional layer, a batch normalization layer and an activation layer, with 96 convolution kernels of size 5x5, stride 1 and ReLU activation; the self-attention module extracts global information between the pixels of the image through three 1x1 convolutions and inner-product operations; the back end of the network concatenates the extracted CT image features and PET image features and is formed by sequentially connecting 5 deconvolution modules and 2 self-attention modules, wherein each 2D deconvolution module consists of a deconvolution layer, a batch normalization layer and an activation layer, with 96 convolution kernels of size 5x5, stride 1 and ReLU activation.
5. The low-dose PET-CT reconstruction method based on multi-modal structural similarity neural networks according to claim 1, characterized in that: the loss function in step (c) is

L(Θ) = MSE(f(x, z; Θ), y) + α(1 - SSIM(f(x, z; Θ), y)) + β · MMSP(f(x, z; Θ), z),

wherein Θ are the network parameters, f(x, z; Θ) is the output of the network, i.e. the reconstructed PET image, y is the network label, i.e. the 60-second PET image, z is the CT image, and α and β are weights;
MSE represents the mean square error, defined as:

MSE(u, y) = (1/N) Σ_i (u_i - y_i)²;

SSIM denotes the structural similarity coefficient, defined as:

SSIM(u, y) = (2 μ_u μ_y + c1)(2 σ_uy + c2) / ((μ_u² + μ_y² + c1)(σ_u² + σ_y² + c2)),

which describes the structural similarity between two images using their statistical information;
MMSP represents the multi-modal structural similarity penalty term, defined through the gradient information of the images, which depicts the structural similarity between the multi-modal images.
6. The low-dose PET-CT reconstruction method based on multi-modal structural similarity neural networks according to claim 1, characterized in that: step (d) adopts a fully supervised training method, taking the 10-second PET image patches and CT image patches as network inputs and the corresponding 60-second PET image patches as labels to train the network; after the network is initialized, the network parameters are optimized with the ADAM algorithm, and the optimal network parameters are obtained after training.
7. The low-dose PET-CT reconstruction method based on multi-modal structural similarity neural networks according to claim 1, characterized in that: the process of the test in the step (e) is specifically realized as follows: putting the data in the test set into the network obtained by training in the step (d) one by one, taking the 10-second PET image and the corresponding CT image as network input, and taking the network output as a reconstructed PET image; and carrying out numerical comparison on the reconstructed PET image and the 60-second PET image, and calculating an objective evaluation index.
CN202211366435.9A 2022-11-03 2022-11-03 Low-dose PET-CT reconstruction method based on multi-modal structure similarity neural network Active CN115423893B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211366435.9A CN115423893B (en) 2022-11-03 2022-11-03 Low-dose PET-CT reconstruction method based on multi-modal structure similarity neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211366435.9A CN115423893B (en) 2022-11-03 2022-11-03 Low-dose PET-CT reconstruction method based on multi-modal structure similarity neural network

Publications (2)

Publication Number Publication Date
CN115423893A true CN115423893A (en) 2022-12-02
CN115423893B CN115423893B (en) 2023-04-28

Family

ID=84207395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211366435.9A Active CN115423893B (en) 2022-11-03 2022-11-03 Low-dose PET-CT reconstruction method based on multi-modal structure similarity neural network

Country Status (1)

Country Link
CN (1) CN115423893B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697741A (en) * 2018-12-28 2019-04-30 上海联影智能医疗科技有限公司 A kind of PET image reconstruction method, device, equipment and medium
CN113808106A (en) * 2021-09-17 2021-12-17 浙江大学 Ultra-low dose PET image reconstruction system and method based on deep learning
CN114332287A (en) * 2022-03-11 2022-04-12 之江实验室 Method, device, equipment and medium for reconstructing PET (positron emission tomography) image based on transformer feature sharing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI, ZHENWEI et al.: "A Review of PET/CT Image Reconstruction Techniques", Chinese Journal of Medical Instrumentation *

Also Published As

Publication number Publication date
CN115423893B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
US11844636B2 (en) Dose reduction for medical imaging using deep convolutional neural networks
JP7203852B2 (en) Estimation of full-dose PET images from low-dose PET imaging using deep learning
Xu et al. 200x low-dose PET reconstruction using deep learning
CN113711271A (en) Deep convolutional neural network for tumor segmentation by positron emission tomography
US11769237B2 (en) Multimodal medical image fusion method based on darts network
CN104463840A (en) Fever to-be-checked computer aided diagnosis method based on PET/CT images
CN113160347B (en) Low-dose double-tracer PET reconstruction method based on attention mechanism
US20230059132A1 (en) System and method for deep learning for inverse problems without training data
CN110619635B (en) Hepatocellular carcinoma magnetic resonance image segmentation system and method based on deep learning
Song et al. Bridging the gap between 2D and 3D contexts in CT volume for liver and tumor segmentation
CN109978966A (en) The correction information acquiring method of correction for attenuation is carried out to PET activity distributed image
CN110874860A (en) Target extraction method of symmetric supervision model based on mixed loss function
CN115187689A (en) Swin-Transformer regularization-based PET image reconstruction method
Nguyen et al. 3D Unet generative adversarial network for attenuation correction of SPECT images
CN114881914A (en) System and method for determining three-dimensional functional liver segment based on medical image
Amirkolaee et al. Development of a GAN architecture based on integrating global and local information for paired and unpaired medical image translation
CN112150378B (en) Low-dose whole-body PET image enhancement method based on self-inverse convolution generation countermeasure network
CN115423893B (en) Low-dose PET-CT reconstruction method based on multi-modal structure similarity neural network
CN115984401A (en) Dynamic PET image reconstruction method based on model-driven deep learning
CN112396579A (en) Human tissue background estimation method and device based on deep neural network
CN104720840A (en) Semi-automatic quantification method based on development specificity extraction ratio of dopamine transporter
Xue et al. PET Synthesis via Self-supervised Adaptive Residual Estimation Generative Adversarial Network
Sanaat et al. A novel convolutional neural network for predicting full dose from low dose PET scans
CN112819713B (en) Low-dose PET image noise reduction method based on unsupervised learning
CN112634147B (en) PET image noise reduction method, system, device and medium for self-supervision learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant