CN113450427B - PET image reconstruction method based on joint dictionary learning and deep network - Google Patents


Info

Publication number
CN113450427B
CN113450427B (application CN202110730163.5A)
Authority
CN
China
Prior art keywords
dose
low
dictionary
standard
sample vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110730163.5A
Other languages
Chinese (zh)
Other versions
CN113450427A (en)
Inventor
郑海荣 (Zheng Hairong)
李彦明 (Li Yanming)
万丽雯 (Wan Liwen)
张娜 (Zhang Na)
徐英杰 (Xu Yingjie)
Current Assignee
Shenzhen National Research Institute of High Performance Medical Devices Co Ltd
Original Assignee
Shenzhen National Research Institute of High Performance Medical Devices Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen National Research Institute of High Performance Medical Devices Co Ltd
Priority to CN202110730163.5A
Publication of CN113450427A
Application granted
Publication of CN113450427B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/003 - Reconstruction from projections, e.g. tomography
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 - Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/02 - Arrangements for diagnosis sequentially in different planes; stereoscopic radiation diagnosis
    • A61B 6/03 - Computed tomography [CT]
    • A61B 6/037 - Emission tomography
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 - Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/52 - Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211 - involving processing of medical diagnostic data
    • A61B 6/5229 - combining image data of a patient, e.g. combining a functional image with an anatomical image
    • A61B 6/5247 - combining images from an ionising-radiation diagnostic technique and a non-ionising radiation diagnostic technique, e.g. X-ray and ultrasound
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F 18/214 - Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Mathematical Physics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The application discloses a PET image reconstruction method based on joint dictionary learning and a deep network, and relates to the field of medical image processing. The method comprises the following steps: acquiring training samples, each comprising a low-dose patch together with its corresponding MR patch and standard-dose patch, and obtaining a joint dictionary from the training samples by dictionary learning; constructing a DNN network; training the DNN network on the low-dose sample vectors and standard-dose sample vectors until convergence to obtain a mapping model, where the low-dose, MR and standard-dose sample vectors are the sparse coefficients of the low-dose patch, the MR patch and the standard-dose patch under the low-dose dictionary, the MR dictionary and the standard-dose dictionary, respectively; and preprocessing a low-dose PET image and its corresponding MR image, then predicting the standard-dose PET image using the acquired joint dictionary and the trained DNN network. The application serves to reduce low-dose PET image noise and enhance image detail.

Description

PET image reconstruction method based on joint dictionary learning and deep network
Technical Field
The application relates to the field of medical image processing, and in particular to a PET image reconstruction method based on joint dictionary learning and a deep network.
Background
A PET-MR system combines magnetic resonance imaging (MRI) with positron emission tomography (PET). It provides the examination functions of both PET and MR and offers advantages such as high sensitivity, good accuracy and a small radiation dose. However, the radioactivity of a standard dose of PET tracer carries a significant health risk and, through cumulative effects, increases the likelihood of various diseases.
Reducing the PET tracer dose is a viable solution, but its biggest problem is that low-dose PET images are noisy and lose detail, which is detrimental to disease diagnosis. At present, methods for predicting a standard-dose PET image from a low-dose PET image fall into two main classes: traditional sparse-representation algorithms, chiefly dictionary learning, and deep learning with neural networks, chiefly CNNs and GAN adversarial networks. However, existing methods still suffer from noise and loss of detail in the images and, because PET images have relatively high resolution, from heavy computation and slow iteration during prediction.
Disclosure of Invention
The application provides a PET image reconstruction method based on joint dictionary learning and a deep network. It uses training samples and a constructed joint dictionary to acquire the sparse coefficients corresponding to the low-dose, MR and standard-dose data, introduces these sparse coefficients into a DNN network as sample vectors, and maps the low-dose sample vectors to standard-dose sample vectors. This avoids the noise and loss of image detail caused by reducing the PET tracer dose during imaging, while realizing the prediction of a standard-dose PET image from a low-dose PET image.
In order to achieve the above purpose, the PET image reconstruction method based on joint dictionary learning and a deep network comprises the following steps:
obtaining a training sample, wherein the training sample comprises a low dose patch and an MR patch and a standard dose patch corresponding to the low dose patch;
obtaining a joint dictionary by utilizing dictionary learning according to a training sample, wherein the joint dictionary comprises a low-dose dictionary, an MR dictionary and a standard-dose dictionary;
obtaining a low-dose sample vector, an MR sample vector and a standard-dose sample vector, which are the sparse coefficients of the low-dose patch, the MR patch and the standard-dose patch under the low-dose dictionary, the MR dictionary and the standard-dose dictionary, respectively;
constructing a DNN network;
training the DNN network according to the low-dose sample vector, the MR sample vector and the standard-dose sample vector until convergence, and obtaining a mapping model from the low-dose sample vector to the standard-dose sample vector;
preprocessing the low-dose PET image and the corresponding MR image thereof, and predicting by using the acquired joint dictionary and the trained DNN network to obtain the standard-dose PET image.
Further, training the DNN network according to the low-dose sample vector, the MR sample vector and the standard-dose sample vector until convergence, and obtaining a mapping model of the low-dose sample vector to the standard-dose sample vector specifically includes:
and taking the low-dose sample vector and the MR sample vector as inputs of the DNN network, taking the standard-dose sample vector as a result, training the DNN network until convergence, and obtaining a mapping model of the low-dose sample vector to the standard-dose sample vector.
Further, in the step of constructing the DNN network, the DNN network includes an input layer, a hidden layer and an output layer, where the hidden layer adopts a 3-layer network, and the number of neurons in each layer is 2048.
Further, dictionary learning adopts a mode of alternately acquiring sparse coefficients and updating a joint dictionary, and the method specifically comprises the following steps:
constructing an initialized joint dictionary;
acquiring sparse coefficients according to the initialized joint dictionary;
splitting the initialized joint dictionary into a low-dose dictionary, an MR dictionary and a standard-dose dictionary, and respectively updating the low-dose dictionary, the MR dictionary and the standard-dose dictionary by using the acquired sparse coefficients; combining the updated low-dose dictionary, the MR dictionary and the standard dose dictionary into a joint dictionary;
and carrying out iterative updating on the sparse coefficient and the joint dictionary until convergence.
Further, when the joint dictionary is acquired, the sparse coefficients are obtained by the OMP (orthogonal matching pursuit) method, and the dictionary is updated by the K-SVD method.
Further, before the step of constructing the DNN network, the method further comprises: preprocessing the low-dose sample vector, the MR sample vector and the standard-dose sample vector;
the preprocessing step comprises: combining the nonzero sparse indices of the low-dose patch, the MR patch and the standard-dose patch with their corresponding sparse coefficients to form vectors, which are taken as the low-dose sample vector, the MR sample vector and the standard-dose sample vector, respectively.
Further, the step of obtaining a training sample, wherein the training sample includes a low dose patch and an MR patch and a standard dose patch corresponding to the low dose patch specifically includes:
acquiring a low-dose PET image, and an MR image and a standard-dose PET image corresponding to the low-dose PET image;
randomly selecting small blocks from the low-dose PET image and extending the small blocks into one-dimensional vectors to serve as low-dose patches, and simultaneously selecting the small blocks from corresponding positions in the MR image and the standard-dose image and extending the small blocks into one-dimensional vectors to serve as MR patches and standard-dose patches.
Further, before the step of acquiring the joint dictionary by using dictionary learning according to the training sample, the method further comprises:
repeated low dose patches and their corresponding MR patches and standard dose patches are removed from the training samples.
Further, the specific steps of preprocessing the low-dose PET image and the corresponding MR image thereof, and predicting the standard-dose PET image by using the acquired joint dictionary and the trained DNN network include:
blocking the low-dose PET image and the MR image with a certain step length, and extending the blocks into one-dimensional block vectors;
combining the block vector with the low-dose dictionary and the MR dictionary to obtain the sparse coefficient corresponding to the block vector, and inputting it as a low-dose sample vector into the trained DNN network model to obtain a predicted standard-dose sample vector;
combining the obtained standard dose sample vector with a standard dose dictionary to obtain a standard dose image block, and combining the standard dose image block according to a set step length to obtain a predicted standard dose PET image.
Further, before the step of inputting the sparse coefficient as a low-dose sample vector into the trained DNN network model to obtain a predicted standard-dose sample vector, the method further comprises: preprocessing the obtained sparse coefficient;
the preprocessing step comprises: combining the nonzero sparse indices of the low-dose patch with their corresponding sparse coefficients to form a vector, which is taken as the low-dose sample vector.
Compared with the prior art, the application has the following beneficial effects: the standard dose PET image is restored from the low dose PET image based on the combined dictionary learning and DNN network, so that the defect that details cannot be reserved in the denoising process by the traditional method is overcome; according to the application, the low-dose sparse coefficient matrix is not directly combined with the standard dose dictionary to conduct image prediction, but the sparse coefficient matrix obtained by dictionary learning is combined with the DNN network to obtain the standard dose sparse coefficient matrix, and then the standard dose dictionary is combined to conduct image prediction, so that the similarity between a predicted image and a real standard dose PET image is improved; meanwhile, the technology introduces MR image prior, and effectively improves the effect of the predicted image.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of dictionary learning principles;
FIG. 2 is a schematic diagram of dictionary updating in accordance with the present application;
FIG. 3 is a schematic diagram of sample vector acquisition according to the present application;
FIG. 4 is a schematic diagram of a DNN network;
FIG. 5 is a comparison of a low dose PET image, a standard dose PET image, and a reconstructed image obtained by the method of the present application;
fig. 6 is a flow chart of the reconstruction method of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
As shown in fig. 6, the embodiment of the application provides a PET image reconstruction method based on joint dictionary learning and a deep network; the specific steps are as follows:
s1: obtaining training samples
S101: a training PET image is acquired, the training PET image including a low dose PET image, an MR image, and a standard dose PET image. PET images are a set of continuous tomographic images of the human body.
S102: randomly selecting small blocks from the low-dose PET image and extending the small blocks into one-dimensional vectors to obtain a low-dose patch, and simultaneously selecting small blocks from corresponding positions in the MR image and the standard dose image and extending the small blocks into one-dimensional vectors to obtain the MR patch and the standard dose patch.
S103: removing repeated patches, and removing patches which are unfavorable for acquiring the joint dictionary according to the variance; the processed low dose patch and its corresponding MR patch and standard dose patch are then used as training samples.
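Steps S101 to S103 can be sketched as follows. The patch size, the number of patches and the variance threshold are illustrative assumptions; the patent does not fix these values.

```python
import numpy as np

def extract_patches(low, mr, std, patch_size=4, n_patches=1000, min_var=1e-6, seed=0):
    """Randomly pick co-located small blocks from the low-dose PET, MR and
    standard-dose PET images and flatten each block into a 1-D vector."""
    rng = np.random.default_rng(seed)
    h, w = low.shape
    lows, mrs, stds = [], [], []
    for _ in range(n_patches):
        i = int(rng.integers(0, h - patch_size + 1))
        j = int(rng.integers(0, w - patch_size + 1))
        lows.append(low[i:i + patch_size, j:j + patch_size].ravel())
        mrs.append(mr[i:i + patch_size, j:j + patch_size].ravel())
        stds.append(std[i:i + patch_size, j:j + patch_size].ravel())
    L, M, S = np.asarray(lows), np.asarray(mrs), np.asarray(stds)
    # Drop repeated low-dose patches together with their MR/standard partners,
    # and screen out near-constant patches by variance (threshold is assumed).
    _, first = np.unique(L, axis=0, return_index=True)
    keep = np.sort(first)
    keep = keep[L[keep].var(axis=1) >= min_var]
    return L[keep], M[keep], S[keep]
```

Each returned row is one training patch; the three arrays stay aligned so a low-dose patch keeps its MR and standard-dose partners.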
S2: and acquiring a joint dictionary according to the training sample and in an alternating manner of sparse coefficient and dictionary updating, wherein the joint dictionary comprises a low-dose dictionary, an MR dictionary and a standard-dose dictionary. The dictionary learning principle is shown in fig. 1.
Dictionary learning solves the following optimization problem:

$$\min_{D,X} \|Y - DX\|_F^2 \quad \text{s.t.} \quad \|x_i\|_0 \le T_0 \ \text{for every column } x_i \text{ of } X$$

where Y is the matrix of training samples (patches stacked as columns), X denotes the sparse coefficients and D denotes the feature matrix (dictionary). The main purpose of dictionary learning is to obtain the D and X that minimize this objective.
S201: Initializing the joint dictionary: a K-means clustering algorithm is applied to the training samples, and the K cluster centers are taken as the initialized joint dictionary; the dictionary also needs to be normalized.
S202: dictionary updating
Referring to fig. 2, in order to improve the sparse consistency of the dictionaries in the training result, the joint dictionary is first used to obtain the sparse coefficients; the joint dictionary is then split into a low-dose dictionary, an MR dictionary and a standard-dose dictionary, each of which is updated with the sparse coefficients; finally, the three updated dictionaries are combined into a new joint dictionary. This update is iterated until convergence.
The sparse coefficient is obtained by an OMP method, and the dictionary is updated by a KSVD method.
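A minimal sketch of these two ingredients, assuming unit-norm dictionary columns. Dimensions and the sparsity level are illustrative; a full implementation would code all training patches against the concatenated joint dictionary and sweep the update until convergence.

```python
import numpy as np

def omp(D, y, sparsity=3):
    """Orthogonal Matching Pursuit: greedily pick at most `sparsity` atoms
    (columns of the unit-norm dictionary D) to approximate the signal y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # re-fit on the support
    x[support] = coef
    return x

def ksvd_update(D, Y, X):
    """One K-SVD sweep: update each atom (and its nonzero coefficients) from
    the rank-1 SVD of the residual restricted to the signals that use it."""
    for k in range(D.shape[1]):
        users = np.nonzero(X[k])[0]
        if users.size == 0:
            continue
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]
        X[k, users] = s[0] * Vt[0]
    return D, X
```

Alternating `omp` over all patches with a `ksvd_update` sweep reproduces the "acquire sparse coefficients, then update the dictionary" loop described above.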
S3: obtaining a low dose sample vector and a standard dose sample vector
The low-dose, MR and standard-dose sample vectors are the sparse coefficients of the low-dose patch, the MR patch and the standard-dose patch under their corresponding low-dose, MR and standard-dose dictionaries. The sparsity of each sparse coefficient is 3, i.e., a coefficient vector has at most 3 nonzero values.
The obtained sparse coefficients can be used directly as sample vectors; alternatively, the nonzero sparse indices are combined with their corresponding sparse coefficients into a shorter vector, which is used as the sample vector. The specific method is shown in fig. 3.
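This compaction can be sketched as follows. The exact layout (indices first, then values, zero-padded) is an assumption; the patent's fig. 3 shows the arrangement actually used.

```python
import numpy as np

def to_sample_vector(sparse_coef, sparsity=3):
    """Compact a long sparse coefficient vector into a short fixed-length
    sample vector: the (at most `sparsity`) nonzero indices followed by
    their coefficient values, zero-padded to a constant length."""
    idx = np.flatnonzero(sparse_coef)[:sparsity]
    vals = sparse_coef[idx]
    out = np.zeros(2 * sparsity)
    out[:idx.size] = idx
    out[sparsity:sparsity + vals.size] = vals
    return out
```

With sparsity 3 this turns a dictionary-sized coefficient vector into a 6-element sample vector, which keeps the DNN input small.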
S4: referring to fig. 4, a DNN network is constructed to achieve a mapping from low dose sparse coefficients to standard dose sparse coefficients.
The DNN network comprises an input layer, a hidden layer and an output layer, wherein the hidden layer adopts a 3-layer network, and the number of neurons in each layer is 2048.
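The layer structure just described (input layer, three hidden layers of 2048 neurons, output layer) can be sketched as a plain NumPy forward pass. The ReLU activation and He-style initialization are assumptions; in practice the network would be built and trained in a deep-learning framework.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def make_dnn(in_dim, out_dim, hidden=2048, n_hidden=3, seed=0):
    """Build the weight/bias list for an MLP with `n_hidden` hidden layers
    of `hidden` neurons each, matching the architecture described above."""
    rng = np.random.default_rng(seed)
    dims = [in_dim] + [hidden] * n_hidden + [out_dim]
    return [(rng.standard_normal((d_in, d_out)) * np.sqrt(2.0 / d_in),
             np.zeros(d_out)) for d_in, d_out in zip(dims[:-1], dims[1:])]

def forward(params, x):
    """Forward pass: ReLU on the hidden layers, linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = relu(x)
    return x
```

The input dimension corresponds to the concatenated low-dose and MR sample vectors, and the output dimension to the standard-dose sample vector.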
S5: and taking the low-dose sample vector and the MR sample vector as inputs of the DNN network, taking the standard-dose sample vector as a result, training the DNN network until convergence, and obtaining a mapping model from the low-dose sample vector to the standard-dose sample vector.
S6: reconstruction of standard dose PET images
Blocking the low-dose PET image and the MR image with the step length of 1, and extending the blocks into one-dimensional block vectors;
splitting the joint dictionary into a low-dose dictionary, an MR dictionary and a standard-dose dictionary;
combining the block vectors with the trained low-dose dictionary and MR dictionary to obtain the sparse coefficient corresponding to each block vector, and inputting it as a low-dose sample vector into the trained DNN network model to obtain a predicted standard-dose sample vector; the obtained sparse coefficient can be used directly as the low-dose sample vector, or the nonzero sparse indices can be combined with their corresponding sparse coefficients into a vector used as the low-dose sample vector;
combining the predicted standard dose sample vector with a standard dose dictionary to obtain a standard dose image block, and combining the standard dose image block according to a set step length to obtain a predicted standard dose PET image.
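The prediction-and-recombination loop of S6 can be sketched as follows. Here `predict_patch` is a hypothetical stand-in for the whole sparse-coding, DNN-mapping and standard-dose-dictionary chain, and overlapping patch predictions are averaged, which is one common recombination choice when the step length is smaller than the patch size.

```python
import numpy as np

def reconstruct(predict_patch, low, mr, patch_size=4, step=1):
    """Slide a window with the given step over the low-dose and MR images,
    predict each standard-dose patch, and average overlapping predictions
    back into a full image."""
    h, w = low.shape
    out = np.zeros((h, w))
    weight = np.zeros((h, w))
    for i in range(0, h - patch_size + 1, step):
        for j in range(0, w - patch_size + 1, step):
            lp = low[i:i + patch_size, j:j + patch_size].ravel()
            mp = mr[i:i + patch_size, j:j + patch_size].ravel()
            sp = predict_patch(lp, mp).reshape(patch_size, patch_size)
            out[i:i + patch_size, j:j + patch_size] += sp
            weight[i:i + patch_size, j:j + patch_size] += 1.0
    return out / np.maximum(weight, 1.0)
```

With step length 1, as in this embodiment, every pixel receives several overlapping predictions and the averaging suppresses blocking artifacts.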
Performance test of the reconstruction method: using 1000 head low-dose PET images as samples and 50 training iterations to obtain the joint dictionary, the reconstructed image (here without using deep learning to map the sparse matrix) is shown in fig. 5. Compared with the low-dose PET image, the noise of the image reconstructed by this method is obviously reduced, and the reconstructed image is closer to the standard-dose PET image.
In tests on 18 PET images, the average PSNR improved from 29.65 to 30.86, and the reconstructed images are closer to the standard-dose PET images than the original low-dose PET images, achieving the desired effect.
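The PSNR figures quoted above follow the standard definition; taking the data range from the reference image is an assumption, since the patent does not state it.

```python
import numpy as np

def psnr(img, ref, data_range=None):
    """Peak signal-to-noise ratio in dB between an image and a reference."""
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)
```

A higher PSNR means the reconstruction is closer to the standard-dose reference, so the reported rise from 29.65 to 30.86 dB corresponds to a lower mean squared error.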
The present application is not limited to the above embodiments, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application should be subject to the protection scope of the claims.

Claims (9)

1. The PET image reconstruction method based on joint dictionary learning and a deep network is characterized by comprising the following steps of:
obtaining a training sample, wherein the training sample comprises a low dose patch and an MR patch and a standard dose patch corresponding to the low dose patch;
obtaining a joint dictionary by utilizing dictionary learning according to a training sample, wherein the joint dictionary comprises a low-dose dictionary, an MR dictionary and a standard-dose dictionary;
obtaining a low-dose sample vector, an MR sample vector and a standard-dose sample vector, which are the sparse coefficients of the low-dose patch, the MR patch and the standard-dose patch under the low-dose dictionary, the MR dictionary and the standard-dose dictionary, respectively;
constructing a DNN network;
training the DNN network according to the low-dose sample vector, the MR sample vector and the standard-dose sample vector until convergence, and obtaining a mapping model from the low-dose sample vector to the standard-dose sample vector;
preprocessing a low-dose PET image and a corresponding MR image thereof, and predicting by using the acquired joint dictionary and a trained DNN network to obtain a standard-dose PET image, wherein the specific steps comprise:
blocking the low-dose PET image and the MR image with a certain step length, and extending the blocks into one-dimensional block vectors;
combining the block vector with the low-dose dictionary and the MR dictionary to obtain the sparse coefficient corresponding to the block vector, and inputting it as a low-dose sample vector into the trained DNN network model to obtain a predicted standard-dose sample vector;
combining the obtained standard dose sample vector with a standard dose dictionary to obtain a standard dose image block, and combining the standard dose image block according to a set step length to obtain a predicted standard dose PET image.
2. The method for reconstructing a PET image based on a joint dictionary learning and depth network according to claim 1, wherein the step of training the DNN network according to the low-dose sample vector, the MR sample vector and the standard-dose sample vector until convergence, to obtain a mapping model of the low-dose sample vector to the standard-dose sample vector specifically comprises:
and taking the low-dose sample vector and the MR sample vector as inputs of the DNN network, taking the standard-dose sample vector as a result, training the DNN network until convergence, and obtaining a mapping model of the low-dose sample vector to the standard-dose sample vector.
3. The PET image reconstruction method based on the joint dictionary learning and depth network according to claim 2, wherein in the step of constructing the DNN network, the DNN network comprises an input layer, a hidden layer and an output layer, the hidden layer adopts a 3-layer network, and the number of neurons in each layer is 2048.
4. The method for reconstructing a PET image based on joint dictionary learning and depth network according to claim 2, wherein the dictionary learning adopts a mode of alternately acquiring sparse coefficients and updating the joint dictionary, and the specific steps include:
constructing an initialized joint dictionary;
acquiring sparse coefficients according to the initialized joint dictionary;
splitting the initialized joint dictionary into a low-dose dictionary, an MR dictionary and a standard-dose dictionary, and respectively updating the low-dose dictionary, the MR dictionary and the standard-dose dictionary by using the acquired sparse coefficients; combining the updated low-dose dictionary, the MR dictionary and the standard dose dictionary into a joint dictionary; and carrying out iterative updating on the sparse coefficient and the joint dictionary until convergence.
5. The method for reconstructing a PET image based on joint dictionary learning and depth network according to claim 4, wherein when the joint dictionary is acquired, the first sparse coefficient is acquired by an OMP method, and the dictionary update is performed by a KSVD method.
6. The method for PET image reconstruction based on joint dictionary learning and depth network according to claim 5, further comprising, before the step of constructing the DNN network: preprocessing the low-dose sample vector, the MR sample vector and the standard-dose sample vector;
the preprocessing step comprises: combining the nonzero sparse indices of the low-dose patch, the MR patch and the standard-dose patch with their corresponding sparse coefficients to form vectors, which are taken as the low-dose sample vector, the MR sample vector and the standard-dose sample vector, respectively.
7. The method for PET image reconstruction based on joint dictionary learning and depth network according to claim 1, wherein the step of obtaining training samples, wherein the training samples include low dose patches and MR patches and standard dose patches corresponding thereto specifically includes:
acquiring a low-dose PET image, and an MR image and a standard-dose PET image corresponding to the low-dose PET image;
randomly selecting small blocks from the low-dose PET image and extending the small blocks into one-dimensional vectors to serve as low-dose patches, and simultaneously selecting the small blocks from corresponding positions in the MR image and the standard-dose image and extending the small blocks into one-dimensional vectors to serve as MR patches and standard-dose patches.
8. The method for PET image reconstruction based on joint dictionary learning and depth network according to claim 7, further comprising, before the step of acquiring the joint dictionary using dictionary learning from the training sample:
repeated low dose patches and their corresponding MR patches and standard dose patches are removed from the training samples.
9. The method for PET image reconstruction based on joint dictionary learning and depth network according to claim 1, further comprising, before the step of inputting the sparse coefficient as a low-dose sample vector into the trained DNN network model to obtain a predicted standard-dose sample vector: preprocessing the obtained sparse coefficient;
the preprocessing step comprises: combining the nonzero sparse indices of the low-dose patch with their corresponding sparse coefficients to form a vector, which is taken as the low-dose sample vector.
CN202110730163.5A 2021-06-29 2021-06-29 PET image reconstruction method based on joint dictionary learning and depth network Active CN113450427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110730163.5A CN113450427B (en) 2021-06-29 2021-06-29 PET image reconstruction method based on joint dictionary learning and depth network


Publications (2)

Publication Number Publication Date
CN113450427A CN113450427A (en) 2021-09-28
CN113450427B true CN113450427B (en) 2023-09-01

Family

ID=77814057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110730163.5A Active CN113450427B (en) 2021-06-29 2021-06-29 PET image reconstruction method based on joint dictionary learning and depth network

Country Status (1)

Country Link
CN (1) CN113450427B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107845120A (en) * 2017-09-27 2018-03-27 深圳先进技术研究院 PET image reconstruction method, system, terminal and readable storage medium storing program for executing
CN109166161A (en) * 2018-07-04 2019-01-08 东南大学 A kind of low-dose CT image processing system inhibiting convolutional neural networks based on noise artifacts
CN109741254A (en) * 2018-12-12 2019-05-10 深圳先进技术研究院 Dictionary training and Image Super-resolution Reconstruction method, system, equipment and storage medium
CN111311704A (en) * 2020-01-21 2020-06-19 上海联影智能医疗科技有限公司 Image reconstruction method and device, computer equipment and storage medium
CN112488949A (en) * 2020-12-08 2021-03-12 深圳先进技术研究院 Low-dose PET image restoration method, system, equipment and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931179B (en) * 2016-04-08 2018-10-26 武汉大学 A kind of image super-resolution method and system of joint sparse expression and deep learning
US20180197317A1 (en) * 2017-01-06 2018-07-12 General Electric Company Deep learning based acceleration for iterative tomographic reconstruction


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Low-dose X-ray CT image denoising based on dictionary learning; Zhu Yongcheng; Chen Yang; Luo Limin; Toumoulin Christine; Journal of Southeast University (Natural Science Edition), No. 05, pp. 864-868 *


Similar Documents

Publication Publication Date Title
CN111709953B (en) Output method and device in lung lobe segment segmentation of CT (computed tomography) image
CN109035284B (en) Heart CT image segmentation method, device, equipment and medium based on deep learning
Fu et al. A deep learning reconstruction framework for differential phase-contrast computed tomography with incomplete data
US11605162B2 (en) Systems and methods for determining a fluid and tissue volume estimations using electrical property tomography
Wang et al. MMNet: A multi-scale deep learning network for the left ventricular segmentation of cardiac MRI images
Chen et al. Generative adversarial U-Net for domain-free medical image augmentation
Nguyen et al. 3D Unet generative adversarial network for attenuation correction of SPECT images
CN115512110A (en) Medical image tumor segmentation method related to cross-modal attention mechanism
CN114332287A (en) Method, device, equipment and medium for reconstructing PET (positron emission tomography) image based on transformer feature sharing
He Automated detection of intracranial hemorrhage on head computed tomography with deep learning
Deng et al. A strategy of MR brain tissue images' suggestive annotation based on modified U-net
Wang et al. IGNFusion: an unsupervised information gate network for multimodal medical image fusion
Li et al. Learning non-local perfusion textures for high-quality computed tomography perfusion imaging
Wang et al. Deep transfer learning-based multi-modal digital twins for enhancement and diagnostic analysis of brain mri image
Davamani et al. Biomedical image segmentation by deep learning methods
CN115868923A (en) Fluorescence molecule tomography method and system based on expanded cyclic neural network
Fu et al. A two-branch neural network for short-axis PET image quality enhancement
CN114358285A (en) PET system attenuation correction method based on flow model
Nadeem et al. A fully automated CT-based airway segmentation algorithm using deep learning and topological leakage detection and branch augmentation approaches
Zhang et al. Automatic parotid gland segmentation in MVCT using deep convolutional neural networks
Taher et al. Automatic cerebrovascular segmentation methods-a review
CN113450427B (en) PET image reconstruction method based on joint dictionary learning and depth network
Luo et al. Tissue segmentation in nasopharyngeal ct images using two-stage learning
CN116385809A (en) MRI brain tumor classification method and system based on semi-supervised learning
Penarrubia et al. Improving motion‐mask segmentation in thoracic CT with multiplanar U‐nets

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant