CN114332287A - Method, device, equipment and medium for reconstructing PET (positron emission tomography) image based on transformer feature sharing - Google Patents


Info

Publication number
CN114332287A
Authority
CN
China
Prior art keywords
image
pet
decoder
coder
pet image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210235862.7A
Other languages
Chinese (zh)
Other versions
CN114332287B (en)
Inventor
杨宝
朱闻韬
吴元锋
李少杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202210235862.7A priority Critical patent/CN114332287B/en
Publication of CN114332287A publication Critical patent/CN114332287A/en
Application granted granted Critical
Publication of CN114332287B publication Critical patent/CN114332287B/en
Priority to JP2023007950A priority patent/JP7246116B1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Nuclear Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a method, an apparatus, a device and a medium for reconstructing PET images based on transformer feature sharing. The PET image reconstruction network model consists of two encoder-decoder pairs: one establishes the mapping from the PET back-projection image to the reconstructed PET image, and the other establishes the mapping from the PET back-projection image to a prior-information image. The two pairs are optimized jointly, so that prior knowledge in the prior-information image reduces noise in the target PET image while preserving image detail. Between the two encoders, transformer units replace a convolution-based attention mechanism so that encoder parameter sharing is learned autonomously during training of the reconstruction network, further reducing the reconstruction error and improving the quality of the reconstructed PET image.

Description

Method, device, equipment and medium for reconstructing PET (positron emission tomography) image based on transformer feature sharing
Technical Field
The invention belongs to the technical field of medical imaging, and particularly relates to a method, an apparatus, a device and a medium for reconstructing PET (positron emission tomography) images based on transformer feature sharing.
Background
Positron Emission Tomography (PET) is a medical imaging modality that provides both biological functional/metabolic information and morphological anatomical information of the human body, and is widely used in clinical applications such as tumor diagnosis and the diagnosis and treatment of neuropsychiatric and cardiovascular diseases. A PET scan proceeds as follows: a radiotracer is injected into the patient before scanning; as the tracer participates in physiological metabolism, it decays and emits positrons; each positron annihilates with an electron in nearby tissue, producing a pair of high-energy photons travelling in opposite directions; coincidence detection of such a photon pair by two opposing detectors records a line of response; a sufficient number of lines of response are collected and arranged by position and angle into three-dimensional PET raw data; after the raw data are corrected, a reconstruction algorithm yields a three-dimensional PET image expressing the metabolic intensity of each tissue in the body.
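To make this data flow concrete, the following is a toy Python sketch of the binning step, arranging detected lines of response into a 2D sinogram by angle and radial offset; the function name, array layout and two-dimensional simplification are illustrative assumptions, not the patent's procedure.

```python
import numpy as np

def bin_lors(angles, offsets, n_angles=180, n_bins=128):
    """Arrange coincidence events (lines of response) into a 2D sinogram.

    Toy 2D sketch: each event is described by its LOR angle in [0, pi)
    and a normalized radial offset in [-1, 1]; real scanners bin into
    3D raw data and apply many further corrections.
    """
    sino = np.zeros((n_angles, n_bins))
    a_idx = np.clip((angles / np.pi * n_angles).astype(int), 0, n_angles - 1)
    r_idx = np.clip(((offsets + 1) / 2 * n_bins).astype(int), 0, n_bins - 1)
    np.add.at(sino, (a_idx, r_idx), 1)   # accumulate one count per event
    return sino
```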
With advances in computer hardware and the rapid development of deep learning, building PET image reconstruction networks based on deep learning has become a hot topic in recent years. Progress has been made with schemes in which encoders and decoders establish an end-to-end mapping from a low-quality PET image to a high-quality PET image, or from PET raw data to a PET image. However, existing reconstruction networks based on a single encoding-decoding structure cannot introduce, during reconstruction, prior knowledge about the PET image that exists in medical images of other modalities (such as computed tomography, CT, and magnetic resonance imaging, MRI), so the imaging quality of the PET image still has room for improvement.
Multi-task deep neural networks composed of multiple encoder-decoder pairs have been applied successfully to tasks such as segmentation and detection of natural and medical images. Such a network structure can introduce prior information from an auxiliary task into the main task; the common feature-sharing schemes are either a single shared encoder or learned encoder parameter sharing via a convolution-based spatial and channel attention mechanism. A learnable parameter-sharing mechanism can push network training toward a better solution than the forced parameter sharing of a common encoder. However, existing convolution-based shared-parameter learning lacks globality: it can only compute attention coefficients among local features, realizing merely local parameter sharing. On the other hand, transformer network structures, which replace the traditional convolution structure, have made pioneering progress in fields such as text translation and speech recognition. A transformer is composed of a globally acting multi-head self-attention mechanism and a multi-layer perceptron.
Disclosure of Invention
The invention aims to provide a method, an apparatus, a device and a medium for reconstructing PET images based on transformer feature sharing, addressing the shortcomings of existing deep-learning-based PET image reconstruction techniques.
Based on the structure and function of the transformer, the invention constructs a novel PET image reconstruction network model with transformer feature sharing. The model consists of two encoder-decoder pairs: one establishes the mapping from the PET back-projection image to the reconstructed PET image, the other establishes the mapping from the PET back-projection image to a prior-information image, and the reconstruction error of the PET reconstruction network is reduced by optimizing both pairs simultaneously with the help of the prior knowledge in the prior-information image. Between the two encoders, transformer units replace the convolution-based attention mechanism. Specifically, interleaved connections based on transformer units are designed to compute global channel and spatial attention coefficients, so that encoder parameter sharing is learned autonomously during training of the reconstruction network, further reducing the reconstruction error and improving the quality of the reconstructed PET image.
The technical solution of the invention is as follows:
a method for reconstructing a PET image based on transform feature sharing comprises the following steps:
acquiring a back projection image containing PET original data information;
and inputting the back projection image containing the PET original data information into a pre-trained PET image reconstruction network model based on transformer feature sharing to obtain a PET image.
The PET image reconstruction network model based on transformer feature sharing comprises a first encoder-decoder, a second encoder-decoder and at least one transformer unit. The first encoder-decoder maps the back-projection image to a predicted PET image; the second encoder-decoder maps the back-projection image to a predicted prior-information image. The prior-information image is an image of another modality acquired in the same session as the PET image. Each transformer unit connects corresponding convolution units in the encoding structures of the first and second encoder-decoders: it receives the outputs of those convolution units and sends the processed result to the next convolution unit of each encoding structure. Each transformer unit comprises an attention-mechanism module and a multi-layer perceptron module connected in sequence.
the pre-trained PET image reconstruction network model based on transform feature sharing is obtained by training through minimizing the loss of a predicted PET image generated by a first coder-decoder, a predicted prior information image generated by a second coder-decoder and a true value and maximizing the structural similarity between the predicted PET image generated by the first coder-decoder and the predicted prior information image generated by the second coder-decoder on the basis of a training data set.
Further, each set of training data in the training data set comprises: a back-projection image containing PET raw data information, a reconstructed PET image, and a prior-information image.
Further, the prior-information image is a CT image or an MRI image acquired in the same session as the PET image.
Further, the back-projection image containing PET raw data information is obtained as follows:
after normalization, attenuation correction, randoms correction and scatter correction, the PET raw data are back-projected into the image domain, yielding a back-projection image containing the raw data information.
Further, the PET image is obtained as follows:
the PET raw data, after physical corrections, are reconstructed iteratively to obtain the PET image.
Furthermore, the attention-mechanism module obtains a query vector group, a keyword vector group and a feature vector group from the concatenated outputs of the convolution units in the two encoding structures. It then computes, in turn, the inner product of each vector in the query group with all vectors in the keyword group; from these inner products it obtains the attention parameters of the corresponding vector with respect to itself and the other vectors; using these attention parameters, it computes a weighted sum of all vectors in the feature group to update the corresponding feature vector. After all feature vectors are updated in turn, the updated vectors are concatenated and projected to form the output features, which are added to the concatenated convolution-unit outputs as the final output of the attention-mechanism module.
A PET image reconstruction apparatus, the apparatus comprising:
the image acquisition module, configured to acquire a back-projection image containing PET raw data information;
the reconstruction module, configured to input the back-projection image containing PET raw data information into a pre-trained PET image reconstruction network model based on transformer feature sharing to obtain a PET image.
Further, the apparatus comprises:
the training module, configured to train, on the training data set, the pre-trained PET image reconstruction network model based on transformer feature sharing, with the objective of minimizing the losses between the predicted PET image generated by the first encoder-decoder and its ground truth and between the predicted prior-information image generated by the second encoder-decoder and its ground truth, while maximizing the structural similarity between the two predictions.
An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the PET image reconstruction method as described above when executing the computer program.
A storage medium containing computer executable instructions which, when executed by a computer processor, implement the PET image reconstruction method described above.
Compared with existing PET image reconstruction methods, the invention has the following beneficial effects. First, two encoder-decoder pairs are trained simultaneously to map PET back-projection images to both reconstructed PET images and medical images of another modality (such as CT or MRI); the prior information in the CT or MRI image improves the reconstruction quality of the PET image and the generalization ability of the reconstruction network. Second, transformer units replace convolutional layers in the interleaved connections between the two encoders, computing a global attention mechanism across different channels and spatial positions, which outperforms existing attention mechanisms realized by convolutional networks on local features. Third, feature sharing between the two encoders is realized by optimizing a loss function; compared with prior structures that share a single encoder, this selectively adds useful information and discards useless information, bringing the network closer to its optimal solution.
Drawings
Fig. 1 is a flowchart of a PET image reconstruction method based on transformer feature sharing according to an embodiment of the present invention;
fig. 2 is a structural diagram of a PET image reconstruction network model based on transformer feature sharing according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating operation of an attention mechanism module according to an embodiment of the present invention;
fig. 4 is a comparison of the reconstruction results of the transformer-feature-sharing PET image reconstruction method and a PET image reconstruction method based on a single encoder-decoder according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a PET reconstruction apparatus according to a second embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, like numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may be referred to as first information. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
Example one
Fig. 1 is a flowchart of a PET image reconstruction method based on transformer feature sharing according to an embodiment of the present invention; referring to fig. 1, the method specifically includes the following steps:
step 101: acquiring a back projection image containing PET original data information;
specifically, PET raw data is regularized, attenuated, randomized and scattering corrected, and then is back-projected to an image domain, so that a back-projected image containing raw data information is obtained.
Step 102: and inputting the back projection image containing the PET original data information into a pre-trained PET image reconstruction network model based on transformer feature sharing to obtain a PET image.
The PET image reconstruction network model based on transformer feature sharing comprises a first encoder-decoder, a second encoder-decoder and transformer units. The first encoder-decoder maps the back-projection image to a predicted PET image; the second encoder-decoder maps the back-projection image to a predicted prior-information image; the prior-information image is an image of another modality acquired in the same session as the PET image. Each transformer unit connects corresponding convolution units in the encoding structures of the two encoder-decoders: it receives their outputs and sends the processed result to the next convolution unit of each encoding structure. Each transformer unit comprises an attention-mechanism module and a multi-layer perceptron module connected in sequence.
The encoder-decoder is a commonly used neural network structure composed of several convolution units and deconvolution units.
By setting up two encoder-decoders that respectively map the back-projection image to a predicted PET image and a predicted prior-information image, the invention uses the prior knowledge in the prior-information image to reduce noise in the target PET image while preserving image detail. At the same time, transformer units, each consisting of a multi-head self-attention mechanism and a multi-layer perceptron, compute global attention coefficients; interleaved connections based on these transformer units are designed between the two groups of encoders, realizing global channel and spatial attention coefficient computation, so that encoder parameter sharing is learned autonomously during training of the reconstruction network, further reducing the reconstruction error and improving the quality of the reconstructed PET image.
The prior-information image is an image of another modality acquired in the same session as the PET image, such as a CT image or an MRI image. Because the acquisition interval between the prior-information image and the PET image is short, structural changes of organs and tissues in the body are negligible, so the two images share the same anatomical structure. Moreover, since the prior-information image has a higher resolution than the PET image, its local smoothness can be used to reduce noise inside the corresponding organs and tissues of the PET image while preserving structural detail.
The number of transformer units equals the number of convolution units in the encoding structures of the first and second encoder-decoders, so that encoder parameter sharing is learned autonomously for each group of convolution units, improving the reconstructed PET image quality to the greatest extent.
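The wiring described above can be sketched in PyTorch as follows. This is a schematic under stated assumptions: the channel widths, the use of spatial positions as attention tokens, and all hyperparameters are illustrative choices rather than the patent's specification; the transformer unit follows the attention-plus-MLP structure named in the text.

```python
import torch
import torch.nn as nn

class TransformerUnit(nn.Module):
    """Attention-mechanism module followed by a multi-layer perceptron."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):                       # x: (B, tokens, dim)
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))

def conv_unit(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class DualEncoderDecoder(nn.Module):
    """Two encoder-decoder branches whose per-level encoder outputs are
    fused by a transformer unit and split back to both branches."""
    def __init__(self, chans=(16, 32, 64)):
        super().__init__()
        dims = (1,) + tuple(chans)
        self.enc_pet = nn.ModuleList([conv_unit(dims[i], dims[i + 1])
                                      for i in range(len(chans))])
        self.enc_prior = nn.ModuleList([conv_unit(dims[i], dims[i + 1])
                                        for i in range(len(chans))])
        # One transformer unit per convolution unit, as in the text.
        self.units = nn.ModuleList([TransformerUnit(2 * c) for c in chans])
        self.dec_pet = self._decoder(chans)
        self.dec_prior = self._decoder(chans)

    @staticmethod
    def _decoder(chans):
        dims = list(reversed(chans)) + [1]
        layers = []
        for i in range(len(chans)):
            layers.append(nn.ConvTranspose2d(dims[i], dims[i + 1], 4,
                                             stride=2, padding=1))
            if i < len(chans) - 1:
                layers.append(nn.ReLU(inplace=True))
        return nn.Sequential(*layers)

    def forward(self, bp):                      # bp: (B, 1, H, W) back-projection
        f_pet = f_prior = bp
        for conv_p, conv_q, unit in zip(self.enc_pet, self.enc_prior, self.units):
            f_pet, f_prior = conv_p(f_pet), conv_q(f_prior)
            b, c, h, w = f_pet.shape
            # Concatenate both branches' outputs and fuse them globally.
            tokens = torch.cat([f_pet, f_prior], 1).flatten(2).transpose(1, 2)
            fused = unit(tokens).transpose(1, 2).reshape(b, 2 * c, h, w)
            f_pet, f_prior = fused.split(c, dim=1)  # shared features return to each branch
        return self.dec_pet(f_pet), self.dec_prior(f_prior)
```

Calling `model(bp)` on a batch of back-projection images returns the predicted PET image and the predicted prior-information image, matching the two learning labels described below (with these three stride-2 levels, input sizes must be divisible by 8).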
The output of a convolution unit in the encoding structure consists of multiple channels, and in a preferred embodiment the attention-mechanism module processes each channel separately, as shown in fig. 3, specifically as follows:
from the concatenated outputs of the convolution units in the two encoding structures, the data of the $i$-th channel are projected by matrices $W_i^{qry}$, $W_i^{key}$ and $W_i^{val}$ into a query vector group, a keyword vector group and a feature vector group, respectively. Each vector in the query group is then inner-producted, in turn, with all vectors in the keyword group; the resulting attention parameters of the corresponding vector with respect to itself and the other vectors are used to compute a weighted sum of all vectors in the feature group, updating that feature vector. All feature vectors are updated in turn, and the same steps are repeated to update the feature vector groups of the other channels. The updated feature vector groups of all channels are concatenated and projected by a matrix $W^{o}$ to obtain output feature 1, which is added to the concatenated convolution-unit outputs as the final output of the attention-mechanism module.
The feature vector update for a single vector of one channel can be expressed as:

$$\hat{f}_i = \mathrm{softmax}\!\left(\frac{\left(f_i W_i^{qry}\right)\left(f_i W_i^{key}\right)^{T}}{\sqrt{d}}\right) f_i W_i^{val}$$

where $f_i$ is the concatenation of the single-channel outputs of the convolution units in the first and second encoder-decoders, and $f_i W_i^{qry}$, $f_i W_i^{key}$ and $f_i W_i^{val}$ are the three projection operations. $\mathrm{softmax}(\cdot)$ is the normalizing nonlinear activation function, and $d$ is the dimension of the single-channel feature vector group. The attention-mechanism module proposed here replaces a convolutional layer and is interleaved between the two groups of encoders, computing a global attention mechanism across different channels and different spatial positions; it outperforms existing attention mechanisms realized by convolutional networks on local features.
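A direct transcription of this per-channel update in PyTorch follows, for one channel's vector group; the token layout and matrix shapes are assumptions of the sketch.

```python
import torch
import torch.nn.functional as F

def channel_attention_update(f, W_qry, W_key, W_val):
    """Per-channel feature update matching the formula above.

    f: (n, d) tensor holding the n single-channel feature vectors built by
    concatenating the two encoders' convolution-unit outputs (the exact
    token layout is an assumption of this sketch).
    W_qry, W_key, W_val: (d, d) learned projection matrices.
    """
    q = f @ W_qry                            # query vector group
    k = f @ W_key                            # keyword vector group
    v = f @ W_val                            # feature (value) vector group
    d = f.shape[-1]
    # Inner product of every query with all keys, softmax-normalized,
    # gives each vector's global attention parameters.
    attn = F.softmax(q @ k.T / d ** 0.5, dim=-1)
    return attn @ v                          # weighted sum updates each vector

# After this runs for every channel, the updated groups are concatenated,
# projected by W_o, and added to the convolution-unit outputs to form the
# module's final output, as described in the text above.
```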
The pre-trained PET image reconstruction network model based on transformer feature sharing is obtained by training, on a training data set, with the objective of minimizing the losses between the predicted PET image generated by the first encoder-decoder and its ground truth and between the predicted prior-information image generated by the second encoder-decoder and its ground truth, while maximizing the structural similarity between the two predictions.
Specifically, each set of training data in the training data set comprises a back-projection image containing PET raw data information, a reconstructed PET image, and a prior-information image. The back-projection image is the input of the transformer-feature-sharing PET image reconstruction network model; the reconstructed PET image serves as the learning label of the first encoder-decoder (the ground truth for its predicted PET image); and the prior-information image serves as the learning label of the second encoder-decoder (the ground truth for its predicted prior-information image).
The data used as learning labels must characterize the distribution of the PET radiotracer in the organs of the body as accurately as possible; in this embodiment the PET image is obtained as follows:
the PET raw data, after physical corrections, are reconstructed iteratively to obtain the PET image.
Further, during training, the parameters of the first and second encoder-decoders, interleaved by the transformer units, are updated with a gradient optimization algorithm to minimize the losses between the predictions and their ground truths and to maximize the structural similarity between the first encoder-decoder's predicted PET image and the second encoder-decoder's predicted prior-information image. The loss is defined as:
$$L = \mathcal{L}\left(\hat{I}_{PET},\, I_{PET}\right) + \mathcal{L}\left(\hat{I}_{CT},\, I_{CT}\right) - \beta \cdot \mathrm{SSIM}\left(\hat{I}_{PET},\, \hat{I}_{CT}\right)$$

where $\mathcal{L}(\cdot,\cdot)$ is a data-fidelity loss (for example, the mean squared error), $\hat{I}_{PET}$ is the output of the first encoder-decoder and $I_{PET}$ its learning label, i.e. the iteratively reconstructed PET image, while $\hat{I}_{CT}$ is the output of the second encoder-decoder and $I_{CT}$ its learning label, a CT image in this embodiment. $\mathrm{SSIM}(\cdot)$ is the structural similarity index measurement function, used to compute the structural similarity between the outputs of the two encoder-decoders, and $\beta$ is a weighting coefficient that adjusts the importance of the structural similarity term in the loss function. Training both encoder-decoder pairs simultaneously with this loss function realizes the mapping from PET back-projection images to both reconstructed PET images and medical images of another modality (such as CT or MRI); the prior information in the CT or MRI image improves the reconstruction quality of the PET image and the generalization ability of the reconstruction network. In addition, the training learns the feature sharing between the two encoders autonomously; compared with existing network structures that share a single encoder, this selectively adds useful information and discards useless information, bringing the encoder-decoder outputs closer to the optimal solution.
Fig. 4 compares, on a whole-body patient PET image, the reconstruction results of the transformer-feature-sharing PET image reconstruction method and a PET image reconstruction method based on a single encoder-decoder, where the PET back-projection image is the input and a conventional iterative reconstruction is the learning label. The result of the single encoder-decoder is shown in fig. 4 (a) and is very close to the conventional iterative reconstruction. The result of the proposed transformer-feature-sharing multi-encoding-decoding network is shown in fig. 4 (b): the noise in the generated PET image is markedly lower than in fig. 4 (a) while the contrast of the lung tumor (arrow) is not reduced, significantly improving the quality of the reconstructed PET image.
Example two
Corresponding to the foregoing embodiment of the PET image reconstruction method based on transformer feature sharing, this embodiment further provides a PET image reconstruction apparatus based on transformer feature sharing. Fig. 5 is a schematic structural diagram of the PET reconstruction apparatus according to the second embodiment of the present invention; referring to fig. 5, the apparatus comprises an image acquisition module 110 and a reconstruction module 120:
the image acquisition module 110 is configured to acquire a back-projection image containing PET raw data information;
the reconstruction module 120 is configured to input the back-projection image containing PET raw data information into the pre-trained PET image reconstruction network model based on transformer feature sharing to obtain a PET image.
The structure of the PET image reconstruction network model based on transformer feature sharing is shown in fig. 2 and is not repeated here.
Further, the apparatus comprises a training module 130, configured to train, on the training data set, the pre-trained PET image reconstruction network model based on transformer feature sharing, with the objective of minimizing the losses between the predicted PET image generated by the first encoder-decoder and its ground truth and between the predicted prior-information image generated by the second encoder-decoder and its ground truth, while maximizing the structural similarity between the two predictions.
The PET image reconstruction apparatus provided by this embodiment can significantly improve the quality of the reconstructed PET image.
EXAMPLE III
Corresponding to the foregoing embodiment of the PET image reconstruction method based on transformer feature sharing, this embodiment further provides an electronic device. Fig. 6 is a schematic structural diagram of an electronic device according to the third embodiment of the present invention; referring to fig. 6, the electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the PET image reconstruction method based on transformer feature sharing when executing the computer program.
The electronic device of the present invention may be any device having data processing capability, for example a computer.
In hardware terms, such a device is formed, in the logical sense, by its processor reading the corresponding computer program instructions from non-volatile memory into memory and running them. As shown in fig. 6, in addition to the processor, memory, network interface and non-volatile memory illustrated there, any device with data processing capability in this embodiment may generally include other hardware according to its actual function.
The implementation process of the functions and actions of each unit in the electronic device is specifically described in the implementation process of the corresponding step in the method, and is not described herein again.
Since the electronic device embodiment basically corresponds to the method embodiment, reference may be made to the relevant parts of the method embodiment. The electronic device embodiments described above are merely illustrative: units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the present scheme. Those of ordinary skill in the art can understand and implement this without inventive effort.
An embodiment of the present invention further provides a computer-readable storage medium on which a program is stored; when executed by a processor, the program implements the PET image reconstruction method based on transformer feature sharing of the above embodiments.
The computer-readable storage medium may be an internal storage unit of any device with data processing capability described in the foregoing embodiments, such as a hard disk or memory, or an external storage device of such a device, such as a plug-in hard disk, a Smart Media Card (SMC), an SD card or a flash card (Flash Card) provided on the device. Further, it may include both the internal storage unit and an external storage device of the device. The computer-readable storage medium is used to store the computer program and the other programs and data required by the device, and may also be used to temporarily store data that has been output or is to be output.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A method for reconstructing PET images based on transformer feature sharing, characterized by comprising the following steps:
acquiring a back-projection image containing PET raw data information;
inputting the back-projection image containing PET raw data information into a pre-trained PET image reconstruction network model based on transformer feature sharing to obtain a PET image;
wherein the PET image reconstruction network model based on transformer feature sharing comprises a first encoder-decoder, a second encoder-decoder and at least one transformer unit; the first encoder-decoder is used for mapping the back-projection image to a predicted PET image; the second encoder-decoder is used for mapping the back-projection image to a predicted prior-information image; the prior-information image is an image of another modality acquired in the same session as the PET image; the transformer unit connects the convolution units in the encoding structures of the first and second encoder-decoders, receives the outputs of those convolution units, and sends the processed result to the next convolution unit of each encoding structure; each transformer unit comprises an attention-mechanism module and a multi-layer perceptron module connected in sequence;
the pre-trained PET image reconstruction network model based on transformer feature sharing is obtained by training, on a training data set, with the objective of minimizing the losses between the predicted PET image generated by the first encoder-decoder and its ground truth and between the predicted prior-information image generated by the second encoder-decoder and its ground truth, while maximizing the structural similarity between the predicted PET image and the predicted prior-information image.
2. The method of claim 1, wherein each set of training data in the training data set comprises: a back-projection image containing PET raw data information, a reconstructed PET image, and a prior-information image.
3. The method of claim 1, wherein the prior-information image is a CT image or an MRI image acquired in the same session as the PET image.
4. The method of claim 1, wherein the back-projection image containing PET raw data information is obtained by:
after normalization, attenuation correction, randoms correction and scatter correction, back-projecting the PET raw data into the image domain to obtain the back-projection image containing the raw data information.
5. The method of claim 1, wherein the PET image is obtained by:
iteratively reconstructing the physically corrected PET raw data to obtain the PET image.
6. The method according to claim 1, wherein the attention-mechanism module is configured to: obtain a query vector group, a keyword vector group and a feature vector group from the concatenated outputs of the convolution units in the encoding structures of the first and second encoder-decoders; compute, in turn, the inner product of each vector in the query group with all vectors in the keyword group; obtain, from the inner products, the attention parameters of the corresponding vector with respect to itself and the other vectors; compute, using the obtained attention parameters, a weighted sum of all vectors in the feature vector group to update the corresponding feature vector; update all feature vectors in turn; and concatenate the updated feature vectors and project them into output features, which are added to the concatenated convolution-unit outputs as the final output of the attention-mechanism module.
7. A PET image reconstruction apparatus, characterized in that the apparatus comprises:
the image acquisition module, configured to acquire a back-projection image containing PET raw data information;
the reconstruction module, configured to input the back-projection image containing PET raw data information into a pre-trained PET image reconstruction network model based on transformer feature sharing to obtain a PET image.
8. The apparatus of claim 7, further comprising:
the training module, configured to train, on a training data set, the pre-trained PET image reconstruction network model based on transformer feature sharing, with the objective of minimizing the losses between the predicted PET image generated by the first encoder-decoder and its ground truth and between the predicted prior-information image generated by the second encoder-decoder and its ground truth, while maximizing the structural similarity between the two predictions.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the PET image reconstruction method according to any one of claims 1 to 6 when executing the computer program.
10. A storage medium containing computer executable instructions which, when executed by a computer processor, implement the PET image reconstruction method of any one of claims 1-6.
CN202210235862.7A 2022-03-11 2022-03-11 Method, device, equipment and medium for reconstructing PET (positron emission tomography) image based on transformer feature sharing Active CN114332287B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210235862.7A CN114332287B (en) 2022-03-11 2022-03-11 Method, device, equipment and medium for reconstructing PET (positron emission tomography) image based on transformer feature sharing
JP2023007950A JP7246116B1 (en) 2022-03-11 2023-01-23 PET image reconstruction method, apparatus, device and medium based on transformer feature sharing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210235862.7A CN114332287B (en) 2022-03-11 2022-03-11 Method, device, equipment and medium for reconstructing PET (positron emission tomography) image based on transformer feature sharing

Publications (2)

Publication Number Publication Date
CN114332287A true CN114332287A (en) 2022-04-12
CN114332287B CN114332287B (en) 2022-07-15

Family

ID=81033499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210235862.7A Active CN114332287B (en) 2022-03-11 2022-03-11 Method, device, equipment and medium for reconstructing PET (positron emission tomography) image based on transformer feature sharing

Country Status (2)

Country Link
JP (1) JP7246116B1 (en)
CN (1) CN114332287B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115423893A (en) * 2022-11-03 2022-12-02 南京应用数学中心 Low-dose PET-CT reconstruction method based on multi-mode structure similarity neural network

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116843679B (en) * 2023-08-28 2023-12-26 南方医科大学 PET image partial volume correction method based on deep image prior framework
CN117011673B (en) * 2023-10-07 2024-03-26 之江实验室 Electrical impedance tomography image reconstruction method and device based on noise diffusion learning

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584324A (en) * 2018-10-24 2019-04-05 南昌大学 Positron emission tomography (PET) reconstruction method based on an autoencoder network
WO2019183584A1 (en) * 2018-03-23 2019-09-26 Memorial Sloan Kettering Cancer Center Deep encoder-decoder models for reconstructing biomedical images
CN111325686A (en) * 2020-02-11 2020-06-23 之江实验室 Low-dose PET three-dimensional reconstruction method based on deep learning
CN111462020A (en) * 2020-04-24 2020-07-28 上海联影医疗科技有限公司 Method, system, storage medium and device for correcting motion artifact of heart image
CN112150568A (en) * 2020-09-16 2020-12-29 浙江大学 Magnetic resonance fingerprint imaging reconstruction method based on Transformer model
WO2021022752A1 (en) * 2019-08-07 2021-02-11 深圳先进技术研究院 Multimodal three-dimensional medical image fusion method and system, and electronic device
CN112651890A (en) * 2020-12-18 2021-04-13 深圳先进技术研究院 PET-MRI image denoising method and device based on dual-coding fusion network model
CN113256753A (en) * 2021-06-30 2021-08-13 之江实验室 PET image region-of-interest enhancement reconstruction method based on multitask learning constraint
WO2021232653A1 (en) * 2020-05-21 2021-11-25 浙江大学 Pet image reconstruction algorithm combining filtered back-projection algorithm and neural network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257134B (en) 2017-12-21 2022-08-23 深圳大学 Nasopharyngeal carcinoma focus automatic segmentation method and system based on deep learning
US10719961B2 (en) 2018-05-04 2020-07-21 General Electric Company Systems and methods for improved PET imaging
US11234666B2 (en) 2018-05-31 2022-02-01 Canon Medical Systems Corporation Apparatus and method for medical image reconstruction using deep learning to improve image quality in position emission tomography (PET)
JP7254656B2 (en) * 2019-07-18 2023-04-10 キヤノンメディカルシステムズ株式会社 Medical image processing device, medical image diagnostic device and nuclear medicine diagnostic device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019183584A1 (en) * 2018-03-23 2019-09-26 Memorial Sloan Kettering Cancer Center Deep encoder-decoder models for reconstructing biomedical images
CN109584324A (en) * 2018-10-24 2019-04-05 南昌大学 Positron emission tomography (PET) reconstruction method based on an autoencoder network
WO2021022752A1 (en) * 2019-08-07 2021-02-11 深圳先进技术研究院 Multimodal three-dimensional medical image fusion method and system, and electronic device
CN111325686A (en) * 2020-02-11 2020-06-23 之江实验室 Low-dose PET three-dimensional reconstruction method based on deep learning
CN111462020A (en) * 2020-04-24 2020-07-28 上海联影医疗科技有限公司 Method, system, storage medium and device for correcting motion artifact of heart image
WO2021232653A1 (en) * 2020-05-21 2021-11-25 浙江大学 Pet image reconstruction algorithm combining filtered back-projection algorithm and neural network
CN112150568A (en) * 2020-09-16 2020-12-29 浙江大学 Magnetic resonance fingerprint imaging reconstruction method based on Transformer model
CN112651890A (en) * 2020-12-18 2021-04-13 深圳先进技术研究院 PET-MRI image denoising method and device based on dual-coding fusion network model
CN113256753A (en) * 2021-06-30 2021-08-13 之江实验室 PET image region-of-interest enhancement reconstruction method based on multitask learning constraint

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FAN RAO et al.: "A novel supervised learning method to generate CT images for attenuation correction in delayed PET scans", Computer Methods and Programs in Biomedicine *
Yu Bo et al.: "Image reconstruction algorithm based on deep convolutional neural networks", Computer Systems & Applications *
Du Qianqian et al.: "PET image reconstruction algorithm based on dilated U-Net neural network", Journal of Taiyuan University of Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115423893A (en) * 2022-11-03 2022-12-02 南京应用数学中心 Low-dose PET-CT reconstruction method based on multi-mode structure similarity neural network
CN115423893B (en) * 2022-11-03 2023-04-28 南京应用数学中心 Low-dose PET-CT reconstruction method based on multi-modal structure similarity neural network

Also Published As

Publication number Publication date
JP7246116B1 (en) 2023-03-27
JP2023133132A (en) 2023-09-22
CN114332287B (en) 2022-07-15

Similar Documents

Publication Publication Date Title
CN114332287B (en) Method, device, equipment and medium for reconstructing PET (positron emission tomography) image based on transformer feature sharing
US9965873B2 (en) Systems and methods for data and model-driven image reconstruction and enhancement
CN113256753B (en) PET image region-of-interest enhancement reconstruction method based on multitask learning constraint
CN110363797B (en) PET and CT image registration method based on excessive deformation inhibition
CN110400298B (en) Method, device, equipment and medium for detecting heart clinical index
WO2018112137A1 (en) System and method for image segmentation using a joint deep learning model
CN111340903B (en) Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image
CN116402865B (en) Multi-mode image registration method, device and medium using diffusion model
CN112258456A (en) Three-dimensional image segmentation method based on convolutional neural network supervision
CN115512110A (en) Medical image tumor segmentation method related to cross-modal attention mechanism
Wang et al. IGNFusion: an unsupervised information gate network for multimodal medical image fusion
Poonkodi et al. 3d-medtrancsgan: 3d medical image transformation using csgan
CN116503506B (en) Image reconstruction method, system, device and storage medium
Li et al. wUnet: A new network used for ultrasonic tongue contour extraction
CN115439478B (en) Pulmonary lobe perfusion intensity assessment method, system, equipment and medium based on pulmonary perfusion
CN116091412A (en) Method for segmenting tumor from PET/CT image
Ta et al. Simultaneous segmentation and motion estimation of left ventricular myocardium in 3d echocardiography using multi-task learning
CN113052840A (en) Processing method based on low signal-to-noise ratio PET image
Fang et al. Combining multiple style transfer networks and transfer learning for lge-cmr segmentation
CN113450427B (en) PET image reconstruction method based on joint dictionary learning and depth network
CN113379863B (en) Dynamic double-tracing PET image joint reconstruction and segmentation method based on deep learning
Pan et al. Sparse domain approaches in dynamic SPECT imaging with high-performance computing
WO2023131061A1 (en) Systems and methods for positron emission computed tomography image reconstruction
Wang et al. Unsupervised cross-modality cardiac image segmentation via disentangled representation learning and consistency regularization
US20240029324A1 (en) Method for image reconstruction, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant