CN114708345A - CT image reconstruction method, device, equipment and storage medium - Google Patents

CT image reconstruction method, device, equipment and storage medium Download PDF

Info

Publication number
CN114708345A
CN114708345A CN202210265584.XA
Authority
CN
China
Prior art keywords
image
reconstructed
prediction model
neural network
deep neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210265584.XA
Other languages
Chinese (zh)
Inventor
刘士远
范丽
萧毅
谢小峰
王平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yizhiyuan Health Technology Hainan Co ltd
Shanghai Changzheng Hospital
Sanya Research Institute of Hainan University
Original Assignee
Yizhiyuan Health Technology Hainan Co ltd
Shanghai Changzheng Hospital
Sanya Research Institute of Hainan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yizhiyuan Health Technology Hainan Co ltd, Shanghai Changzheng Hospital, Sanya Research Institute of Hainan University filed Critical Yizhiyuan Health Technology Hainan Co ltd
Priority to CN202210265584.XA priority Critical patent/CN114708345A/en
Publication of CN114708345A publication Critical patent/CN114708345A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • G06T11/008Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present disclosure provides a CT image reconstruction method, apparatus, device, and storage medium. The method mainly includes: acquiring a training sample set, wherein the training sample set comprises first-type CT sample images; training a deep neural network according to the training sample set to obtain a prediction model, wherein the deep neural network is based on the alternating direction method of multipliers (ADMM); acquiring an image to be reconstructed, wherein the image to be reconstructed is a second-type CT image; and reconstructing the image to be reconstructed according to the prediction model to obtain a reconstructed CT image. The method, apparatus, device, and storage medium shorten the reconstruction time of the prediction model and effectively remove artifacts and noise from the reconstructed CT image.

Description

CT image reconstruction method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of medical image processing, and in particular, to a method, an apparatus, a device, and a storage medium for reconstructing a CT image.
Background
In the field of medical diagnosis, computed tomography (CT) can quickly produce high-resolution images of a patient, allowing doctors to diagnose the patient's condition rapidly and greatly improving diagnostic efficiency. In the related art, for the same CT image reconstruction algorithm, the radiation dose received by the patient is strongly positively correlated with image quality, so the quality of low-dose CT images degrades significantly. Specifically, if the radiation dose received by the patient during CT scanning is low, the reconstructed CT image may contain many artifacts and much noise.
Conventional iterative algorithms can alleviate artifacts and noise in reconstructed CT images to some extent, but they usually require a large amount of computation and long reconstruction times, and when the projection data are extremely sparse and additional prior knowledge is lacking, they cannot remove artifacts and noise effectively.
Disclosure of Invention
The present disclosure provides a CT image reconstruction method, apparatus, device and storage medium to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided a CT image reconstruction method, including: acquiring a training sample set, wherein the training sample set comprises a first type CT sample image; training a deep neural network according to the training sample set to obtain a prediction model, wherein the deep neural network is based on an alternating direction multiplier method; acquiring an image to be reconstructed, wherein the image to be reconstructed is a second type CT image; and reconstructing the image to be reconstructed according to the prediction model to obtain a reconstructed CT image.
In an implementation manner, the training the deep neural network according to the training sample set to obtain the prediction model includes: acquiring first projection data of a first type of CT sample image in the training sample set and a system matrix of CT equipment; iteratively solving model parameters of the deep neural network according to the first projection data and the system matrix to obtain the prediction model, wherein the model parameters comprise discrete attenuation coefficients of the first type of CT sample images, a first auxiliary constraint variable and a first scaling Lagrange multiplier; and iteratively solving the model parameters of the deep neural network by using the following iterative objective functions:
    min_x (1/2)·‖Ax − y‖₂² + β·R(x)

wherein x is the discrete attenuation coefficient of the first type CT sample image, y is the first projection data, A is the system matrix, ‖·‖₂ denotes the 2-norm, R(x) is the prior term, (1/2)·‖Ax − y‖₂² is the data fidelity term, and β is the weight balancing the data fidelity term and the prior term.
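As a rough illustration of this objective, the following NumPy sketch evaluates the data fidelity term plus a weighted prior. The total-variation prior used here is only a stand-in, since in the present disclosure the prior is learned by the deep neural network rather than fixed in closed form.

```python
import numpy as np

def tv_prior(x_img):
    # Illustrative stand-in for the prior term R(x): anisotropic total variation.
    return np.abs(np.diff(x_img, axis=0)).sum() + np.abs(np.diff(x_img, axis=1)).sum()

def objective(x_img, A, y, beta):
    x = x_img.ravel()                                 # vectorize the image
    fidelity = 0.5 * np.linalg.norm(A @ x - y) ** 2   # data fidelity term
    return fidelity + beta * tv_prior(x_img)          # prior term weighted by beta
```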
In an embodiment, the iteratively solving the model parameters of the deep neural network according to the first projection data and the system matrix to obtain the prediction model includes: acquiring an auxiliary constraint variable and a scaling Lagrange multiplier; rewriting the iterative objective function into an augmented Lagrangian function according to the auxiliary constraint variable and the scaling Lagrange multiplier; the augmented Lagrangian function is:
    L_ρ(x, z, λ) = (1/2)·‖Ax − y‖₂² + β·R(z) + (ρ/2)·‖x − z + λ‖₂² − (ρ/2)·‖λ‖₂²

wherein α is the dual variable, ρ is the penalty parameter, the scaling Lagrange multiplier is λ = α/ρ, and z is the auxiliary constraint variable; and iteratively solving the model parameters of the deep neural network according to the augmented Lagrangian function to obtain the prediction model.
In an implementation manner, the deep neural network includes an image reconstruction layer, a convolution layer, an overlay layer, and a multiplier update layer, and the iteratively solving the model parameters of the deep neural network according to the augmented lagrange function to obtain the prediction model includes: according to the image reconstruction layer, iteratively solving the discrete attenuation coefficient of the first type of CT sample image; generating a first prior term by convolution according to the convolutional layer; iteratively solving a first auxiliary constraint variable according to the first prior item and the superposition layer; and iteratively updating the first scaling Lagrangian multiplier according to the multiplier updating layer.
In an embodiment, the iteratively solving the model parameters of the deep neural network according to the first projection data and the system matrix to obtain the prediction model further includes: acquiring a test sample set, wherein the test sample set comprises a second type CT sample image; iteratively solving model parameters of the deep neural network according to the first projection data and the system matrix to obtain an initial prediction model; testing the initial prediction model according to the test sample set to obtain a test result; if the test result meets a preset index, determining the initial prediction model as the prediction model; and if the test result does not meet the preset index, continuously iterating and solving the model parameter of the deep neural network on the basis of the initial prediction model until the test result meets the preset index.
In an implementation manner, the reconstructing the image to be reconstructed according to the prediction model to obtain a reconstructed CT image includes: acquiring second projection data of the image to be reconstructed; and reconstructing the image to be reconstructed according to the second projection data, the system matrix and the prediction model to obtain a reconstructed CT image.
In an implementation manner, the reconstructing the image to be reconstructed according to the second projection data, the system matrix, and the prediction model to obtain a reconstructed CT image includes: taking the model parameter as an iteration initial value of the augmented Lagrangian function; according to the image reconstruction layer, iteratively solving the discrete attenuation coefficient of the image to be reconstructed, wherein the discrete attenuation coefficient of the image to be reconstructed is the image vector of the reconstructed CT image; generating a second prior term by convolution according to the convolutional layer; according to the second prior item and the superposition layer, iteratively solving a second auxiliary constraint variable; iteratively updating the second scaled lagrangian multiplier according to the multiplier update layer.
According to a second aspect of the present disclosure, there is provided a CT image reconstruction apparatus, the apparatus comprising: the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a training sample set, and the training sample set comprises a first type of Computed Tomography (CT) sample image; the training module is used for training a deep neural network according to the training sample set to obtain a prediction model, wherein the deep neural network is based on an alternating direction multiplier method; the second acquisition module is used for acquiring an image to be reconstructed, wherein the image to be reconstructed is a second type CT image; and the reconstruction module is used for reconstructing the image to be reconstructed according to the prediction model to obtain a reconstructed CT image.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the present disclosure.
With the CT image reconstruction method, apparatus, device, and storage medium of the present disclosure, a traditional iterative algorithm is unfolded into the form of a deep neural network, the model parameters of the iterative algorithm are set as learnable variables of the deep neural network, and during training the model parameters and the prior knowledge are iteratively optimized. This not only shortens the reconstruction time of the prediction model but also effectively removes artifacts and noise from the reconstructed CT image.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1 shows a schematic flow chart of a CT image reconstruction method according to a first embodiment of the present disclosure;
fig. 2 shows a flow chart of a CT image reconstruction method according to a second embodiment of the present disclosure;
fig. 3 shows a flowchart of a CT image reconstruction method according to a third embodiment of the disclosure;
fig. 4 shows a schematic flow chart of a CT image reconstruction method according to a fourth embodiment of the present disclosure;
fig. 5 shows a schematic flow chart of a CT image reconstruction method according to a fifth embodiment of the present disclosure;
fig. 6 shows a flowchart of a CT image reconstruction method according to a sixth embodiment of the present disclosure;
fig. 7 shows a flowchart of a CT image reconstruction method according to a seventh embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a CT image reconstruction apparatus according to an eighth embodiment of the present disclosure;
fig. 9 is a schematic diagram illustrating a composition structure of an electronic device according to an embodiment of the disclosure.
Detailed Description
In order to make the objects, features and advantages of the present disclosure more apparent and understandable, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
Fig. 1 is a schematic flowchart of a CT image reconstruction method according to a first embodiment of the disclosure, as shown in fig. 1, the method specifically includes:
step S101, a training sample set is obtained, wherein the training sample set comprises a first type CT sample image.
In this embodiment, a training sample set is first obtained, where the training sample set is used for training a prediction model, and the training sample set includes high-dose CT sample images, i.e., first-class CT sample images.
In the process of CT image reconstruction, for the same CT image reconstruction algorithm, the radiation dose received by the patient is strongly positively correlated with image quality: the higher the dose, the better the reconstructed image quality, and the lower the dose, the worse the reconstructed image quality. Therefore, the high-dose CT sample images in the training sample set are used as prior knowledge to train the prediction model, so that better model parameters can be obtained.
In an implementation manner, a large number of clinical CT sample images may be acquired first and then divided into high-dose and low-dose CT sample images at a ratio of 4:1 according to a concentration gradient method, with the high-dose CT sample images selected as the training sample set, as sketched below.
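A minimal sketch of such a split is given below, assuming each clinical sample carries a known dose value; the concentration gradient method itself is not reproduced here, and the 20% quantile threshold is only an illustrative assumption.

```python
import numpy as np

def split_by_dose(images, doses):
    """Partition samples so that roughly 4/5 (higher dose) form the training set."""
    doses = np.asarray(doses)
    threshold = np.quantile(doses, 0.2)   # bottom ~1/5 treated as low dose
    high = [img for img, d in zip(images, doses) if d > threshold]
    low = [img for img, d in zip(images, doses) if d <= threshold]
    return high, low                      # (training samples, test samples)
```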
Step S102, training a deep neural network according to the training sample set to obtain a prediction model, wherein the deep neural network is based on an alternating direction multiplier method.
In this embodiment, the obtained training sample set needs to be used to train the deep neural network to obtain a prediction model, and the prediction model is used to reconstruct a CT image. Specifically, the deep neural network may be a deep neural network based on the alternating direction multiplier method because the alternating direction multiplier method has the advantages of fast convergence speed and good convergence performance.
In one implementation, in the process of training the deep neural network based on the alternating direction multiplier method, the reconstruction task for a high-dose CT sample image can be represented by the linear formula y = Ax, which is solved iteratively to obtain the discrete attenuation coefficient of the high-dose CT sample image, where x is the discrete attenuation coefficient of the high-dose CT sample image, A is the system matrix of the CT device, and y is the projection data of the high-dose CT sample image.
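A toy NumPy illustration of this linear model is given below; the random matrix A is only a stand-in for the real system matrix, whose entries depend on the scanner geometry, and the dimensions are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 32 * 32                     # length of the vectorized image (assumption)
n_meas = 1800                          # number of projection measurements (assumption)
A = rng.random((n_meas, n_pixels))     # stand-in for the CT system matrix
x = rng.random(n_pixels)               # discrete attenuation coefficients of the image
y = A @ x                              # projection data given by the forward model y = Ax
```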
Step S103, acquiring an image to be reconstructed, wherein the image to be reconstructed is a second type CT image.
In this embodiment, after the prediction model is trained, an image to be reconstructed needs to be acquired, where the image to be reconstructed is a low-dose CT image, that is, a second-class CT image.
Step S104, reconstructing the image to be reconstructed according to the prediction model to obtain a reconstructed CT image.
In this embodiment, after the prediction model and the image to be reconstructed are obtained, the image to be reconstructed may be reconstructed according to the trained prediction model, so as to obtain a reconstructed CT image.
In an implementation manner, since the discrete attenuation coefficient of the high-dose CT sample image has already been obtained while training the deep neural network based on the alternating direction multiplier method, it can be used as the iteration initial value when reconstructing the image to be reconstructed; combined with the projection data of the image to be reconstructed, the discrete attenuation coefficient of the image to be reconstructed is then solved iteratively to obtain the reconstructed CT image.
In the first embodiment of the disclosure, the deep neural network is trained according to the first-type CT sample images to obtain a prediction model, and the prediction model is then used to reconstruct the image to be reconstructed, yielding a high-quality reconstructed CT image. In this embodiment, the high-dose CT sample images are used as prior knowledge to train the deep neural network and obtain the prediction model, which shortens the reconstruction time of the prediction model and allows artifacts and noise in the reconstructed CT image to be removed well.
Fig. 2 is a schematic flowchart of a CT image reconstruction method according to a second embodiment of the disclosure, and as shown in fig. 2, step S102 specifically includes:
step S201, acquiring first projection data of a first type of CT sample image in a training sample set and a system matrix of a CT device.
In the present embodiment, in the process of training the prediction model, the reconstruction task for the high-dose CT sample image is expressed by the formula y = Ax, so the first projection data of the first-type CT sample image in the training sample set and the system matrix of the CT device need to be acquired first.
In an implementation manner, the system matrix A of the CT device is composed of M × N elements, where M is the number of scanning angles of the CT device and N is the number of elements at each angle; these are preset system parameters. The first projection data y are measured projection data, which can be obtained from the first-type CT sample image; typically, projection data at M different angles are acquired in one scan.
Step S202, according to the first projection data and the system matrix, model parameters of the deep neural network are solved in an iterative mode to obtain a prediction model, and the model parameters comprise discrete attenuation coefficients of the first type of CT sample images, a first auxiliary constraint variable and a first scaling Lagrange multiplier.
In this embodiment, according to the first projection data and the system matrix, and combined with the formula y = Ax, the model parameters of the deep neural network can be solved iteratively to obtain the prediction model, where the model parameters include the discrete attenuation coefficient of the first-type CT sample image, the first auxiliary constraint variable, and the first scaling Lagrange multiplier.
In one possible embodiment, the model parameters of the deep neural network can be solved iteratively using the following iterative objective function:
    min_x (1/2)·‖Ax − y‖₂² + β·R(x)

wherein x is the discrete attenuation coefficient of the first-type CT sample image, y is the first projection data, A is the system matrix, ‖·‖₂ denotes the 2-norm, R(x) is the prior term, (1/2)·‖Ax − y‖₂² is the data fidelity term, and β is the weight between the data fidelity term and the prior term; the data fidelity term ensures that the result conforms to the degradation process, and the prior term enhances the output.
In the second embodiment of the disclosure, the first projection data and the system matrix are obtained first and then substituted into the iterative objective function, and the model parameters of the deep neural network are solved iteratively; optimized model parameters are thus obtained, improving the quality of the CT images reconstructed by the prediction model.
Fig. 3 is a schematic flowchart of a CT image reconstruction method according to a third embodiment of the disclosure, and as shown in fig. 3, step S202 specifically includes:
step S301, acquiring an auxiliary constraint variable and a scaling Lagrange multiplier.
In this embodiment, the iterative objective function is solved by the alternating direction multiplier method. In this process, an auxiliary constraint variable and a scaling Lagrange multiplier need to be introduced: the auxiliary constraint variable recasts the problem as an equivalent equality-constrained problem, and the scaling Lagrange multiplier converts the constrained problem into an unconstrained one.
Step S302, rewriting the iterative objective function into an augmented Lagrangian function according to the auxiliary constraint variable and the scaling Lagrange multiplier.
In this embodiment, after the auxiliary constraint variable and the scaling Lagrange multiplier are introduced, the iterative objective function can be rewritten as an augmented Lagrangian function, which allows the constrained optimization problem to be solved.
In one possible embodiment, the augmented lagrange function obtained by rewriting the iterative objective function is:
    L_ρ(x, z, λ) = (1/2)·‖Ax − y‖₂² + β·R(z) + (ρ/2)·‖x − z + λ‖₂² − (ρ/2)·‖λ‖₂²

wherein α is the dual variable, ρ is the penalty parameter, the scaling Lagrange multiplier is λ = α/ρ, and z is the auxiliary constraint variable; the penalty parameter ρ helps prevent the prediction model from overfitting, thereby enhancing its generalization.
Step S303, iteratively solving the model parameters of the deep neural network according to the augmented Lagrangian function to obtain the prediction model.
In this embodiment, the augmented lagrange function can be split into the following three subproblems:
    x^(n) = argmin_x (1/2)·‖Ax − y‖₂² + (ρ/2)·‖x − z^(n−1) + λ^(n−1)‖₂²
    z^(n) = argmin_z β·R(z) + (ρ/2)·‖x^(n) − z + λ^(n−1)‖₂²
    λ^(n) = λ^(n−1) + η_n·(x^(n) − z^(n))

where n denotes the iteration index of the alternating direction multiplier method and η_n denotes the multiplier update rate. Iteratively solving these three subproblems yields the model parameters of the deep neural network, namely the discrete attenuation coefficient of the first-type CT sample image, the first auxiliary constraint variable, and the first scaling Lagrange multiplier, and thus the prediction model.
In an implementation, the sub-problem related to x and z in the formula can be solved by using a gradient descent method, and specifically, a secondary superscript k can be introduced to represent the iterative update of the sub-problem by using the gradient descent method, and there are:
    x^(n,k) = x^(n,k−1) − τ_{n,k}·[Aᵀ(Ax^(n,k−1) − y) + ρ·(x^(n,k−1) − z^(n−1) + λ^(n−1))]
    z^(n,k) = z^(n,k−1) − γ_{n,k}·[β·S(z^(n,k−1)) − ρ·(x^(n) − z^(n,k−1) + λ^(n−1))]

wherein X^n, Z^n, and Λ^n denote the updates of the variables x, z, and λ at the n-th iteration, corresponding to the reconstruction module, the auxiliary-variable update module, and the multiplier update module, respectively; S(z) denotes the gradient of the prior term R(z); and τ_{n,k} and γ_{n,k} are step-size parameters in the iterative process.
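To make the unrolled iteration concrete, here is a minimal NumPy-style sketch of one outer iteration with gradient-descent inner updates. The step sizes, penalty parameter, multiplier update rate, and the simple quadratic prior gradient are all illustrative assumptions; in the disclosure these quantities are learnable and the prior gradient S(z) is produced by a convolutional network.

```python
def prior_gradient(z):
    # Placeholder for S(z), the gradient of the prior term R(z).  In the
    # disclosure this is produced by a residual CNN; here the gradient of the
    # simple quadratic prior 0.5*||z||^2 is used purely for illustration.
    return z

def admm_iteration(x, z, lam, A, y, rho=1.0, beta=0.1, eta=1.0,
                   tau=1e-3, gamma=1e-3, inner_steps=3):
    """One outer iteration on NumPy arrays: gradient-descent x- and z-updates,
    then the scaled-multiplier update."""
    for _ in range(inner_steps):
        grad_x = A.T @ (A @ x - y) + rho * (x - z + lam)        # data fidelity + coupling
        x = x - tau * grad_x
    for _ in range(inner_steps):
        grad_z = beta * prior_gradient(z) - rho * (x - z + lam)  # prior + coupling
        z = z - gamma * grad_z
    lam = lam + eta * (x - z)                                    # multiplier update
    return x, z, lam
```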
In the third embodiment of the present disclosure, the iterative objective function is rewritten into an augmented Lagrangian function, and the augmented Lagrangian function is used to iteratively solve the model parameters of the deep neural network to obtain the prediction model. Because the augmented Lagrangian formulation is robust and converges quickly, optimized model parameters can be obtained rapidly, which improves the quality of the CT images reconstructed by the prediction model.
Fig. 4 is a schematic flowchart of a CT image reconstruction method according to a fourth embodiment of the disclosure, and as shown in fig. 4, step S303 specifically includes:
step S401, according to the image reconstruction layer, the discrete attenuation coefficient of the first type CT sample image is solved in an iterative mode.
In this embodiment, the deep neural network based on the alternating direction multiplier method includes an image reconstruction layer, a convolution layer, an overlay layer, and a multiplier update layer, each with a different role; the image reconstruction layer is used to iteratively solve the discrete attenuation coefficient of the first-type CT sample image.
In one embodiment, the output of the image reconstruction layer can be expressed by the following formula:
    x^(n,k) = x^(n,k−1) − τ_{n,k}·[Aᵀ(Ax^(n,k−1) − y) + ρ·(x^(n,k−1) − z^(n−1) + λ^(n−1))]
step S402, convolution generates a first prior term according to the convolution layer.
In this embodiment, the convolutional layer generates, by convolution, a first prior term that is more expressive and better suited to the requirements of CT image reconstruction. The convolutional layer uses a residual convolutional neural network to represent the gradient S(z) of the prior term, and a residual-compensation mechanism converts the direct-mapping problem into a residual-mapping problem; the residual mapping is easier to optimize than the direct mapping, so convergence is faster and image noise is suppressed.
In one embodiment, the convolutional layer has two hidden layers and one output layer. Each hidden layer has N_f convolution kernels, each of size W_f × W_f, where N_f is the number of convolution kernels and W_f is the side length of each kernel; the output layer has a single convolution kernel.
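A minimal PyTorch sketch of such a residual convolutional block is given below; N_f = 32, W_f = 3, the ReLU activations, and the exact placement of the skip connection are assumptions for illustration rather than values stated in the disclosure.

```python
import torch
import torch.nn as nn

class ResidualPrior(nn.Module):
    """Two hidden conv layers with N_f kernels of size W_f x W_f, a single-kernel
    output layer, and a residual (skip) connection approximating S(z)."""
    def __init__(self, n_f: int = 32, w_f: int = 3):
        super().__init__()
        pad = w_f // 2
        self.hidden1 = nn.Conv2d(1, n_f, kernel_size=w_f, padding=pad)
        self.hidden2 = nn.Conv2d(n_f, n_f, kernel_size=w_f, padding=pad)
        self.output = nn.Conv2d(n_f, 1, kernel_size=w_f, padding=pad)
        self.act = nn.ReLU(inplace=True)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        r = self.act(self.hidden1(z))
        r = self.act(self.hidden2(r))
        return z + self.output(r)   # residual mapping: input plus learned correction

# Usage: s_z = ResidualPrior()(torch.randn(1, 1, 128, 128))
```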
Step S403, iteratively solving a first auxiliary constraint variable according to the first prior term and the superposition layer.
In this embodiment, after the convolution generates the first prior term, the first auxiliary constraint variable needs to be iteratively solved according to the first prior term and the superposition layer, specifically, the superposition layer only needs to perform a simple weighted summation operation to obtain an output result, where the output of the layer is defined as:
    z^(n,k) = z^(n,k−1) − γ_{n,k}·[β·CNN_s(z^(n,k−1)) − ρ·(x^(n) − z^(n,k−1) + λ^(n−1))]

wherein CNN_s(z) is the first prior term generated by the convolutional layer.
Step S404, iteratively updating the first scaling Lagrange multiplier according to the multiplier update layer.
In this embodiment, the multiplier update layer is configured to iteratively update the first scaling Lagrange multiplier, and the output of this layer is defined as: λ^(n) = λ^(n−1) + η_n·(x^(n) − z^(n)).
In the fourth embodiment of the present disclosure, the image reconstruction layer, the convolutional layer, the superposition layer, and the multiplier update layer of the deep neural network are used to iteratively solve the discrete attenuation coefficient of the first-type CT sample image, the first prior term, the first auxiliary constraint variable, and the first scaling Lagrange multiplier, respectively, yielding the model parameters of the prediction model. This shortens the training time of the prediction model, and because the first prior term, which better matches the requirements of CT image reconstruction, is continuously updated during the iterations, the trained prediction model removes artifacts and noise well.
Fig. 5 is a schematic flowchart of a CT image reconstruction method according to a fifth embodiment of the disclosure, and as shown in fig. 5, the step S202 further includes:
step S501, a test sample set is obtained, and the test sample set comprises second type CT sample images.
In this embodiment, a test sample set needs to be obtained first; the test sample set includes low-dose CT sample images, i.e., second-type CT sample images, and is used to test the model trained on the training sample set. Specifically, after the large number of clinical CT sample images is divided into high-dose and low-dose CT sample images at a ratio of 4:1 by the concentration gradient method, the low-dose CT sample images can be selected as the test sample set.
Step S502, iteratively solving model parameters of the deep neural network according to the first projection data and the system matrix to obtain an initial prediction model.
Step S503, testing the initial prediction model according to the test sample set to obtain a test result.
In this embodiment, after model parameters of the deep neural network are iteratively solved according to the first projection data and the system matrix to obtain an initial prediction model, the initial prediction model may be tested according to the obtained test sample set, so as to obtain a test result.
In an implementation manner, the initial prediction model may be tested by using a second type of CT sample image in the test sample set, that is, a low-dose CT sample image, and in the test process, the second type of CT sample image is reconstructed by using a model parameter obtained by iterative solution as an initial value of iteration, so as to obtain a test result.
Step S504, if the test result meets the preset index, the initial prediction model is determined as the prediction model.
Step S505, if the test result does not meet the preset index, continuing to iteratively solve the model parameters of the deep neural network on the basis of the initial prediction model until the test result meets the preset index.
In this embodiment, after the test result is obtained, the test result is compared with a preset index, and if the test result meets the preset index, the initial prediction model is determined as the prediction model; and if the test result does not meet the preset index, continuously iterating and solving the model parameter of the deep neural network on the basis of the initial prediction model until the test result meets the preset index.
In one embodiment, the preset indexes may be peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and normalized root mean square error (NRMSE), where the PSNR may be in a preset range of 30-50 dB, the SSIM in a preset range of 0.7-1.0, and the NRMSE in a range of 0-0.01.
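A rough sketch of how such indexes might be computed is shown below, assuming the reference and reconstructed images are NumPy arrays sharing the same intensity range; note that NRMSE has several common normalizations, and normalization by the reference intensity range is only one assumption. The numeric thresholds simply mirror the ranges stated above.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(reference, reconstructed, data_range):
    mse = np.mean((reference - reconstructed) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def nrmse(reference, reconstructed):
    rmse = np.sqrt(np.mean((reference - reconstructed) ** 2))
    return rmse / (reference.max() - reference.min())   # range normalization (assumption)

def passes_preset_index(reference, reconstructed, data_range=1.0):
    ok_psnr = 30.0 <= psnr(reference, reconstructed, data_range) <= 50.0
    ok_ssim = structural_similarity(reference, reconstructed, data_range=data_range) >= 0.7
    ok_nrmse = nrmse(reference, reconstructed) <= 0.01
    return ok_psnr and ok_ssim and ok_nrmse
```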
In the fifth embodiment of the present disclosure, by testing the initial prediction model, it can be ensured that the finally obtained prediction model meets the preset requirements, and artifacts and noise can be well removed in the process of reconstructing the low-dose CT image.
Fig. 6 is a schematic flowchart of a CT image reconstruction method according to a sixth embodiment of the disclosure, and as shown in fig. 6, step S104 specifically includes:
step S601, obtaining second projection data of the image to be reconstructed.
In this embodiment, after the prediction model is established, the image to be reconstructed can be reconstructed according to the prediction model. First, the second projection data of the image to be reconstructed need to be acquired; the second projection data are measured projection data obtained from the image to be reconstructed.
Step S602, reconstructing the image to be reconstructed according to the second projection data, the system matrix and the prediction model to obtain a reconstructed CT image.
In this embodiment, after the second projection data are obtained, the image to be reconstructed can be reconstructed according to the second projection data, the acquired system matrix, and the prediction model to obtain the reconstructed CT image. Specifically, the discrete attenuation coefficient of the image to be reconstructed is solved iteratively with the model parameters of the prediction model as the initial values of the iterative objective function. Because the image to be reconstructed is a low-dose CT image, reconstructing it from the second projection data alone would yield a poor-quality CT image; instead, the model parameters trained on the high-dose CT sample images are used as the initial values of the iterative objective function, and a high-quality reconstructed CT image is solved iteratively.
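The warm start described here can be sketched as follows, reusing the admm_iteration() function from the earlier sketch; the number of outer iterations is an arbitrary assumption.

```python
def reconstruct(x_trained, z_trained, lam_trained, A, y1, n_outer=10):
    """Warm-started reconstruction: trained model parameters serve as initial values,
    while y1 holds the second projection data of the image to be reconstructed."""
    x, z, lam = x_trained.copy(), z_trained.copy(), lam_trained.copy()
    for _ in range(n_outer):
        x, z, lam = admm_iteration(x, z, lam, A, y1)
    return x   # discrete attenuation coefficients, i.e. the reconstructed image vector
```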
In the sixth embodiment of the present disclosure, the image to be reconstructed is reconstructed according to the second projection data of the image to be reconstructed, the system matrix and the prediction model to obtain the reconstructed CT image, and because the model parameters trained from the high-dose CT sample image are combined, artifacts and noise can be better removed, so that the reconstructed CT image with better quality is obtained.
Fig. 7 is a schematic flowchart of a CT image reconstruction method according to a seventh embodiment of the disclosure, and as shown in fig. 7, step S602 specifically includes:
and step S701, taking the model parameter as an iteration initial value of the augmented Lagrange function.
In this embodiment, the prediction model is used to reconstruct the image to be reconstructed, and the obtained model parameters may be used as the iteration initial values of the augmented Lagrangian function in the prediction model to iteratively solve the reconstructed CT image.
In an implementation manner, in the process of reconstructing the image to be reconstructed, the augmented Lagrangian function may be expressed as:
    L_ρ(x1, z1, λ1) = (1/2)·‖Ax1 − y1‖₂² + β·R(z1) + (ρ/2)·‖x1 − z1 + λ1‖₂² − (ρ/2)·‖λ1‖₂²

wherein y1 is the second projection data; after splitting this Lagrangian function and solving it by gradient descent, the discrete attenuation coefficient of the image to be reconstructed is obtained, which represents the reconstructed CT image.
Step S702, according to the image reconstruction layer, the discrete attenuation coefficient of the image to be reconstructed is solved in an iterative mode, and the discrete attenuation coefficient of the image to be reconstructed is the image vector of the reconstructed CT image.
Step S703, generating a second prior term by convolution according to the convolutional layer.
Step S704, iteratively solving a second auxiliary constraint variable according to the second prior term and the superposition layer.
Step S705, according to the multiplier updating layer, iteratively updating the second scaling Lagrange multiplier.
In this embodiment, the implementation process of step S702 to step S705 is similar to the implementation process of step S401 to step S404, wherein the output of the image reconstruction layer can be expressed as:
    x1^(n,k) = x1^(n,k−1) − τ_{n,k}·[Aᵀ(Ax1^(n,k−1) − y1) + ρ·(x1^(n,k−1) − z1^(n−1) + λ1^(n−1))]

wherein x1 is the discrete attenuation coefficient of the image to be reconstructed, i.e., the image vector of the reconstructed CT image, from which the reconstructed CT image can be obtained; z1 is the second auxiliary constraint variable; and λ1 is the second scaling Lagrange multiplier. The output of the superposition layer can be expressed as:

    z1^(n,k) = z1^(n,k−1) − γ_{n,k}·[β·CNN_s(z1^(n,k−1)) − ρ·(x1^(n) − z1^(n,k−1) + λ1^(n−1))]

wherein CNN_s(z1) is the second prior term generated by the convolutional layer. The output of the multiplier update layer can be expressed as:

    λ1^(n) = λ1^(n−1) + η_n·(x1^(n) − z1^(n))
In the seventh embodiment of the present disclosure, the image reconstruction layer, the convolutional layer, the superposition layer, and the multiplier update layer of the deep neural network are used to iteratively solve the discrete attenuation coefficient of the image to be reconstructed, the second prior term, the second auxiliary constraint variable, and the second scaling Lagrange multiplier, respectively, so as to obtain a high-quality reconstructed CT image.
Fig. 8 is a schematic structural diagram of a CT image reconstruction apparatus according to an eighth embodiment of the disclosure, as shown in fig. 8, the apparatus mainly includes:
a first obtaining module 80, configured to obtain a training sample set, where the training sample set includes a first type CT sample image; the training module 81 is used for training a deep neural network according to the training sample set to obtain a prediction model, wherein the deep neural network is based on an alternating direction multiplier method; a second obtaining module 82, configured to obtain an image to be reconstructed, where the image to be reconstructed is a second type CT image; and the reconstruction module 83 is configured to reconstruct the image to be reconstructed according to the prediction model to obtain a reconstructed CT image.
In one embodiment, the training module 81 mainly includes: the first acquisition sub-module, configured to acquire first projection data of a first-type CT sample image in the training sample set and a system matrix of the CT device; and the model parameter solving sub-module, configured to iteratively solve model parameters of the deep neural network according to the first projection data and the system matrix to obtain the prediction model, where the model parameters include the discrete attenuation coefficient of the first-type CT sample image, a first prior term, a first auxiliary constraint variable, and a first scaling Lagrange multiplier.
In one embodiment, the model parameter solving submodule mainly includes: the first acquisition unit is used for acquiring an auxiliary constraint variable and a scaling Lagrange multiplier; the rewriting unit is used for rewriting the iteration target function into an augmented Lagrange function according to the auxiliary constraint variable and the scaling Lagrange multiplier; and the model parameter solving unit is used for iteratively solving the model parameters of the deep neural network according to the augmented Lagrangian function to obtain a prediction model.
In an embodiment, the model parameter solving unit mainly includes: the reconstruction subunit is used for iteratively solving the discrete attenuation coefficient of the first type of CT sample image according to the image reconstruction layer; the convolution subunit is used for generating a first prior term by convolution according to the convolution layer; the superposition subunit is used for iteratively solving a first auxiliary constraint variable according to the first prior term and the superposition layer; and the multiplier updating subunit is used for iteratively updating the first scaling Lagrangian multiplier according to the multiplier updating layer.
In one embodiment, the model parameter solving submodule mainly includes: the second acquisition unit is used for acquiring a test sample set, and the test sample set comprises a second type CT sample image; the iterative solution unit is used for iteratively solving the model parameters of the deep neural network according to the first projection data and the system matrix to obtain an initial prediction model; the test unit is used for testing the initial prediction model according to the test sample set to obtain a test result; the first test result unit is used for determining the initial prediction model as the prediction model if the test result meets the preset index; and the second test result unit is used for continuously iterating and solving the model parameters of the deep neural network on the basis of the initial prediction model until the test result meets the preset index if the test result does not meet the preset index.
In one embodiment, the reconstruction module 83 mainly includes: the second acquisition submodule is used for acquiring second projection data of the image to be reconstructed; and the reconstruction submodule is used for reconstructing the image to be reconstructed according to the second projection data, the system matrix and the prediction model to obtain the reconstructed CT image.
In one embodiment, the reconstruction submodule essentially comprises: the replacing unit is used for taking the model parameter as an iteration initial value of the augmented Lagrange function; the reconstruction unit is used for iteratively solving the discrete attenuation coefficient of the image to be reconstructed according to the image reconstruction layer, wherein the discrete attenuation coefficient of the image to be reconstructed is the image vector of the reconstructed CT image; the convolution unit is used for generating a second prior term through convolution according to the convolution layer; the superposition unit is used for iteratively solving a second auxiliary constraint variable according to the second prior item and the superposition layer; and the multiplier updating unit is used for iteratively updating the second scaling Lagrangian multiplier according to the multiplier updating layer.
The present disclosure also provides an electronic device and a readable storage medium according to an embodiment of the present disclosure.
FIG. 9 illustrates a schematic block diagram of an example electronic device 900 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the apparatus 900 includes a computing unit 901, which can perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM)902 or a computer program loaded from a storage unit 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The calculation unit 901, ROM902, and RAM 903 are connected to each other via a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
A number of components in the device 900 are connected to the I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, and the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, optical disk, or the like; and a communication unit 909 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 901 performs the various methods and processes described above, such as a CT image reconstruction method. For example, in some embodiments, a method of CT image reconstruction may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 900 via ROM902 and/or communications unit 909. When the computer program is loaded into the RAM 903 and executed by the computing unit 901, one or more steps of a CT image reconstruction method described above may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform a CT image reconstruction method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means two or more unless specifically limited otherwise.
The above is only a specific embodiment of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and shall be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A CT image reconstruction method, the method comprising:
acquiring a training sample set, wherein the training sample set comprises a first type CT sample image;
training a deep neural network according to the training sample set to obtain a prediction model, wherein the deep neural network is based on an alternating direction multiplier method;
acquiring an image to be reconstructed, wherein the image to be reconstructed is a second type CT image;
and reconstructing the image to be reconstructed according to the prediction model to obtain a reconstructed CT image.
2. The method of claim 1, wherein training the deep neural network according to the training sample set to obtain the prediction model comprises:
acquiring first projection data of a first type of CT sample image in the training sample set and a system matrix of CT equipment;
iteratively solving model parameters of the deep neural network according to the first projection data and the system matrix to obtain the prediction model, wherein the model parameters comprise discrete attenuation coefficients of the first type of CT sample images, a first auxiliary constraint variable and a first scaling Lagrange multiplier;
and iteratively solving the model parameters of the deep neural network by using the following iterative objective functions:
    min_x (1/2)·‖Ax − y‖₂² + β·R(x)

wherein x is the discrete attenuation coefficient of the first type CT sample image, y is the first projection data, A is the system matrix, ‖·‖₂ denotes the 2-norm, R(x) is the prior term, (1/2)·‖Ax − y‖₂² is the data fidelity term, and β is the weight between the data fidelity term and the prior term.
3. The method of claim 2, wherein iteratively solving model parameters of the deep neural network from the first projection data and the system matrix to obtain the prediction model comprises:
acquiring an auxiliary constraint variable and a scaling Lagrange multiplier;
rewriting the iterative objective function as an augmented Lagrangian function according to the auxiliary constraint variable and the scaling Lagrange multiplier;
the augmented Lagrangian function is:
L_{\rho}(x, z, u) = \frac{1}{2}\lVert Ax - y\rVert_2^2 + \beta\, R(z) + \frac{\rho}{2}\lVert x - z + u\rVert_2^2 - \frac{\rho}{2}\lVert u\rVert_2^2

wherein α is the dual variable, ρ is the penalty parameter, the scaling Lagrange multiplier is u = α/ρ, and z is the auxiliary constraint variable;
and according to the augmented Lagrange function, iteratively solving the model parameters of the deep neural network to obtain the prediction model.
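
For illustration, a numpy sketch of the scaled-ADMM iteration implied by the augmented Lagrangian above. The soft-thresholding z-update assumes R(z) is an L1 prior; in the claim this proximal step is realised by learned network layers rather than a fixed formula.

```python
import numpy as np

def admm(A, y, beta=0.05, rho=1.0, n_iter=50):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)   # u = alpha / rho, the scaling Lagrange multiplier
    AtA, Aty = A.T @ A, A.T @ y
    for _ in range(n_iter):
        # x-update: minimise (1/2)||Ax - y||^2 + (rho/2)||x - z + u||^2
        x = np.linalg.solve(AtA + rho * np.eye(n), Aty + rho * (z - u))
        # z-update: proximal operator of beta*R(.) at x + u (soft threshold, assuming an L1 prior)
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - beta / rho, 0.0)
        # multiplier update: u <- u + x - z
        u = u + x - z
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 30))
x_true = np.zeros(30); x_true[::5] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=60)
print(np.round(admm(A, y)[:10], 2))
```
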
4. The method of claim 3, wherein the deep neural network comprises an image reconstruction layer, a convolution layer, a superposition layer, and a multiplier update layer, and wherein iteratively solving the model parameters of the deep neural network according to the augmented Lagrangian function to obtain the prediction model comprises:
according to the image reconstruction layer, iteratively solving the discrete attenuation coefficient of the first type of CT sample image;
generating a first prior term by convolution according to the convolution layer;
iteratively solving a first auxiliary constraint variable according to the first prior item and the superposition layer;
and iteratively updating the first scaling Lagrange multiplier according to the multiplier update layer.
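
For illustration, a PyTorch sketch of one unrolled stage built from the four layer types named in claim 4. The kernel sizes, channel widths, and the gradient-descent form of the image reconstruction layer are assumptions for the example, not the claimed implementation.

```python
import torch
import torch.nn as nn

class ADMMStage(nn.Module):
    def __init__(self, channels=8):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))   # learnable step size of the image reconstruction layer
        self.rho = nn.Parameter(torch.tensor(1.0))    # learnable penalty parameter
        self.conv = nn.Sequential(                    # convolution layers generating the prior term
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x, z, u, A, At, y):
        # image reconstruction layer: gradient step on the data term plus the penalty term
        x = x - self.step * (At(A(x) - y) + self.rho * (x - z + u))
        # convolution layer produces the prior term; the superposition layer combines it into the z-update
        z = (x + u) - self.conv(x + u)
        # multiplier update layer
        u = u + x - z
        return x, z, u

# toy usage with an identity "projector" standing in for the system matrix
A = At = lambda t: t
x = z = u = torch.zeros(1, 1, 16, 16)
y = torch.randn(1, 1, 16, 16)
x, z, u = ADMMStage()(x, z, u, A, At, y)
print(x.shape)
```
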
5. The method of claim 2, wherein iteratively solving the model parameters of the deep neural network according to the first projection data and the system matrix to obtain the prediction model further comprises:
acquiring a test sample set, wherein the test sample set comprises a second type CT sample image;
iteratively solving model parameters of the deep neural network according to the first projection data and the system matrix to obtain an initial prediction model;
testing the initial prediction model according to the test sample set to obtain a test result;
if the test result meets a preset index, determining the initial prediction model as the prediction model;
and if the test result does not meet the preset index, continuing to iteratively solve the model parameters of the deep neural network on the basis of the initial prediction model until the test result meets the preset index.
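
For illustration, a short Python sketch of the train/continue-training loop of claim 5. PSNR is assumed as the preset index, and the threshold is purely illustrative; the claim does not fix a particular metric.

```python
import numpy as np

def psnr(ref, est):
    mse = np.mean((ref - est) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(ref.max() ** 2 / mse)

def test_model(model, test_pairs):
    # test_pairs: (projection data of a second-type CT sample image, reference image)
    return np.mean([psnr(ref, model(y)) for y, ref in test_pairs])

def train_until_ok(model, train_step, test_pairs, threshold=35.0, max_rounds=20):
    for _ in range(max_rounds):
        train_step(model)                            # one more round of iterative solving
        if test_model(model, test_pairs) >= threshold:
            break                                    # preset index met: keep this prediction model
    return model

# toy usage: each "training round" nudges a scalar weight toward the reference
ref = np.ones(4)
state = {"w": 0.0}
def model(y): return state["w"] * y
def train_step(_): state["w"] += 0.2
train_until_ok(model, train_step, [(ref, ref)], threshold=20.0)
print(round(state["w"], 1))                          # weight after the preset index is met
```
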
6. The method according to claim 5, wherein reconstructing the image to be reconstructed according to the prediction model to obtain a reconstructed CT image comprises:
acquiring second projection data of the image to be reconstructed;
and reconstructing the image to be reconstructed according to the second projection data, the system matrix and the prediction model to obtain a reconstructed CT image.
7. The method according to claim 6, wherein reconstructing the image to be reconstructed according to the second projection data, the system matrix and the prediction model to obtain a reconstructed CT image comprises:
taking the model parameters as iteration initial values of the augmented Lagrangian function;
according to the image reconstruction layer, iteratively solving the discrete attenuation coefficient of the image to be reconstructed, wherein the discrete attenuation coefficient of the image to be reconstructed is the image vector of the reconstructed CT image;
according to the convolution layer, generating a second prior term by convolution;
according to the second prior item and the superposition layer, iteratively solving a second auxiliary constraint variable;
and according to the multiplier update layer, iteratively updating a second scaling Lagrange multiplier.
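
For illustration, a numpy sketch of the reconstruction step of claims 6-7: the same update recursion is run on the second projection data, warm-started from the trained model parameters (the x0, z0, u0, beta, and rho values below are placeholders for those parameters).

```python
import numpy as np

def reconstruct(A, y2, x0, z0, u0, beta=0.05, rho=1.0, n_stage=5):
    n = A.shape[1]
    x, z, u = x0.copy(), z0.copy(), u0.copy()      # iteration initial values taken from the trained model
    AtA, Aty = A.T @ A, A.T @ y2                   # second projection data enters the data term
    for _ in range(n_stage):
        x = np.linalg.solve(AtA + rho * np.eye(n), Aty + rho * (z - u))   # image reconstruction layer
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - beta / rho, 0.0)          # prior term + superposition layer
        u = u + x - z                                                      # multiplier update layer
    return x                                        # image vector of the reconstructed CT image

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 20))                       # system matrix
y2 = A @ rng.normal(size=20)                        # second projection data of the image to be reconstructed
print(reconstruct(A, y2, np.zeros(20), np.zeros(20), np.zeros(20)).shape)
```
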
8. A CT image reconstruction apparatus, characterized in that the apparatus comprises:
a first acquisition module, configured to acquire a training sample set, wherein the training sample set comprises a first type of computed tomography (CT) sample image;
a training module, configured to train a deep neural network according to the training sample set to obtain a prediction model, wherein the deep neural network is based on the alternating direction method of multipliers (ADMM);
a second acquisition module, configured to acquire an image to be reconstructed, wherein the image to be reconstructed is a second type CT image;
and a reconstruction module, configured to reconstruct the image to be reconstructed according to the prediction model to obtain a reconstructed CT image.
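
As an illustrative sketch only, one possible decomposition of the claimed apparatus into Python callables; all class, field, and function names are placeholders, not claim wording.

```python
from dataclasses import dataclass
from typing import Callable, Sequence
import numpy as np

@dataclass
class CTReconstructionDevice:
    load_training_set: Callable[[], Sequence[np.ndarray]]      # first acquisition module
    train: Callable[[Sequence[np.ndarray]], Callable]          # training module -> prediction model
    load_image: Callable[[], np.ndarray]                        # second acquisition module
    reconstruct: Callable[[Callable, np.ndarray], np.ndarray]   # reconstruction module

# toy wiring with stand-in callables
device = CTReconstructionDevice(
    load_training_set=lambda: [np.zeros((4, 4))],
    train=lambda samples: (lambda img: img),        # stand-in prediction model
    load_image=lambda: np.zeros((4, 4)),
    reconstruct=lambda model, img: model(img),
)
print(device.reconstruct(device.train(device.load_training_set()), device.load_image()).shape)
```
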
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-7.
CN202210265584.XA 2022-03-17 2022-03-17 CT image reconstruction method, device, equipment and storage medium Pending CN114708345A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210265584.XA CN114708345A (en) 2022-03-17 2022-03-17 CT image reconstruction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210265584.XA CN114708345A (en) 2022-03-17 2022-03-17 CT image reconstruction method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114708345A (en) 2022-07-05

Family

ID=82169175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210265584.XA Pending CN114708345A (en) 2022-03-17 2022-03-17 CT image reconstruction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114708345A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117611750A (en) * 2023-12-05 2024-02-27 北京思博慧医科技有限公司 Method and device for constructing three-dimensional imaging model, electronic equipment and storage medium
CN117541481A (en) * 2024-01-09 2024-02-09 广东海洋大学 Low-dose CT image restoration method, system and storage medium
CN117541481B (en) * 2024-01-09 2024-04-05 广东海洋大学 Low-dose CT image restoration method, system and storage medium

Similar Documents

Publication Publication Date Title
CN107516330B (en) Model generation method, image processing method and medical imaging equipment
Wu et al. Computationally efficient deep neural network for computed tomography image reconstruction
Burger et al. Total variation regularization in measurement and image space for PET reconstruction
CN114708345A (en) CT image reconstruction method, device, equipment and storage medium
Ding et al. Low-dose CT with deep learning regularization via proximal forward–backward splitting
CN107595312B (en) Model generation method, image processing method and medical imaging equipment
CN113409430B (en) Drivable three-dimensional character generation method, drivable three-dimensional character generation device, electronic equipment and storage medium
CN110807821A (en) Image reconstruction method and system
CN111652863A (en) Medical image detection method, device, equipment and storage medium
CN114092673B (en) Image processing method and device, electronic equipment and storage medium
CN117011673A (en) Electrical impedance tomography image reconstruction method and device based on noise diffusion learning
CN114820861A (en) MR synthetic CT method, equipment and computer readable storage medium based on cycleGAN
Li et al. Learning non-local perfusion textures for high-quality computed tomography perfusion imaging
CN114492794A (en) Method, apparatus, device, medium and product for processing data
Zhang et al. CT image reconstruction algorithms: A comprehensive survey
WO2021253671A1 (en) Magnetic resonance cine imaging method and apparatus, and imaging device and storage medium
Landi et al. A limited memory BFGS method for a nonlinear inverse problem in digital breast tomosynthesis
CN112488949A (en) Low-dose PET image restoration method, system, equipment and medium
Wu et al. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle
CN117351299A (en) Image generation and model training method, device, equipment and storage medium
US20220351455A1 (en) Method of processing image, electronic device, and storage medium
Karimi et al. A hybrid stochastic-deterministic gradient descent algorithm for image reconstruction in cone-beam computed tomography
CN115375583A (en) PET parameter image enhancement method, device, equipment and storage medium
CN114998273A (en) Blood vessel image processing method and device, electronic equipment and storage medium
Zeng et al. Noise weighting with an exponent for transmission CT

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination