CN112270725A - Image reconstruction and coding method in spectral tomography

Image reconstruction and coding method in spectral tomography

Info

Publication number
CN112270725A
Authority
CN
China
Prior art keywords
network
coding
sub
layer
decoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011018487.8A
Other languages
Chinese (zh)
Inventor
仇飞
颜森林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Xiaozhuang University
Original Assignee
Nanjing Xiaozhuang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Xiaozhuang University
Priority to CN202011018487.8A
Publication of CN112270725A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06T 11/005 Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image reconstruction and encoding method in spectral tomography, which comprises the following steps: acquiring a spectral tomography image signal X as training data, preprocessing the gray level of the data and corrupting the signal to obtain X1; constructing the coding sub-network of a sparse denoising self-coding network, wherein the coding sub-network is a three-layer fully-connected neural network and the spectral tomography image signal X is passed through the coding sub-network to obtain a measured value y; constructing the decoding sub-network of the sparse denoising self-coding network, wherein the decoding sub-network is a three-layer fully-connected neural network structurally symmetric to the coding sub-network and the measured value y is decoded by the decoding sub-network to obtain a reconstructed picture X2; introducing a sparsity constraint to construct the loss function; and jointly training the coding and decoding sub-networks, optimizing the loss function through a back-propagation algorithm, and updating the parameters to obtain the optimal sparse denoising self-coding network. The method provided by the invention improves the quality of the reconstructed picture, greatly reduces the reconstruction time, and greatly reduces the computational workload and computer memory usage.

Description

Image reconstruction and coding method in spectral tomography
Technical Field
The invention relates to the field of image processing, in particular to an image reconstruction and encoding method in spectral tomography.
Background
Spectral computed tomography (CT), one of the most important innovations in medical imaging in recent decades, provides rapid and efficient analysis and lesion identification through 3D visual inspection of patient tissues and organs. With the development of CT technology, cone-beam CT (CBCT) has received increasing attention for radiographic imaging and therapy due to its high spatial resolution, ease of operation, and relatively low acquisition cost. Despite the high quality of on-board volume imaging, there is concern that too much X-ray radiation is delivered to the patient, far more than in conventional diagnostic X-ray examinations. It is therefore desirable that CT scanning and imaging procedures minimize the X-ray dose while maintaining CT image quality. Reducing the number of X-ray projections in a CT scan directly reduces the radiation dose, but results in insufficiently sampled data. It has been found that when FDK-type algorithms are applied to under-sampled projections, the quality of the reconstructed image drops sharply, because the information in the Fourier domain is incomplete. In contrast, another class of iterative reconstruction algorithms can provide better imaging quality from undersampled measurement data. Such algorithms solve the linear matrix equation of the CT measurement data by computational optimization of a specified cost function. Since the measurement data and the measurement matrix have large dimensions, the iterative algorithm performs repeated large-dimensional matrix multiplications to compute the optimal reconstruction, which requires a significant amount of computation and a large amount of computer memory to store the matrix data.
Disclosure of Invention
The main objective of the present invention is to overcome the above-mentioned drawbacks of the prior art and to provide an image reconstruction and encoding method that improves the quality of the reconstructed image and reduces memory usage.
The invention adopts the following technical scheme:
an image reconstruction and encoding method in spectral tomography, characterized in that,
step S1: acquiring a spectral tomography image signal X as training data, preprocessing the gray level of the data and completing the corrosion of the signal to obtain X1;
step S2: constructing a coding sub-network of a self-coding denoising sparse network, wherein the coding sub-network is a three-layer fully-connected neural network, and the spectral tomography image signal X obtains a measured value y through the coding sub-network;
step S3: a decoding sub-network of the self-coding denoising sparse network is built, the decoding sub-network is a three-layer fully-connected neural network which is symmetrical to the coding sub-network in structure, and a measured value y obtains a reconstructed picture X2 through the decoding sub-network;
step S4: introducing sparsity limit to generate a loss function;
step S5: and performing joint training on the coding and decoding sub-networks, optimizing the loss function through a back propagation algorithm, updating parameters and obtaining an optimal sparse denoising self-coding network.
Preferably, in step S1, gray-level processing is performed on the image signal X, and Gaussian white noise with a certain probability distribution is added to the signal to obtain the corrupted signal X1 = X + λn, where n represents additive Gaussian sampling noise with zero mean and unit variance, and λ represents the signal corruption strength.
Preferably, in step S2, the coding sub-network Te(·) of the sparse denoising self-coding network is established and the measured value y is obtained. The coding sub-network is a three-layer fully-connected neural network consisting of an input layer, a hidden layer and an output layer. With the corrupted signal X1 = X + λn as input data, the hidden-layer feature vector is expressed as:
a1=f(W1X1+b1)
the output layer output, i.e. the measured value y, is expressed as:
y=f(W2a1+b2)
where Wl and bl denote the weight matrix and bias vector between layer l and layer l+1, and f(·) denotes the sigmoid activation function. Regarding the three-layer network as a whole gives the coding sub-network Te(·), and the coding process is:
y=Te(X1,Ωe)
where Ωe = {W1, W2, b1, b2} denotes the set of all parameters of the encoding process, and Te denotes the encoding sub-network.
Preferably, in step S3, the decoding sub-network Td(·) of the sparse denoising self-coding network is established, and the reconstructed picture is recovered from the measured value y. The decoding sub-network is a three-layer fully-connected neural network structurally symmetric to the coding sub-network, consisting of an input layer, a hidden layer and an output layer. With the measured value y as input data, the hidden-layer feature vector is expressed as:
a3=f(W3y+b3)
the output layer output, i.e. the reconstructed picture, is represented as:
X2=f(W4a3+b4)
where Wl and bl denote the weight matrix and bias vector between layer l and layer l+1, and f(·) denotes the sigmoid activation function;
the three-layer network is regarded as a whole to obtain a decoding sub-network Td (), and the decoding process is as follows:
X2=Td(y,Ωd)
where Ωd = {W3, W4, b3, b4} denotes the set of all parameters of the decoding process, and Td denotes the decoding sub-network.
Preferably, in step S4, in order to reduce the error between the reconstructed picture and the original picture, the mean square error is used as the loss function, and a sparsity constraint is introduced to improve network performance:
L = (1/N) Σ_{i=1}^{N} ||X2_i - X_i||² + β Σ_j [ ρ log(ρ/ρ1_j) + (1 - ρ) log((1 - ρ)/(1 - ρ1_j)) ]
where the first term is the mean square error, N denotes the number of training samples, X2_i denotes the i-th reconstructed picture, and X_i denotes the i-th original picture; the second term is the sparsity limit, ρ1_j denotes the average activation of hidden neuron j over the training set, ρ is the expected activation, and β is the sparsity penalty parameter.
Preferably, in the step S5, joint training is performed on the coding and decoding sub-networks, the loss function is optimized through a back propagation algorithm, and the parameters are updated to minimize the loss function, so as to obtain the optimal sparse denoising self-coding network.
As can be seen from the above description of the present invention, compared with the prior art, the present invention has the following advantages:
Firstly, a spectral tomography image signal X is acquired as training data, the gray level of the data is preprocessed and the signal is corrupted to obtain X1; the coding sub-network of a sparse denoising self-coding network is constructed, the coding sub-network being a three-layer fully-connected neural network through which the spectral tomography image signal X is passed to obtain a measured value y; the decoding sub-network of the sparse denoising self-coding network is constructed, the decoding sub-network being a three-layer fully-connected neural network structurally symmetric to the coding sub-network, through which the measured value y is decoded to obtain a reconstructed picture X2; a sparsity constraint is introduced to construct the loss function; the coding and decoding sub-networks are jointly trained, the loss function is optimized through a back-propagation algorithm, and the parameters are updated to obtain the optimal sparse denoising self-coding network. The method provided by the invention improves the quality of the reconstructed picture, greatly reduces the reconstruction time, and greatly reduces the computational workload and computer memory usage.
Drawings
FIG. 1 is a flow chart of the method according to a preferred embodiment of the present invention;
FIG. 2 is a structural diagram of the self-coding network according to an embodiment of the present invention;
FIG. 3 shows the results of experimental verification of the present invention: (a) the original image; (b) the image reconstructed by the ASD-POCS algorithm; (c) the image reconstructed by the FDK algorithm; (d) the image reconstructed by the method of the present invention.
Detailed Description
The invention is further described below by means of specific embodiments.
The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. Likewise, the use of the terms "comprising" or "including" and the like, mean that the element or item presented before the term covers the element or item listed after the term and its equivalents, but not the exclusion of other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
Flow charts are used in this disclosure to illustrate the steps of methods according to embodiments of the disclosure. It should be understood that the steps are not necessarily performed exactly in the order shown; rather, various steps may be processed in reverse order or simultaneously. Other operations may also be added to these processes, or one or more steps may be removed from them.
The technical scheme for solving the technical problems is as follows:
FIG. 1 is a general flow chart of the image reconstruction and encoding method in spectral tomography according to the present invention. The method is described in detail below with reference to the accompanying drawings and embodiments, and comprises the following steps:
Step S101: acquiring a spectral tomography image signal X as training data, preprocessing the gray level of the data and corrupting the signal to obtain X1.
In step S101, gray-level processing is performed on the image signal X, and Gaussian white noise with a certain probability distribution is added to the signal to obtain the corrupted signal X1 = X + λn, where n represents additive Gaussian sampling noise with zero mean and unit variance, and λ represents the signal corruption strength.
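A minimal sketch of this corruption step is given below, assuming the image has already been converted to a normalized gray-scale vector; the corruption strength λ and the image size are hypothetical values chosen only for illustration.

```python
import torch

def corrupt(X: torch.Tensor, lambda_: float = 0.1) -> torch.Tensor:
    """Return X1 = X + lambda * n, where n is zero-mean, unit-variance Gaussian noise."""
    n = torch.randn_like(X)        # additive Gaussian sampling noise
    return X + lambda_ * n

# Example: a batch of 16 flattened gray-scale slices of 64 x 64 pixels (hypothetical size)
X = torch.rand(16, 4096)
X1 = corrupt(X, lambda_=0.1)       # corrupted training input
```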
Step S102: constructing the coding sub-network of the sparse denoising self-coding network, wherein the coding sub-network is a three-layer fully-connected neural network, and the spectral tomography image signal X is passed through the coding sub-network to obtain a measured value y.
In step S102, the coding sub-network Te(·) of the sparse denoising self-coding network is established and the measured value y is obtained. The coding sub-network is a three-layer fully-connected neural network consisting of an input layer, a hidden layer and an output layer. With the corrupted signal X1 = X + λn as input data, the hidden-layer feature vector is expressed as:
a1=f(W1X1+b1)
the output layer output, i.e. the measured value y, is expressed as:
y=f(W2a1+b2)
where Wl and bl denote the weight matrix and bias vector between layer l and layer l+1, and f(·) denotes the sigmoid activation function. Regarding the three-layer network as a whole gives the coding sub-network Te(·), and the coding process is:
y=Te(X1,Ωe)
where Ωe = {W1, W2, b1, b2} denotes the set of all parameters of the encoding process, and Te denotes the encoding sub-network.
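The following sketch illustrates one possible realization of the coding sub-network Te(·) described above (input layer, hidden layer and output layer with sigmoid activations); the layer widths are assumptions, since the patent does not fix them.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Three-layer fully-connected coding sub-network Te(.) (widths are illustrative)."""
    def __init__(self, n_in: int = 4096, n_hidden: int = 1024, n_measure: int = 256):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden)        # W1, b1
        self.fc2 = nn.Linear(n_hidden, n_measure)   # W2, b2

    def forward(self, X1: torch.Tensor) -> torch.Tensor:
        a1 = torch.sigmoid(self.fc1(X1))   # a1 = f(W1*X1 + b1)
        y = torch.sigmoid(self.fc2(a1))    # y  = f(W2*a1 + b2), the measured value
        return y
```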
Step S103: constructing the decoding sub-network of the sparse denoising self-coding network, wherein the decoding sub-network is a three-layer fully-connected neural network structurally symmetric to the coding sub-network, and the measured value y is decoded by the decoding sub-network to obtain a reconstructed picture X2.
In step S103, the decoding sub-network Td(·) of the sparse denoising self-coding network is established, and the reconstructed picture is recovered from the measured value y. The decoding sub-network is a three-layer fully-connected neural network structurally symmetric to the coding sub-network, consisting of an input layer, a hidden layer and an output layer. With the measured value y as input data, the hidden-layer feature vector is expressed as:
a3=f(W3y+b3)
the output layer output, i.e. the reconstructed picture, is represented as:
X2=f(W4a3+b4)
where Wl and bl denote the weight matrix and bias vector between layer l and layer l+1, and f(·) denotes the sigmoid activation function;
the three-layer network is regarded as a whole to obtain a decoding sub-network Td (), and the decoding process is as follows:
X2=Td(y,Ωd)
where Ωd = {W3, W4, b3, b4} denotes the set of all parameters of the decoding process, and Td denotes the decoding sub-network.
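A corresponding sketch of the decoding sub-network Td(·) is given below; it mirrors the hypothetical layer widths used in the encoder sketch above and maps the measured value y back to a reconstructed picture X2.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Three-layer fully-connected decoding sub-network Td(.), symmetric to the encoder."""
    def __init__(self, n_measure: int = 256, n_hidden: int = 1024, n_out: int = 4096):
        super().__init__()
        self.fc3 = nn.Linear(n_measure, n_hidden)   # W3, b3
        self.fc4 = nn.Linear(n_hidden, n_out)       # W4, b4

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        a3 = torch.sigmoid(self.fc3(y))     # a3 = f(W3*y + b3)
        X2 = torch.sigmoid(self.fc4(a3))    # X2 = f(W4*a3 + b4), the reconstructed picture
        return X2
```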
Step S104: introducing a sparsity constraint to construct the loss function.
In step S104, in order to reduce the error between the reconstructed picture and the original picture, the mean square error is used as the loss function, and a sparsity constraint is introduced to improve network performance:
L = (1/N) Σ_{i=1}^{N} ||X2_i - X_i||² + β Σ_j [ ρ log(ρ/ρ1_j) + (1 - ρ) log((1 - ρ)/(1 - ρ1_j)) ]
where the first term is the mean square error, N denotes the number of training samples, X2_i denotes the i-th reconstructed picture, and X_i denotes the i-th original picture; the second term is the sparsity limit, ρ1_j denotes the average activation of hidden neuron j over the training set, ρ is the expected activation, and β is the sparsity penalty parameter.
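A minimal sketch of this loss is shown below: the mean square error between the reconstruction and the original picture plus a KL-divergence penalty on the average hidden activation. Writing the sparsity limit as a KL divergence, and the values of ρ and β, are assumptions consistent with the ρ, ρ1_j and β notation above.

```python
import torch

def sparse_denoising_loss(X2, X, a1, rho: float = 0.05, beta: float = 1e-3):
    # first term: mean square error over the N training samples
    mse = torch.mean(torch.sum((X2 - X) ** 2, dim=1))
    # rho1_j: average activation of hidden neuron j over the batch
    rho1 = a1.mean(dim=0).clamp(1e-6, 1 - 1e-6)
    # second term: sparsity limit, KL(rho || rho1_j) summed over the hidden units
    kl = rho * torch.log(rho / rho1) + (1 - rho) * torch.log((1 - rho) / (1 - rho1))
    return mse + beta * kl.sum()
```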
Step S105: jointly training the coding and decoding sub-networks, optimizing the loss function through a back-propagation algorithm, and updating the parameters to obtain the optimal sparse denoising self-coding network.
In step S105, the coding and decoding sub-networks are jointly trained, the loss function is optimized through a back-propagation algorithm, and the parameters are updated to minimize the loss function, thereby obtaining the optimal sparse denoising self-coding network.
The specific process of this step is as follows:
1. Randomly initialize the parameters Ωe and Ωd, which denote all the parameter sets of the encoding and decoding processes respectively;
2. Compute the activation values of each layer using the forward-propagation formulas:
y=Te(X1,Ωe)
X2=Td(y,Ωd)
where y denotes the measured value, X1 the corrupted picture, X2 the reconstructed picture, Te the encoding sub-network, and Td the decoding sub-network.
3. Compute the residual term of the i-th neuron node of the l-th layer:

δ_i^(l) = ( Σ_j W_ji^(l) δ_j^(l+1) ) · f'(z_i^(l))

where f'(z_i^(l)) denotes the derivative of the hidden-layer activation value z_i^(l) of node i in layer l, and W_ji^(l) denotes the weight connecting neuron j and neuron i.
4. Compute the partial derivatives of the loss function with respect to the weights and biases;
5. Update the parameters to minimize the loss function, thereby obtaining the optimal sparse denoising self-coding network.
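The joint-training procedure can be sketched as follows, reusing the corrupt, Encoder, Decoder and sparse_denoising_loss sketches above. The optimizer choice, learning rate and number of epochs are assumptions; the patent only specifies that the loss is minimized by back-propagation.

```python
import torch

encoder, decoder = Encoder(), Decoder()
params = list(encoder.parameters()) + list(decoder.parameters())   # Omega_e and Omega_d
optimizer = torch.optim.Adam(params, lr=1e-3)                      # assumed optimizer

for epoch in range(100):
    X1 = corrupt(X, lambda_=0.1)             # corrupt the training signal
    a1 = torch.sigmoid(encoder.fc1(X1))      # hidden activations, needed for the sparsity term
    y = torch.sigmoid(encoder.fc2(a1))       # y  = Te(X1, Omega_e)
    X2 = decoder(y)                          # X2 = Td(y, Omega_d)
    loss = sparse_denoising_loss(X2, X, a1)
    optimizer.zero_grad()
    loss.backward()                          # back-propagate residuals through both sub-networks
    optimizer.step()                         # update parameters to minimize the loss
```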
The following is a detailed description based on specific experimental data.
TABLE 1 comparison of reconstruction indices for three methods at the same iteration number
Here, NMSE is the normalized mean square error, PSNR is the peak signal-to-noise ratio, and SSIM is the structural similarity; PSNR and SSIM are used as image-quality evaluation indices, and larger values indicate higher image quality.
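For reference, one common definition of the NMSE and PSNR indices is sketched below with hypothetical NumPy arrays; SSIM is usually computed with a library implementation and is omitted here.

```python
import numpy as np

def nmse(x_ref: np.ndarray, x_rec: np.ndarray) -> float:
    """Normalized mean square error between the reference image and the reconstruction."""
    return float(np.sum((x_rec - x_ref) ** 2) / np.sum(x_ref ** 2))

def psnr(x_ref: np.ndarray, x_rec: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images normalized to data_range."""
    mse = np.mean((x_rec - x_ref) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))
```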
The number of iterations of the ASD-POCS and FDK algorithms was set to be the same as that of the method of the present invention in order to compare their convergence performance. Since reconstruction performance decreases with a decreasing number of projections, the worst-case reconstruction performance of these algorithms at 60 projections was evaluated. The NMSE, PSNR and SSIM values of the three algorithms are shown in Table 1. The results show that the method of the present invention has the lowest NMSE and the highest PSNR and SSIM values, while its reconstruction time is the same as that of the other algorithms.
TABLE 2 comparison of reconstruction indices under NMSE for three methods
Further experiments were performed to evaluate the reconstruction speed of the proposed method compared with the ASD-POCS and FDK algorithms. The three algorithms were set to terminate at the same NMSE value of 0.0134, and their numbers of iterations and computation times were recorded, as shown in Table 2. It can be seen that the proposed method requires the fewest iterations and the least computation time among the three algorithms.
FIG. 3 shows the experimentally verified reconstruction results: (a) the original image; (b) the image reconstructed by the ASD-POCS algorithm; (c) the image reconstructed by the FDK algorithm; (d) the image reconstructed by the method of the present invention. Compared with the method proposed by the present invention, the results of ASD-POCS and FDK show blurred details. In the reconstruction results of the present invention, no significant artifacts caused by sparse sampling are observed in the boundary slices. These results show that the algorithm proposed by the present invention can effectively suppress noise and retain structural information in the reconstructed image.
The ASD-POCS and FDK algorithms could improve their reconstruction quality through further iterations at the cost of additional computation time, whereas the proposed method requires less computation time to produce the same reconstruction quality. Therefore, the proposed method can perform faster reconstruction.
In summary, the invention provides an image reconstruction and coding method in spectral tomography, which comprises the following steps: firstly, acquiring a spectral tomography image signal X as training data, preprocessing the gray level of the data and corrupting the signal to obtain X1; constructing the coding sub-network of a sparse denoising self-coding network, wherein the coding sub-network is a three-layer fully-connected neural network and the spectral tomography image signal X is passed through the coding sub-network to obtain a measured value y; constructing the decoding sub-network of the sparse denoising self-coding network, wherein the decoding sub-network is a three-layer fully-connected neural network structurally symmetric to the coding sub-network and the measured value y is decoded by the decoding sub-network to obtain a reconstructed picture X2; introducing a sparsity constraint to construct the loss function; and jointly training the coding and decoding sub-networks, optimizing the loss function through a back-propagation algorithm, and updating the parameters to obtain the optimal sparse denoising self-coding network. The method provided by the invention improves the quality of the reconstructed picture, greatly reduces the reconstruction time, and greatly reduces the computational workload and computer memory usage.
The foregoing is illustrative of the present invention and is not to be construed as limiting thereof. Although a few exemplary embodiments of this invention have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of this invention as defined in the appended claims. The invention is defined by the claims and their equivalents.

Claims (6)

1. An image reconstruction and encoding method in spectral tomography, comprising the steps of:
step S1: acquiring a spectral tomography image signal X as training data, preprocessing the gray level of the data and corrupting the signal to obtain X1;
step S2: constructing the coding sub-network of a sparse denoising self-coding network, wherein the coding sub-network is a three-layer fully-connected neural network, and the spectral tomography image signal X is passed through the coding sub-network to obtain a measured value y;
step S3: constructing the decoding sub-network of the sparse denoising self-coding network, wherein the decoding sub-network is a three-layer fully-connected neural network structurally symmetric to the coding sub-network, and the measured value y is decoded by the decoding sub-network to obtain a reconstructed picture X2;
step S4: introducing a sparsity constraint to construct the loss function; and
step S5: jointly training the coding and decoding sub-networks, optimizing the loss function through a back-propagation algorithm, and updating the parameters to obtain the optimal sparse denoising self-coding network.
2. The image reconstruction and encoding method in spectral tomography according to claim 1, wherein in step S1, gray-level processing is performed on the image signal X, and Gaussian white noise with a certain probability distribution is added to the signal to obtain the corrupted signal X1 = X + λn, where n represents additive Gaussian sampling noise with zero mean and unit variance, and λ represents the signal corruption strength.
3. The image reconstruction and encoding method in spectral tomography according to claim 2, wherein in step S2, the coding sub-network Te(·) of the sparse denoising self-coding network is established and the measured value y is obtained, the coding sub-network being a three-layer fully-connected neural network consisting of an input layer, a hidden layer and an output layer; with the corrupted signal X1 = X + λn as input data, the hidden-layer feature vector is expressed as:
a1=f(W1X1+b1)
the output layer output, i.e. the measured value y, is expressed as:
y=f(W2a1+b2)
where Wl and bl denote the weight matrix and bias vector between layer l and layer l+1, and f(·) denotes the sigmoid activation function; regarding the three-layer network as a whole gives the coding sub-network Te(·), and the coding process is:
y=Te(X1,Ωe)
where Ωe = {W1, W2, b1, b2} denotes the set of all parameters of the encoding process, and Te denotes the encoding sub-network.
4. The image reconstruction and encoding method in spectral tomography according to claim 3, wherein in step S3, the decoding sub-network Td(·) of the sparse denoising self-coding network is established and the reconstructed picture is recovered from the measured value y, the decoding sub-network being a three-layer fully-connected neural network structurally symmetric to the coding sub-network, consisting of an input layer, a hidden layer and an output layer; with the measured value y as input data, the hidden-layer feature vector is expressed as:
a3=f(W3y+b3)
the output layer output, i.e. the reconstructed picture, is represented as:
X2=f(W4a3+b4)
where Wl and bl denote the weight matrix and bias vector between layer l and layer l+1, and f(·) denotes the sigmoid activation function;
the three-layer network is regarded as a whole to obtain a decoding sub-network Td (), and the decoding process is as follows:
X2=Td(y,Ωd)
where Ωd = {W3, W4, b3, b4} denotes the set of all parameters of the decoding process, and Td denotes the decoding sub-network.
5. The image reconstructing and encoding method in spectral tomography according to claim 4, wherein said step S4 employs mean square error as a loss function for reducing the error between the reconstructed picture and the original picture, and introduces sparsity constraint to improve network performance:
L = (1/N) Σ_{i=1}^{N} ||X2_i - X_i||² + β Σ_j [ ρ log(ρ/ρ1_j) + (1 - ρ) log((1 - ρ)/(1 - ρ1_j)) ]
where the first term is the mean square error, N denotes the number of training samples, X2_i denotes the i-th reconstructed picture, and X_i denotes the i-th original picture; the second term is the sparsity limit, ρ1_j denotes the average activation of hidden neuron j over the training set, ρ is the expected activation, and β is the sparsity penalty parameter.
6. The method for image reconstruction and encoding in spectral tomography according to claim 5, wherein said step S5 performs joint training on the encoding and decoding sub-networks, optimizes the loss function through a back propagation algorithm, and updates the parameters to minimize the loss function, thereby obtaining the optimal sparse denoising self-encoding network.
Application CN202011018487.8A, priority date 2020-09-24, filing date 2020-09-24: Image reconstruction and coding method in spectral tomography. Published as CN112270725A (pending).

Priority Applications (1)

Application CN202011018487.8A, priority date 2020-09-24, filing date 2020-09-24: Image reconstruction and coding method in spectral tomography (published as CN112270725A)

Applications Claiming Priority (1)

Application CN202011018487.8A, priority date 2020-09-24, filing date 2020-09-24: Image reconstruction and coding method in spectral tomography (published as CN112270725A)

Publications (1)

CN112270725A, published 2021-01-26

Family

ID=74349912

Family Applications (1)

Application CN202011018487.8A, priority date 2020-09-24, filing date 2020-09-24: Image reconstruction and coding method in spectral tomography (publication CN112270725A, pending)

Country Status (1)

Country Link
CN (1): CN112270725A

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762967A (en) * 2021-03-31 2021-12-07 北京沃东天骏信息技术有限公司 Risk information determination method, model training method, device, and program product

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919864A (en) * 2019-02-20 2019-06-21 重庆邮电大学 A kind of compression of images cognitive method based on sparse denoising autoencoder network

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2021-01-26)