CN112488951B - Training method of low-dose image denoising network and denoising method of low-dose image - Google Patents

Training method of low-dose image denoising network and denoising method of low-dose image

Info

Publication number
CN112488951B
Authority
CN
China
Prior art keywords
dose
image
low
network
standard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011430758.0A
Other languages
Chinese (zh)
Other versions
CN112488951A (en)
Inventor
郑海荣
胡战利
黄振兴
梁栋
刘新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202011430758.0A priority Critical patent/CN112488951B/en
Priority to PCT/CN2020/135188 priority patent/WO2022120694A1/en
Publication of CN112488951A publication Critical patent/CN112488951A/en
Application granted granted Critical
Publication of CN112488951B publication Critical patent/CN112488951B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10108Single photon emission computed tomography [SPECT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a training method of a low-dose image denoising network, a denoising method of a low-dose image, computer equipment and a storage medium.

Description

Training method of low-dose image denoising network and denoising method of low-dose image
Technical Field
The invention relates to the technical field of image reconstruction, in particular to a training method of a low-dose image denoising network, a denoising method of a low-dose image, computer equipment and a storage medium.
Background
Computed Tomography (CT) is an important imaging means for obtaining internal structural information of an object in a nondestructive manner. It has many advantages, such as high resolution, high sensitivity and multi-slice capability, is one of the medical imaging diagnostic devices with the largest installed base in China, and is widely applied in various fields of clinical examination. However, because X-rays must be used during CT scanning, the problem of CT radiation dose has received increasing attention as people become more aware of the potential hazards of radiation. The principle of As Low As Reasonably Achievable (ALARA) requires that the radiation dose to a patient be reduced as much as possible while still meeting the needs of clinical diagnosis. As the dose decreases, however, more noise appears in the imaging process and the imaging quality deteriorates. Developing new low-dose CT imaging methods that guarantee CT imaging quality while reducing harmful radiation dose therefore has important scientific significance and application prospects in the field of medical diagnosis. Because different anatomical regions may be scanned at different radiation doses, existing low-dose CT imaging methods, which are usually built around a single anatomical region, have poor robustness.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a training method of a low-dose image denoising network, a denoising method of a low-dose image, computer equipment and a storage medium. Dose level information is fused into the image reconstruction process, which improves the robustness of the denoising method and the quality of the reconstructed image.
The specific technical scheme provided by the invention is as follows: a training method of a low-dose image denoising network is provided, and comprises the following steps:
acquiring a training data set, wherein the training data set comprises a plurality of input parameter sets, and each input parameter set comprises a standard dose image and a low dose image;
establishing a training network, wherein the training network comprises a low-dose image denoising network and a low-dose image generating network;
inputting the training data set into the training network, and generating a dose level estimation value and a standard dose estimation image of the low-dose image by the low-dose image denoising network according to the low-dose image; the low-dose image generation network generates a low-dose estimation image according to the standard dose estimation image and the dose level estimation value;
constructing a loss function according to the low-dose image, the low-dose estimation image, the standard-dose image and the standard-dose estimation image;
and optimizing the loss function to obtain parameters of the low-dose image denoising network and update the low-dose image denoising network.
Further, the low-dose image denoising network comprises a first feature extraction module, a first downsampling module, a dose level generation module, a first fusion module, a first upsampling module and a first reconstruction module which are connected in sequence, wherein the dose level generation module is used for generating a dose level estimation value, and the first fusion module is used for fusing the dose level estimation value and feature information of the low-dose image.
Further, the low-dose image generation network comprises a second feature extraction module, a second downsampling module, a second fusion module, a second upsampling module and a second reconstruction module which are connected in sequence, wherein the second fusion module is used for fusing the dose level estimation value and the feature information of the standard dose estimation image.
Further, each of the input parameter sets further includes a dose level value corresponding to the low dose image; the training network further comprises a low-dose image discrimination network, and the low-dose image discrimination network is used for generating a dose grade predicted value according to the low-dose image and the low-dose estimation image; the constructing of the loss function according to the low-dose image, the low-dose estimation image, the standard-dose image and the standard-dose estimation image comprises:
and constructing a loss function according to the low-dose image, the low-dose estimation image, the standard-dose estimation image and the dose grade predicted value.
Further, the low-dose image discrimination network comprises a plurality of first numerical constraint layers, a first flatten layer and a first fully connected layer.
Further, the training network further comprises a standard dose image discrimination network, the low dose image discrimination network further comprises a second full connection layer, and the low dose image discrimination network is further configured to generate a first truth prediction value according to the low dose image and the low dose estimation image; the standard dose image discrimination network is used for generating a second truth prediction value according to the standard dose image and the standard dose estimation image; the constructing of the loss function according to the low-dose image, the low-dose estimation image, the standard-dose image and the standard-dose estimation image comprises:
and constructing a loss function according to the low-dose image, the low-dose estimation image, the standard dose estimation image, the dose grade prediction value, the first truth prediction value and the second truth prediction value.
Further, the standard dose image discrimination network comprises a plurality of second numerical constraint layers, a second flatten layer and a third fully connected layer.
The invention also provides a denoising method of the low-dose image, which comprises the following steps: and inputting the low-dose image to be denoised into the low-dose image denoising network obtained by the training method of the low-dose image denoising network to obtain the reconstructed low-dose image.
The invention also provides a computer device comprising a memory, a processor and a computer program stored on the memory, the processor executing the computer program to implement the training method as described in any one of the above.
The invention also provides a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement a training method as defined in any one of the above.
According to the training method of the low-dose image denoising network, the low-dose image denoising network can generate the dose level estimation value and the standard dose estimation image of the low-dose image according to the low-dose image, then generate the low-dose estimation image according to the standard dose estimation image and the dose level estimation value, finally construct the loss function according to the low-dose image, the low-dose estimation image, the standard dose image and the standard dose estimation image, optimize the loss function, obtain the parameters of the low-dose image denoising network, fuse dose level information into an image reconstruction process, and improve the robustness of the denoising method and the quality of the reconstructed image.
Drawings
The technical solution and other advantages of the present invention will become apparent from the following detailed description of specific embodiments of the present invention, which is to be read in connection with the accompanying drawings.
FIG. 1 is a flowchart of the training method of the low-dose image denoising network in the first embodiment of the present invention;
FIG. 2 is a schematic structural diagram of the training network in the first embodiment of the present invention;
FIG. 3 is a schematic structural diagram of the low-dose image denoising network in the first embodiment of the present invention;
FIG. 4 is a schematic structural diagram of the low-dose image generation network in the first embodiment of the present invention;
FIGS. 5a-5d are schematic diagrams of a standard dose image, a low dose image, a low dose estimation image and a standard dose estimation image in the first embodiment of the present invention;
FIG. 6 is a schematic structural diagram of the low-dose image discrimination network in the second embodiment of the present invention;
FIG. 7 is a schematic diagram of the detailed structure of the low-dose image discrimination network in the second embodiment of the present invention;
FIG. 8 is a schematic structural diagram of the standard dose image discrimination network in the third embodiment of the present invention;
FIG. 9 is a schematic diagram of the detailed structure of the standard dose image discrimination network in the third embodiment of the present invention;
FIG. 10 is a schematic structural diagram of the low-dose image discrimination network in the third embodiment of the present invention;
FIG. 11 is a schematic diagram of the detailed structure of the low-dose image discrimination network in the third embodiment of the present invention;
FIG. 12 is a schematic structural diagram of the training system of the low-dose image denoising network in the fourth embodiment of the present invention;
FIG. 13 is a schematic structural diagram of the computer device in the sixth embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the specific embodiments set forth herein. Rather, these embodiments are provided to explain the principles of the invention and its practical application to thereby enable others skilled in the art to understand the invention for various embodiments and with various modifications as are suited to the particular use contemplated. In the drawings, like reference numerals will be used to refer to like elements throughout.
The training method of the low-dose image denoising network provided by the invention comprises the following steps:
acquiring a training data set, wherein the training data set comprises a plurality of input parameter groups, and each input parameter group comprises a standard dose image and a low dose image;
establishing a training network, wherein the training network comprises a low-dose image denoising network and a low-dose image generating network;
inputting the training data set into a training network, and generating a dose grade estimation value of a low-dose image and a standard dose estimation image by a low-dose image denoising network according to the low-dose image; the low-dose image generation network generates a low-dose estimation image according to the standard dose estimation image and the dose level estimation value;
constructing a loss function according to the low-dose image, the low-dose estimation image, the standard-dose image and the standard-dose estimation image;
and optimizing the loss function to obtain parameters of the low-dose image denoising network and updating the low-dose image denoising network.
According to the training method of the low-dose image denoising network, the low-dose image denoising network can generate the dose level estimation value and the standard dose estimation image of the low-dose image according to the low-dose image, and then the low-dose estimation image is generated according to the standard dose estimation image and the dose level estimation value, so that the dose level information is fused into the image reconstruction process, and the robustness of the denoising method and the quality of the reconstructed image are improved.
In the following, a CT image is taken as an example, and the training method of a low-dose image denoising network, the denoising method of a low-dose image, the computer device and the storage medium of the present application are described in detail through several specific embodiments with reference to the accompanying drawings. It should be noted that the CT image is only an example and does not limit the application field of the present application; the present application may also be applied to other medical imaging fields such as PET and SPECT.
Example one
Referring to fig. 1, the training method of the low-dose image denoising network in the embodiment includes the steps of:
s1, acquiring a training data set, wherein the training data set comprises a plurality of input parameter sets, and each input parameter set comprises a standard dose image and a low dose image;
s2, establishing a training network, wherein the training network comprises a low-dose image denoising network and a low-dose image generating network;
s3, inputting the training data set into a training network, and generating a dose level estimation value and a standard dose estimation image of a low dose image by the low dose image denoising network according to the low dose image; the low-dose image generation network generates a low-dose estimation image according to the standard dose estimation image and the dose level estimation value;
s4, constructing a loss function according to the low-dose image, the low-dose estimation image, the standard-dose image and the standard-dose estimation image;
and S5, optimizing the loss function, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network.
Specifically, in step S1, the training data set in the present embodiment is:
D = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_n, y_n)},
where n denotes the number of input parameter sets in the training data set, x_i denotes the low-dose image in the i-th input parameter set, and y_i denotes the standard-dose image in the i-th input parameter set. The n low-dose images {x_1, x_2, ..., x_i, ..., x_n} may be low-dose CT images of different anatomical regions, or CT images of the same anatomical region at different dose levels; that is, the dose levels of the n low-dose images {x_1, x_2, ..., x_i, ..., x_n} are not all the same. Among the n low-dose images {x_1, x_2, ..., x_i, ..., x_n} and the n standard-dose images {y_1, y_2, ..., y_i, ..., y_n}, the images x_i and y_i with the same subscript i represent a low-dose CT image and a standard-dose CT image of the same anatomical region, or a low-dose CT image and a standard-dose CT image at the same dose level. The different anatomical regions may include the skull, orbit, paranasal sinuses, neck, chest cavity, abdomen, pelvic cavity (male), pelvic cavity (female), knee, lumbar spine and other regions.
It should be noted that the low dose images and the standard dose images in the training data set for training in the present embodiment are selected from sample data sets commonly used in the art, and are not limited herein.
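For illustration only, the paired structure of such a training set can be expressed in code. The following PyTorch sketch is an assumption about the data layout; the class name, array types and the optional per-image dose level label (used later in the second embodiment) are not prescribed by the invention:

```python
import torch
from torch.utils.data import Dataset

class LowDosePairs(Dataset):
    """Hypothetical wrapper pairing a low-dose slice x_i with its
    standard-dose counterpart y_i, plus an optional dose level label."""
    def __init__(self, low_dose, standard_dose, dose_levels=None):
        assert len(low_dose) == len(standard_dose)
        self.low = low_dose          # sequence of H x W arrays
        self.std = standard_dose
        self.levels = dose_levels    # optional per-image dose level index

    def __len__(self):
        return len(self.low)

    def __getitem__(self, i):
        x = torch.as_tensor(self.low[i], dtype=torch.float32).unsqueeze(0)
        y = torch.as_tensor(self.std[i], dtype=torch.float32).unsqueeze(0)
        if self.levels is None:
            return x, y
        return x, y, torch.tensor(self.levels[i], dtype=torch.long)
```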
Referring to fig. 2, the training network in this embodiment includes a low-dose image denoising network G1 and a low-dose image generating network G2. The training data set is input into a low-dose image denoising network G1, the low-dose image denoising network G1 generates a dose level estimation value and a standard dose estimation image of the low-dose image according to the low-dose image and outputs the dose level estimation value and the standard dose estimation image to a low-dose image generation network G2, and the low-dose image generation network G2 generates the low-dose estimation image according to the standard dose estimation image and the dose level estimation value.
Referring to fig. 3, the low-dose image denoising network G1 includes a first feature extraction module 11, a first downsampling module 12, a dose level generation module 13, a first fusion module 14, a first upsampling module 15, and a first reconstruction module 16, which are connected in sequence, where the dose level generation module 13 is configured to generate a dose level estimation value, and the first fusion module 14 is configured to fuse the dose level estimation value and feature information of a low-dose image.
As an example, the first feature extraction module 11 is a convolutional layer, the size of a convolutional kernel of the convolutional layer is 1 × 1 × 1, the number of channels of the convolutional kernel is 64, and the low-dose image can be mapped to 64 channels by the first feature extraction module 11.
The first downsampling module 12 includes a plurality of residual units 100 connected in sequence, the residual units 100 are configured to downsample a low-dose image, and two adjacent residual units 100 perform downsampling on the size of the low-dose image by 2 times in a maximum pooling manner, for example, the size of the low-dose image is 512 × 512, the downsampling performed by the first residual unit 100 obtains an image with a size of 256 × 256, the downsampling performed by the second residual unit 100 obtains an image with a size of 128 × 128, and so on. The number of residual error units 100 may be determined according to the actual size of the low-dose image and the size of the image to be obtained. Fig. 3 illustrates a case where the first down-sampling module 12 includes two residual error units 100.
Each residual unit 100 includes three convolutional layers 101, the size of the convolution kernel of the first convolutional layer 101 is 1 × 1 × 64, the number of channels of the convolution kernel is 64, the size of the convolution kernel of the second convolutional layer 101 is 3 × 3 × 64, the number of channels of the convolution kernel is 16, the size of the convolution kernel of the third convolutional layer 101 is 3 × 3 × 16, and the number of channels of the convolution kernel is 64. Each residual unit 100 further includes an activation function 102, and after the convolution operation is performed on the second convolution layer 101, the data after the convolution operation needs to be subjected to a non-linear processing through the activation function 102. The activation function 102 is a ReLU function.
Preferably, in order to avoid information loss caused by continuous multiple down-sampling, each residual unit 100 in this embodiment fuses the image after convolution operation of the third convolutional layer 101 with the image after convolution operation of the second convolutional layer 101.
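A minimal PyTorch sketch of one such residual unit follows, with channel counts taken from the description above. The text fuses the third layer's output with the second layer's output, but those maps have 64 and 16 channels respectively; the sketch therefore adds the third layer's 64-channel output to the first layer's 64-channel output, and this shape-compatible reading is an assumption:

```python
import torch.nn as nn

class ResidualUnit(nn.Module):
    """Sketch of the three-convolution residual unit (100/200/300/400):
    1x1 conv 64->64, 3x3 conv 64->16 followed by ReLU, 3x3 conv 16->64,
    with an in-unit skip connection."""
    def __init__(self, channels=64, bottleneck=16):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=1)
        self.conv2 = nn.Conv2d(channels, bottleneck, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        self.conv3 = nn.Conv2d(bottleneck, channels, kernel_size=3, padding=1)

    def forward(self, x):
        f1 = self.conv1(x)
        f2 = self.relu(self.conv2(f1))
        # Skip connection: added to the 64-channel map so shapes match
        # (an interpretation of the "fusion" described in the text).
        return self.conv3(f2) + f1
```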
The dose level generation module 13 includes a pooling layer 130, a pooling layer 131, and a convolutional layer 132. The pooling layer 130 and the pooling layer 131 are each connected between the first downsampling module 12 and the convolutional layer 132. The pooling layer 130 maps the image output by the first downsampling module 12 to 1 × 1 × 64 using average pooling, while the pooling layer 131 maps the image output by the first downsampling module 12 to 1 × 1 × 64 using maximum pooling. The convolution kernel size of the convolutional layer 132 is 1 × 1 × 128 and its number of channels is 5, where the number of channels of the convolution kernel of the convolutional layer 132 equals the number of set dose levels. In this embodiment the number of dose levels is set to 5 as an example, as shown in the following table:
Table 1 Dose rating table

Scanning current (mA)    Dose rating
0~30                     Grade one
30~130                   Grade two
130~230                  Grade three
230~330                  Grade four
≥330                     Grade five
The convolutional layer 132 maps the images output from the pooling layers 130 and 131 to 1 × 1 × 5. For example, an output of {0, 1, 0, 0, 0} from the convolutional layer 132 indicates that the dose level estimation value is grade two, i.e., the image is a CT image obtained at a scanning current of 30 to 130 mA.
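The pooling-and-classification path just described can be sketched as follows. Mapping both pooled vectors to 1 × 1 × 64 and concatenating them before the 5-channel 1 × 1 convolution follows the description; the names, and returning raw logits rather than the one-hot vector of the example above, are assumptions:

```python
import torch
import torch.nn as nn

class DoseLevelModule(nn.Module):
    """Sketch of dose level generation (module 13): global average and
    global max pooling each reduce the feature map to 1x1x64, the results
    are concatenated to 128 channels, and a 1x1 conv emits one value per
    dose level (5 levels in this embodiment)."""
    def __init__(self, channels=64, num_levels=5):
        super().__init__()
        self.avg = nn.AdaptiveAvgPool2d(1)
        self.max = nn.AdaptiveMaxPool2d(1)
        self.conv = nn.Conv2d(2 * channels, num_levels, kernel_size=1)

    def forward(self, feat):                      # feat: N x 64 x H x W
        pooled = torch.cat([self.avg(feat), self.max(feat)], dim=1)
        return self.conv(pooled)                  # N x 5 x 1 x 1 dose logits
```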
The first fusion module 14 includes a convolutional layer 140 and an activation function (not shown). The size of the convolution kernel of the convolutional layer 140 is 1 × 1 × 5 and the number of channels is 64. After the convolution operation of the convolutional layer 140, the activation function performs non-linear processing on the convolved data to generate a weight mask with 64 channels; the activation function in this embodiment is a Sigmoid function. The first fusion module 14 then performs a point-wise multiplication of the 64-channel weight mask with the input image of the last residual unit 100 in the first downsampling module 12, thereby fusing the dose level information into the image reconstruction process.
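A sketch of this gating step, under the same naming assumptions as above (the Sigmoid mask multiplies the encoder features element-wise, broadcast over the spatial dimensions):

```python
import torch.nn as nn

class DoseFusionModule(nn.Module):
    """Sketch of the fusion step (modules 14/23): a 1x1 conv lifts the
    5-way dose level vector to 64 channels, a Sigmoid turns it into a
    weight mask, and the mask is point-multiplied with the features."""
    def __init__(self, num_levels=5, channels=64):
        super().__init__()
        self.conv = nn.Conv2d(num_levels, channels, kernel_size=1)
        self.gate = nn.Sigmoid()

    def forward(self, feat, dose_logits):         # feat: N x 64 x H x W
        mask = self.gate(self.conv(dose_logits))  # N x 64 x 1 x 1
        return feat * mask                        # broadcast point-multiply
```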
The first upsampling module 15 includes a plurality of residual error units 200 connected in sequence, the plurality of residual error units 200 are configured to upsample the image output by the first fusion module 14, two adjacent residual error units 200 perform 2 times upsampling on the image size by means of bicubic interpolation, for example, the image output by the first fusion module 14 has a size of 2 × 2, the upsampling is performed by the first residual error unit 200 to obtain an image with a size of 4 × 4, the upsampling is performed by the second residual error unit 200 to obtain an image with a size of 8 × 8, and so on. The number of residual error units 200 may be determined according to the size of the image output by the first fusion module 14 and the size of the image to be obtained. Fig. 3 shows an exemplary case where the first upsampling module 15 comprises two residual units 200.
Each residual unit 200 includes three convolutional layers 201, the size of the convolution kernel of the first convolutional layer 201 is 1 × 1 × 64, the number of channels of the convolution kernel is 64, the size of the convolution kernel of the second convolutional layer 201 is 3 × 3 × 64, the number of channels of the convolution kernel is 16, the size of the convolution kernel of the third convolutional layer 201 is 3 × 3 × 16, and the number of channels of the convolution kernel is 64. Each residual unit 200 further includes an activation function 202, and after the convolution operation is performed on the second convolutional layer 201, the convolved data needs to be subjected to non-linear processing through the activation function 202. The activation function 202 is a ReLU function.
Preferably, in order to avoid information loss, each residual unit 200 in this embodiment fuses the image after the convolution operation of the third convolutional layer 201 with the image after the convolution operation of the second convolutional layer 201.
Further, in order to avoid information loss caused by multiple consecutive up-sampling operations following multiple consecutive down-sampling operations, each residual unit 200 in this embodiment fuses the image after the convolution operation of the third convolutional layer 201 with the input image of the corresponding residual unit 100 in the first downsampling module 12.
The first reconstruction module 16 is a convolution layer, the size of convolution kernel of the convolution layer is 1 × 1 × 64, the number of channels of the convolution kernel is 1, and the standard dose estimation image is reconstructed and generated by the first reconstruction module 16.
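Putting the pieces together, a heavily reduced sketch of G1's forward pass might look as follows. It reuses the unit sketches defined above, fixes two downsampling and two upsampling stages as in FIG. 3, and treats the exact placement of max pooling, bicubic upsampling and skip fusion as assumptions:

```python
import torch.nn as nn
import torch.nn.functional as F

class DenoisingNetG1(nn.Module):
    """Reduced sketch of the low-dose image denoising network G1."""
    def __init__(self, num_levels=5):
        super().__init__()
        self.extract = nn.Conv2d(1, 64, kernel_size=1)      # module 11
        self.down = nn.ModuleList([ResidualUnit(), ResidualUnit()])
        self.dose = DoseLevelModule(64, num_levels)         # module 13
        self.fuse = DoseFusionModule(num_levels, 64)        # module 14
        self.up = nn.ModuleList([ResidualUnit(), ResidualUnit()])
        self.reconstruct = nn.Conv2d(64, 1, kernel_size=1)  # module 16

    def forward(self, x):                                   # x: N x 1 x H x W
        f = self.extract(x)
        skips = []
        for unit in self.down:                              # 2x max pooling
            skips.append(f)
            f = F.max_pool2d(unit(f), 2)
        dose_logits = self.dose(f)                          # N x 5 x 1 x 1
        f = self.fuse(f, dose_logits)
        for unit, skip in zip(self.up, reversed(skips)):    # 2x bicubic upsampling
            f = F.interpolate(unit(f), scale_factor=2,
                              mode='bicubic', align_corners=False)
            f = f + skip                                    # fuse encoder feature
        return self.reconstruct(f), dose_logits
```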
Referring to fig. 4, the low-dose image generation network G2 includes a second feature extraction module 21, a second downsampling module 22, a second fusion module 23, a second upsampling module 24, and a second reconstruction module 25, which are connected in sequence, where the second fusion module 23 is configured to fuse the dose level estimation value output by the dose level generation module 13 with the feature information of the standard dose estimation image.
Specifically, the second feature extraction module 21 is a convolutional layer, the size of a convolutional kernel of the convolutional layer is 1 × 1 × 1, the number of channels of the convolutional kernel is 64, and the standard dose estimation image can be mapped to 64 channels by the second feature extraction module 21.
The second down-sampling module 22 includes a plurality of residual error units 300 connected in sequence, the residual error units 300 are configured to down-sample the standard dose estimation image, and two adjacent residual error units 300 are configured to down-sample the standard dose estimation image by 2 times in size in a maximum pooling manner, for example, the size of the standard dose estimation image is 512 × 512, the down-sampling by the first residual error unit 300 obtains an image with a size of 256 × 256, the down-sampling by the second residual error unit 300 obtains an image with a size of 128 × 128, and so on. The number of residual error units 300 can be determined according to the actual size of the standard dose estimation image and the size of the image to be obtained. The case where the second downsampling module 22 includes two residual units 300 is exemplarily shown in fig. 4.
Each residual unit 300 includes three convolutional layers 301, the size of the convolution kernel of the first convolutional layer 301 is 1 × 1 × 64, the number of channels of the convolution kernel is 64, the size of the convolution kernel of the second convolutional layer 301 is 3 × 3 × 64, the number of channels of the convolution kernel is 16, the size of the convolution kernel of the third convolutional layer 301 is 3 × 3 × 16, and the number of channels of the convolution kernel is 64. Each residual unit 300 further includes an activation function 302, and after the convolution operation is performed on the second convolution layer 301, the data after the convolution operation needs to be subjected to a non-linear processing through the activation function 302. Wherein the activation function 302 is a ReLU function.
Preferably, in order to avoid information loss caused by continuous multiple down-sampling, each residual unit 300 in this embodiment fuses the image of the third convolutional layer 301 after convolution operation with the image of the second convolutional layer 301 after convolution operation.
The second fusion module 23 includes a convolutional layer 230 and an activation function (not shown). The size of the convolution kernel of the convolutional layer 230 is 1 × 1 × 5 and the number of channels is 64; the convolutional layer 230 maps the dose level estimation value to 64 channels. After the convolution operation of the convolutional layer 230, the activation function performs non-linear processing on the convolved data to generate a weight mask with 64 channels; the activation function in this embodiment is a Sigmoid function. The second fusion module 23 then performs a point-wise multiplication of the 64-channel weight mask with the input image of the last residual unit 300 in the second downsampling module 22, thereby fusing the dose level information into the image reconstruction process.
The second upsampling module 24 includes a plurality of residual error units 400 connected in sequence, the plurality of residual error units 400 are configured to upsample the image output by the second fusion module 23, two adjacent residual error units 400 perform 2 times upsampling on the image size by using a bicubic interpolation manner, for example, the size of the image output by the second fusion module 23 is 2 × 2, an image with a size of 4 × 4 is obtained after upsampling by the first residual error unit 400, an image with a size of 8 × 8 is obtained after upsampling by the second residual error unit 400, and so on. The number of the residual error units 400 may be determined according to the size of the image output by the second fusion module 23 and the size of the image to be obtained. Fig. 4 shows an exemplary case where the second upsampling module 24 comprises two residual units 400.
Each residual unit 400 includes three convolutional layers 401, the size of the convolution kernel of the first convolutional layer 401 is 1 × 1 × 64, the number of channels of the convolution kernel is 64, the size of the convolution kernel of the second convolutional layer 401 is 3 × 3 × 64, the number of channels of the convolution kernel is 16, the size of the convolution kernel of the third convolutional layer 401 is 3 × 3 × 16, and the number of channels of the convolution kernel is 64. Each residual unit 400 further includes an activation function 402, and after the convolution operation is performed on the second convolutional layer 401, the convolved data needs to be subjected to non-linear processing through the activation function 402. The activation function 402 is a ReLU function.
Preferably, in order to avoid information loss caused by continuous multiple down-sampling, each residual unit 400 in this embodiment fuses the image after convolution operation of the third convolutional layer 401 with the image after convolution operation of the second convolutional layer 401.
Further, in order to avoid information loss caused by continuous multiple up-sampling after continuous multiple down-sampling, each residual unit 400 in this embodiment fuses the image of the third convolutional layer 401 after convolution operation with the corresponding input image of the residual unit 300 in the second down-sampling module 22.
The second reconstruction module 25 is a convolutional layer; the size of the convolution kernel of the convolutional layer is 1 × 1 × 64, the number of channels of the convolution kernel is 1, and the low-dose estimation image is reconstructed by the second reconstruction module 25. In order to avoid information loss, the fusion of the image output after the convolution operation of the third convolutional layer 301 in the last residual unit 300 of the second downsampling module 22 with the image obtained after the convolution operation of the second convolutional layer 301 is also output to the second reconstruction module 25, so as to retain more context information.
In step S4, the expression of the loss function constructed from the low dose image, the low dose estimation image, the standard dose image, and the standard dose estimation image is as follows:
$$L_{Total}=\frac{1}{n}\sum_{i=1}^{n}\left[\alpha_{1}\left\|X_{i}-\hat{X}_{i}\right\|_{2}^{2}+\alpha_{2}\left\|Y_{i}-\hat{Y}_{i}\right\|_{2}^{2}\right]$$

wherein $L_{Total}$ represents the loss function, $n$ represents the number of input parameter sets in the training data set, $\alpha_{1}$ and $\alpha_{2}$ each represent a weight factor, $X_{i}$ and $Y_{i}$ respectively represent the i-th low-dose image and the i-th standard-dose image, and $\hat{X}_{i}$ and $\hat{Y}_{i}$ respectively represent the i-th low-dose estimation image and the i-th standard-dose estimation image.
The above loss function is constructed from the mean square error between the low-dose image and the low-dose estimation image and the mean square error between the standard-dose image and the standard-dose estimation image. The loss function may, however, be constructed in other ways; for example, it may be constructed from the absolute value error between the low-dose image and the low-dose estimation image and the absolute value error between the standard-dose image and the standard-dose estimation image. This is shown by way of example only and is not limiting.
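As a sketch, the embodiment-one loss can be written directly from the mean square errors named above; the default weight values are placeholders, since the patent leaves them open:

```python
import torch.nn.functional as F

def reconstruction_loss(x, x_hat, y, y_hat, alpha1=1.0, alpha2=1.0):
    """Weighted MSE between each image and its estimate (embodiment one).
    x/x_hat: low-dose image and low-dose estimation image;
    y/y_hat: standard-dose image and standard-dose estimation image."""
    return alpha1 * F.mse_loss(x_hat, x) + alpha2 * F.mse_loss(y_hat, y)
```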
In step S5, a suitable optimization method may be selected according to the practical application to optimize the loss function. For example, if the low-dose image denoising network G1 in this embodiment is applied in a supervised learning setting, the Adam optimization method is used to optimize the loss function; if the low-dose image denoising network G1 in this embodiment is applied in a generative adversarial model, the SGD optimization method is used to optimize the loss function.
The updated low-dose image denoising network is obtained after the above optimization. The training method of the low-dose image denoising network provided in this embodiment fuses the dose level information into the image reconstruction process, so that the trained low-dose image denoising network is applicable to images at different dose levels, i.e., different anatomical structures or the same anatomical structure at different dose levels, improving the robustness of the denoising method. Referring to fig. 5a to 5d, which exemplarily show a standard dose image, a low dose image, a low dose estimation image and a standard dose estimation image in this embodiment, it can be seen that the low-dose image reconstructed using the low-dose image denoising network of this embodiment retains image details well and the reconstructed image has high definition.
Example two
Referring to fig. 6 to 7, the training network in this embodiment further includes a low-dose image discrimination network D1, and each input parameter set in this embodiment further includes a dose level value corresponding to the low-dose image. The low-dose image discrimination network is used for generating a dose grade predicted value according to the low-dose image and the low-dose estimation image.
Specifically, the low-dose image discrimination network D1 includes a plurality of first numerical constraint layers 31, a first flatten layer 32 and a first fully connected layer 33 connected in sequence.
The first numerical constraint layer 31 includes a convolutional layer 310, a regularization layer 311 and an activation function 312 connected in sequence, where the activation function 312 is a Leaky ReLU function. This embodiment exemplarily shows the low-dose image discrimination network D1 with 7 first numerical constraint layers 31, and the network parameters of the convolutional layers 310 in the 7 first numerical constraint layers 31 and of the first fully connected layer 33 are shown in the following table:
TABLE 2 network parameters of the Low dose image discrimination network D1
[Table 2 appears only as an image in the source document; its values are not reproduced here.]
The low-dose image discrimination network D1 performs numerical constraint on the low-dose image and the low-dose estimation image, so that the finally output dose level prediction value is constrained to the range 0 to 1, avoiding values that are too large to be useful for classification. Here, the dose level prediction value includes the dose level prediction value of the low-dose image and the dose level prediction value of the low-dose estimation image.
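Because Table 2 survives only as an image, the following sketch of D1 borrows the channel counts and strides from Table 3 below (given for D2) and substitutes BatchNorm for the unspecified regularization layer; all of these choices are assumptions:

```python
import torch.nn as nn

class DoseDiscriminatorD1(nn.Module):
    """Sketch of the low-dose image discrimination network D1: seven
    conv + normalization + LeakyReLU blocks, a flatten layer, and a
    fully connected head predicting the dose level."""
    def __init__(self, num_levels=5):
        super().__init__()
        # (in_channels, out_channels, stride), assumed from Table 3
        cfg = [(1, 32, 2), (32, 64, 1), (64, 64, 2), (64, 128, 1),
               (128, 128, 2), (128, 256, 1), (256, 256, 2)]
        blocks = []
        for cin, cout, stride in cfg:
            blocks += [nn.Conv2d(cin, cout, 3, stride=stride, padding=1),
                       nn.BatchNorm2d(cout),
                       nn.LeakyReLU(0.2, inplace=True)]
        self.features = nn.Sequential(*blocks)
        self.flatten = nn.Flatten()
        self.level_head = nn.LazyLinear(num_levels)  # first fully connected layer

    def forward(self, img):
        return self.level_head(self.flatten(self.features(img)))
```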
Step S4 of the present embodiment differs from that of the first embodiment. Specifically, step S4 constructs a loss function according to the low-dose image, the low-dose estimation image, the standard dose estimation image and the dose level prediction value, and the expression of the loss function is as follows:
$$L_{Total}=\frac{1}{n}\sum_{i=1}^{n}\left[\alpha_{1}\left\|X_{i}-\hat{X}_{i}\right\|_{2}^{2}+\alpha_{2}\left\|Y_{i}-\hat{Y}_{i}\right\|_{2}^{2}\right]+\alpha_{5}\,\mathrm{CrossEntropy}\left(r_{3}^{X},d\right)+\alpha_{6}\,\mathrm{CrossEntropy}\left(\hat{r}_{3}^{\hat{X}},\hat{d}\right)$$

wherein $L_{Total}$ represents the loss function, $n$ represents the number of input parameter sets in the training data set, $\alpha_{1}$, $\alpha_{2}$, $\alpha_{5}$ and $\alpha_{6}$ each represent a weight factor, $X_{i}$ and $Y_{i}$ respectively represent the i-th low-dose image and the i-th standard-dose image, $\hat{X}_{i}$ and $\hat{Y}_{i}$ respectively represent the i-th low-dose estimation image and the i-th standard-dose estimation image, $r_{3}^{X}$ and $\hat{r}_{3}^{\hat{X}}$ respectively represent the dose level prediction values of the low-dose image and of the low-dose estimation image, $d$ and $\hat{d}$ respectively represent the dose level value and the dose level estimation value, and $\mathrm{CrossEntropy}$ represents the cross entropy.
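In code, the two added cross-entropy terms might look like the sketch below; treating the dose level value and the dose level estimation value as class indices is an assumption:

```python
import torch.nn.functional as F

def dose_level_loss(logits_x, logits_xhat, d, d_hat, alpha5=1.0, alpha6=1.0):
    """Cross-entropy terms of the embodiment-two loss: D1's dose level
    predictions for the low-dose image (logits_x) and for the low-dose
    estimation image (logits_xhat) are compared against the dose level
    value d and the dose level estimation value d_hat (class indices)."""
    return (alpha5 * F.cross_entropy(logits_x, d) +
            alpha6 * F.cross_entropy(logits_xhat, d_hat))
```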
In this embodiment, a low-dose image discrimination network D1 is added on the basis of the first embodiment, the dose level prediction value of the low-dose image and the dose level prediction value of the low-dose estimation image are obtained through the low-dose image discrimination network D1, and then the dose level prediction value of the low-dose image and the dose level prediction value of the low-dose estimation image are considered in constructing a loss function, so that the robustness of the denoising method is further improved.
EXAMPLE III
Referring to fig. 8 to 11, the training network in this embodiment further includes a standard dose image discrimination network D2, and the standard dose image discrimination network D2 is configured to generate a second truth prediction value according to the standard dose image and the standard dose estimation image. The low-dose image discrimination network D1 in this embodiment further includes a second fully connected layer 34 connected to the first flatten layer 32.
Specifically, the standard dose image discrimination network D2 includes a plurality of second numerical constraint layers 41, a second flatten layer 42 and a third fully connected layer 43 connected in sequence.
The second numerical constraint layer 41 includes a convolutional layer 410, a regularization layer 411 and an activation function 412 connected in sequence, where the activation function 412 is a Leaky ReLU function. This embodiment exemplarily shows the standard dose image discrimination network D2 with 7 second numerical constraint layers 41. The network parameters of the convolutional layers 410 in the 7 second numerical constraint layers 41, of the third fully connected layer 43 and of the second fully connected layer 34 are shown in the following table:
TABLE 3 Network parameters of the standard dose image discrimination network D2

Layer                           Convolution kernel    Number of channels    Stride
First convolutional layer       3x3x32                32                    2
Second convolutional layer      3x3x32                64                    1
Third convolutional layer       3x3x64                64                    2
Fourth convolutional layer      3x3x64                128                   1
Fifth convolutional layer       3x3x128               128                   2
Sixth convolutional layer       3x3x128               256                   1
Seventh convolutional layer     3x3x256               256                   2
Second fully connected layer    1                     -                     -
Third fully connected layer     1                     -                     -
The standard dose image discrimination network D2 performs numerical constraint on the standard dose image and the standard dose estimation image, so that the finally output second truth prediction value is constrained to the range 0 to 1. The low-dose image discrimination network D1 likewise performs numerical constraint on the low-dose image and the low-dose estimation image to generate the first truth prediction value, so that the finally output first truth prediction value is constrained to the range 0 to 1. The first truth prediction value comprises the truth prediction values of the low-dose image and of the low-dose estimation image, the second truth prediction value comprises the truth prediction values of the standard dose image and of the standard dose estimation image, and the first truth prediction value and the second truth prediction value are 0 or 1.
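Under the same assumptions as the D1 sketch above, the embodiment-three variant can be expressed by attaching a second fully connected head to the shared flattened features; squashing the truth score with a sigmoid so that it lies between 0 and 1 is an interpretation of the constraint described here:

```python
import torch
import torch.nn as nn

class DualHeadD1(DoseDiscriminatorD1):
    """D1 with a second fully connected layer (34) that outputs a single
    truth prediction value alongside the dose level prediction."""
    def __init__(self, num_levels=5):
        super().__init__(num_levels)
        self.truth_head = nn.LazyLinear(1)   # second fully connected layer

    def forward(self, img):
        feat = self.flatten(self.features(img))
        level = self.level_head(feat)                 # dose level logits
        truth = torch.sigmoid(self.truth_head(feat))  # value in (0, 1)
        return level, truth
```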
Step S4 of this embodiment is also different from that of the second embodiment, and specifically, step S4 is to construct a loss function according to the low-dose image, the low-dose estimated image, the standard-dose estimated image, the dose level predicted value, the first truth predicted value, and the second truth predicted value, where an expression of the loss function is as follows:
$$\begin{aligned}L_{Total}={}&\frac{1}{n}\sum_{i=1}^{n}\left[\alpha_{1}\left\|X_{i}-\hat{X}_{i}\right\|_{2}^{2}+\alpha_{2}\left\|Y_{i}-\hat{Y}_{i}\right\|_{2}^{2}\right]\\&+\lambda\left\{\alpha_{3}\left[\mathbb{E}\left(r_{1}^{X}\right)-\mathbb{E}\left(\hat{r}_{1}^{\hat{X}}\right)\right]+\alpha_{4}\left[\mathbb{E}\left(r_{2}^{Y}\right)-\mathbb{E}\left(\hat{r}_{2}^{\hat{Y}}\right)\right]\right\}\\&+\alpha_{5}\,\mathrm{CrossEntropy}\left(r_{3}^{X},d\right)+\alpha_{6}\,\mathrm{CrossEntropy}\left(\hat{r}_{3}^{\hat{X}},\hat{d}\right)\end{aligned}$$

wherein $L_{Total}$ represents the loss function, $n$ represents the number of input parameter sets in the training data set, $\alpha_{1}$ to $\alpha_{6}$ each represent a weight factor, $X_{i}$ and $Y_{i}$ respectively represent the i-th low-dose image and the i-th standard-dose image, $\hat{X}_{i}$ and $\hat{Y}_{i}$ respectively represent the i-th low-dose estimation image and the i-th standard-dose estimation image, $r_{1}^{X}$ and $\hat{r}_{1}^{\hat{X}}$ respectively represent the first truth prediction values of the low-dose image and of the low-dose estimation image, $r_{2}^{Y}$ and $\hat{r}_{2}^{\hat{Y}}$ respectively represent the second truth prediction values of the standard dose image and of the standard dose estimation image, $\lambda$ represents a balance factor, $r_{3}^{X}$ and $\hat{r}_{3}^{\hat{X}}$ respectively represent the dose level prediction values of the low-dose image and of the low-dose estimation image, $d$ and $\hat{d}$ respectively represent the dose level value and the dose level estimation value, $\mathbb{E}$ represents the expectation, and $\mathrm{CrossEntropy}$ represents the cross entropy.
In this embodiment, a standard dose image discrimination network D2 is added on the basis of the second embodiment, and the low-dose image discrimination network D1 further includes a second fully connected layer 34. The second truth prediction values of the standard dose image and the standard dose estimation image and the first truth prediction values of the low dose image and the low dose estimation image can be obtained through the standard dose image discrimination network D2 and the low-dose image discrimination network D1, and are then taken into account in constructing the loss function. Introducing this adversarial loss information improves the visual effect and further improves the image reconstruction quality.
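One possible reading of the full generator-side loss, continuing the earlier sketches; the least-squares form of the adversarial terms and the placement of the balance factor λ are assumptions, since the original formula is only an image in the source:

```python
import torch
import torch.nn.functional as F

def total_loss_e3(x, x_hat, y, y_hat, truth_xhat, truth_yhat,
                  logits_x, logits_xhat, d, d_hat,
                  alphas=(1.0, 1.0, 1.0, 1.0, 1.0, 1.0), lam=1.0):
    """Embodiment-three loss sketch: reconstruction terms, adversarial
    terms pushing the truth scores of the generated images toward 1,
    and the two dose level cross-entropy terms."""
    a1, a2, a3, a4, a5, a6 = alphas
    rec = a1 * F.mse_loss(x_hat, x) + a2 * F.mse_loss(y_hat, y)
    adv = (a3 * torch.mean((truth_xhat - 1) ** 2) +
           a4 * torch.mean((truth_yhat - 1) ** 2))
    lvl = (a5 * F.cross_entropy(logits_x, d) +
           a6 * F.cross_entropy(logits_xhat, d_hat))
    return rec + lam * adv + lvl
```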
Example four
Referring to fig. 12, the present embodiment provides a training system for a low-dose image denoising network, where the training system includes a training data set acquisition module 100, a network construction module 101, and a training module 102.
The training data set acquisition module 100 is configured to acquire a training data set, where the training data set includes a plurality of input parameter sets, and each input parameter set includes a standard dose image and a low dose image. The training data set in this embodiment is:
D = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_n, y_n)},
where n denotes the number of input parameter sets in the training data set, x_i denotes the low-dose image in the i-th input parameter set, and y_i denotes the standard-dose image in the i-th input parameter set. The n low-dose images {x_1, x_2, ..., x_i, ..., x_n} may be low-dose CT images of different anatomical regions, or CT images of the same anatomical region at different dose levels; that is, the dose levels of the n low-dose images are not all the same. Among the n low-dose images and the n standard-dose images {y_1, y_2, ..., y_i, ..., y_n}, the images x_i and y_i with the same subscript i represent a low-dose CT image and a standard-dose CT image of the same anatomical region, or a low-dose CT image and a standard-dose CT image at the same dose level. The different anatomical regions may include the skull, orbit, paranasal sinuses, neck, chest cavity, abdomen, pelvic cavity (male), pelvic cavity (female), knee, lumbar spine and other regions.
It should be noted that the low dose images and the standard dose images in the training data set for training in the present embodiment are selected from sample data sets commonly used in the art, and are not limited herein.
The network construction module 101 is used for constructing and establishing a training network, wherein the training network comprises a low-dose image denoising network and a low-dose image generating network; inputting the training data set into a training network, and generating a dose grade estimation value of a low-dose image and a standard dose estimation image by a low-dose image denoising network according to the low-dose image; the low-dose image generation network generates a low-dose estimation image from the standard dose estimation image and the dose level estimation value.
The training module 102 is configured to construct a loss function according to the low-dose image, the low-dose estimated image, the standard-dose image, and the standard-dose estimated image, optimize the loss function, obtain parameters of the low-dose image denoising network, and update the low-dose image denoising network.
EXAMPLE five
The embodiment provides a denoising method of a low-dose CT image, which comprises the following steps: and inputting the low-dose image to be denoised into a low-dose image denoising network obtained by using the training method of the low-dose image denoising network described in the first embodiment to the third embodiment, and obtaining a reconstructed low-dose image.
It should be noted that the denoising method in this embodiment includes two implementation manners, the first implementation manner is to use the low-dose image denoising network trained in the first to third embodiments as the denoising network for the low-dose image, and the low-dose image to be denoised is input to the low-dose image denoising network, so as to obtain the reconstructed low-dose image. The second implementation manner is that the low-dose image denoising network is trained by using the training method of the low-dose image denoising network described in the first to third embodiments, and then the low-dose image to be denoised is input into the trained low-dose image denoising network to obtain the reconstructed low-dose image.
By the denoising method, the details of the original image can be better extracted, so that the reconstructed image is clearer.
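As a minimal usage sketch (the variable names and the two-output interface follow the G1 sketch given earlier, not the patent itself):

```python
import torch

# Instantiate the G1 sketch above; in practice, trained weights are loaded.
model = DenoisingNetG1()
model.eval()
low_dose = torch.randn(1, 1, 512, 512)   # stand-in for a real low-dose CT slice
with torch.no_grad():
    denoised, dose_logits = model(low_dose)
    dose_level = dose_logits.flatten(1).argmax(dim=1)   # estimated dose level
```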
EXAMPLE six
Referring to fig. 13, the present embodiment provides a computer device, which includes a processor 200 and a memory 201, and a computer program stored on the memory 201, wherein the processor 200 executes the computer program to implement the training method according to the first to third embodiments.
The Memory 201 may include a Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory.
The processor 200 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the training method described in the first to third embodiments may be implemented by integrated logic circuits of hardware in the processor 200 or by instructions in the form of software. The processor 200 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP) and the like, or may be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The memory 201 is used for storing a computer program, and the processor 200 executes the computer program to implement the training method according to the first to third embodiments after receiving the execution instruction.
The embodiment also provides a computer storage medium, in which a computer program is stored, and the processor 200 is configured to read and execute the computer program stored in the computer storage medium, so as to implement the training method according to the first to third embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer storage medium or transmitted from one computer storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer storage media may be any available media that can be accessed by a computer or a data storage device, such as a server, data center, etc., that incorporates one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is directed to embodiments of the present application and it is noted that numerous modifications and adaptations may be made by those skilled in the art without departing from the principles of the present application and are intended to be within the scope of the present application.

Claims (8)

1. A training method of a low-dose image denoising network is characterized by comprising the following steps:
acquiring a training data set, wherein the training data set comprises a plurality of input parameter sets, and each input parameter set comprises a standard dose image and a low dose image;
establishing a training network, wherein the training network comprises a low-dose image denoising network and a low-dose image generating network;
inputting the training data set into the training network, and generating a dose level estimation value and a standard dose estimation image of the low-dose image by the low-dose image denoising network according to the low-dose image; the low-dose image generation network generates a low-dose estimation image according to the standard dose estimation image and the dose level estimation value;
constructing a loss function according to the low-dose image, the low-dose estimation image, the standard-dose image and the standard-dose estimation image;
optimizing the loss function to obtain parameters of the low-dose image denoising network and updating the low-dose image denoising network;
the low-dose image denoising network comprises a first feature extraction module, a first downsampling module, a dose level generation module, a first fusion module, a first upsampling module and a first reconstruction module which are sequentially connected, wherein the dose level generation module is used for generating a dose level estimation value, and the first fusion module is used for fusing the dose level estimation value and feature information of a low-dose image;
the low-dose image generation network comprises a second feature extraction module, a second down-sampling module, a second fusion module, a second down-sampling module and a second reconstruction module which are sequentially connected, wherein the second fusion module is used for fusing the dose grade estimation value with the feature information of the standard dose estimation image.
2. The training method of claim 1, wherein each input parameter set further comprises a dose level value corresponding to the low-dose image; the training network further comprises a low-dose image discrimination network for generating a dose level prediction value from the low-dose image and the low-dose estimation image; and constructing the loss function according to the low-dose image, the low-dose estimation image, the standard-dose image and the standard-dose estimation image comprises:
constructing the loss function according to the low-dose image, the low-dose estimation image, the standard-dose estimation image and the dose level prediction value.
3. The training method of claim 2, wherein the low-dose image discrimination network comprises a plurality of first numerically constrained layers, a first flatten layer and a first fully-connected layer.
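Continuing the sketch above, claims 2 and 3 add a dose level discrimination network and dose level prediction terms to the loss. Reading the claim's "numerically constrained layers" as strided convolutions, and using an L2 penalty against the labelled dose level value, are assumptions of this sketch, not the patent's wording:

```python
class DoseLevelDiscriminator(nn.Module):
    """Low-dose image discrimination network of claims 2-3 (hypothetical reading)."""
    def __init__(self, ch=32, size=64):
        super().__init__()
        self.features = nn.Sequential(                       # plurality of first "numerically constrained" layers
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2))
        self.flatten = nn.Flatten()                          # first flatten layer
        self.fc_level = nn.Linear(ch * (size // 4) ** 2, 1)  # first fully-connected layer

    def forward(self, img):
        return self.fc_level(self.flatten(self.features(img)))  # dose level prediction value

level_d = DoseLevelDiscriminator()
dose_level = torch.rand(2, 1)  # labelled dose level value from the input parameter set (placeholder)
loss = (F.mse_loss(std_est, std) + F.mse_loss(low_est, low)
        + F.mse_loss(level_d(low), dose_level)
        + F.mse_loss(level_d(low_est), dose_level))
```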
4. The training method of claim 3, wherein the training network further comprises a standard-dose image discrimination network; the low-dose image discrimination network further comprises a second fully-connected layer and is further configured to generate a first authenticity prediction value from the low-dose image and the low-dose estimation image; the standard-dose image discrimination network is used for generating a second authenticity prediction value from the standard-dose image and the standard-dose estimation image; and constructing the loss function according to the low-dose image, the low-dose estimation image, the standard-dose image and the standard-dose estimation image comprises:
constructing the loss function according to the low-dose image, the low-dose estimation image, the standard-dose estimation image, the dose level prediction value, the first authenticity prediction value and the second authenticity prediction value.
5. The training method of claim 4, wherein the standard-dose image discrimination network comprises a plurality of second numerically constrained layers, a second flatten layer and a third fully-connected layer.
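Claims 4 and 5 can be read as a dual-discriminator adversarial setup: a second fully-connected head on the low-dose discrimination network yields the first authenticity prediction value, while a separate standard-dose image discrimination network yields the second. A hedged continuation of the sketch, with binary cross-entropy chosen arbitrarily for the authenticity terms:

```python
class StdDoseDiscriminator(nn.Module):
    """Standard-dose image discrimination network of claims 4-5
    (same speculative reading of the constrained layers as above)."""
    def __init__(self, ch=32, size=64):
        super().__init__()
        self.features = nn.Sequential(                       # plurality of second constrained layers
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2))
        self.flatten = nn.Flatten()                          # second flatten layer
        self.fc_real = nn.Linear(ch * (size // 4) ** 2, 1)   # third fully-connected layer

    def forward(self, img):
        return torch.sigmoid(self.fc_real(self.flatten(self.features(img))))  # second authenticity prediction value

# Claim 4's second fully-connected layer: an authenticity head sharing the
# low-dose discriminator's feature layers.
fc_auth = nn.Linear(32 * (64 // 4) ** 2, 1)
def low_dose_authenticity(img):
    return torch.sigmoid(fc_auth(level_d.flatten(level_d.features(img))))     # first authenticity prediction value

std_d = StdDoseDiscriminator()
bce = nn.BCELoss()
real, fake = torch.ones(2, 1), torch.zeros(2, 1)
adversarial = (bce(low_dose_authenticity(low), real) + bce(low_dose_authenticity(low_est), fake)
               + bce(std_d(std), real) + bce(std_d(std_est), fake))
total = loss + adversarial
# In practice the generator-side and discriminator-side terms would be optimized
# in alternation, as in a standard GAN; they are summed here only for brevity.
```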
6. A denoising method for a low-dose image, the denoising method comprising: inputting a low-dose image to be denoised into a low-dose image denoising network obtained by the training method of the low-dose image denoising network according to any one of claims 1-5, and obtaining a reconstructed, denoised image.
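Denoising per claim 6 then reduces to a single forward pass through the trained denoising network, keeping its standard-dose estimation image as the output (names continue from the hypothetical sketch above):

```python
denoiser.eval()
with torch.no_grad():
    _, denoised = denoiser(torch.rand(1, 1, 64, 64))  # low-dose image to be denoised (placeholder)
# `denoised` is the standard-dose estimation image, i.e. the reconstructed, denoised output.
```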
7. A computer device comprising a memory, a processor and a computer program stored on the memory, wherein the processor executes the computer program to implement the training method of any one of claims 1 to 5.
8. A computer-readable storage medium having computer instructions stored thereon, wherein the computer instructions, when executed by a processor, implement the training method of any one of claims 1-5.
CN202011430758.0A 2020-12-07 2020-12-07 Training method of low-dose image denoising network and denoising method of low-dose image Active CN112488951B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011430758.0A CN112488951B (en) 2020-12-07 2020-12-07 Training method of low-dose image denoising network and denoising method of low-dose image
PCT/CN2020/135188 WO2022120694A1 (en) 2020-12-07 2020-12-10 Low-dose image denoising network training method and low-dose image denoising method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011430758.0A CN112488951B (en) 2020-12-07 2020-12-07 Training method of low-dose image denoising network and denoising method of low-dose image

Publications (2)

Publication Number Publication Date
CN112488951A (en) 2021-03-12
CN112488951B (en) 2022-05-20

Family

ID=74940024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011430758.0A Active CN112488951B (en) 2020-12-07 2020-12-07 Training method of low-dose image denoising network and denoising method of low-dose image

Country Status (2)

Country Link
CN (1) CN112488951B (en)
WO (1) WO2022120694A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11250600B2 (en) * 2018-01-12 2022-02-15 Korea Advanced Institute Of Science And Technology Method for processing X-ray computed tomography image using neural network and apparatus therefor
US11580410B2 (en) * 2018-01-24 2023-02-14 Rensselaer Polytechnic Institute 3-D convolutional autoencoder for low-dose CT via transfer learning from a 2-D trained network
CN109785243B (en) * 2018-11-28 2023-06-23 西安电子科技大学 Denoising method and computer based on unregistered low-dose CT of countermeasure generation network
CN110827216B (en) * 2019-10-23 2023-07-14 上海理工大学 Multi-generator generation countermeasure network learning method for image denoising
CN110930318B (en) * 2019-10-31 2023-04-18 中山大学 Low-dose CT image repairing and denoising method
CN110992290B (en) * 2019-12-09 2023-09-15 深圳先进技术研究院 Training method and system for low-dose CT image denoising network
CN111179366B (en) * 2019-12-18 2023-04-25 深圳先进技术研究院 Anatomical structure difference priori based low-dose image reconstruction method and system

Also Published As

Publication number Publication date
CN112488951A (en) 2021-03-12
WO2022120694A1 (en) 2022-06-16

Similar Documents

Publication Publication Date Title
CN110009669B (en) 3D/2D medical image registration method based on deep reinforcement learning
CN111179366B (en) Anatomical structure difference priori based low-dose image reconstruction method and system
CN112541871A (en) Training method of low-dose image denoising network and denoising method of low-dose image
US11514621B2 (en) Low-dose image reconstruction method and system based on prior anatomical structure difference
CN109215014B (en) Training method, device and equipment of CT image prediction model and storage medium
CN112489154A (en) MRI motion artifact correction method for generating countermeasure network based on local optimization
CN109741254B (en) Dictionary training and image super-resolution reconstruction method, system, equipment and storage medium
US20220245868A1 (en) System and method for image reconstruction
CN114359360A (en) Two-way consistency constraint medical image registration algorithm based on countermeasure
CN111612689B (en) Medical image processing method, medical image processing device, computer equipment and readable storage medium
CN111325695B (en) Low-dose image enhancement method and system based on multi-dose grade and storage medium
CN111584066B (en) Brain medical image diagnosis method based on convolutional neural network and symmetric information
CN111292322B (en) Medical image processing method, device, equipment and storage medium
CN111489406B (en) Training and generating method, device and storage medium for generating high-energy CT image model
US20230079353A1 (en) Image correction using an invertable network
CN113989110A (en) Lung image registration method and device, computer equipment and storage medium
Chen et al. DuSFE: Dual-Channel Squeeze-Fusion-Excitation co-attention for cross-modality registration of cardiac SPECT and CT
US20230342882A1 (en) Medical use image processing method, medical use image processing program, medical use image processing device, and learning method
CN112488951B (en) Training method of low-dose image denoising network and denoising method of low-dose image
Xie et al. 3D few-view CT image reconstruction with deep learning
CN107799166B (en) Medical imaging system
US10347014B2 (en) System and method for image reconstruction
CN114693831B (en) Image processing method, device, equipment and medium
CN112614205B (en) Image reconstruction method and device
US20240029324A1 (en) Method for image reconstruction, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant