WO2022120694A1 - Training method for a low-dose image denoising network, and denoising method for low-dose images - Google Patents
Training method for a low-dose image denoising network, and denoising method for low-dose images
- Publication number
- WO2022120694A1 (PCT/CN2020/135188, filed as CN2020135188W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- dose
- image
- low
- network
- standard
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration; G06T5/70—Denoising; Smoothing
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality; G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/10104—Positron emission tomography [PET]
- G06T2207/10108—Single photon emission computed tomography [SPECT]
- G06T2207/20—Special algorithmic details; G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30—Subject of image; Context of image processing; G06T2207/30004—Biomedical image processing; G06T2207/30061—Lung
Definitions
- the invention relates to the technical field of image reconstruction, in particular to a training method for a low-dose image denoising network, a low-dose image denoising method, computer equipment and a storage medium.
- Computed tomography (CT) is an important imaging method that obtains internal structure information of an object non-destructively. It offers high resolution, high sensitivity, and multi-level imaging, is among the most widely installed medical imaging diagnostic equipment in China, and is used across many fields of clinical examination. However, because CT scanning requires X-rays, and as the potential hazards of radiation have become better understood, the issue of CT radiation dose has received increasing attention.
- the ALARA principle (As Low As Reasonably Achievable) requires that the radiation dose to patients be reduced as much as possible while still satisfying the needs of clinical diagnosis. Lowering the dose, however, degrades imaging quality.
- the present invention provides a training method for a low-dose image denoising network, a low-dose image denoising method, computer equipment, and a storage medium, which integrate dose level information into the image reconstruction process and improve the robustness of the denoising method and the quality of the reconstructed images.
- the specific technical solution proposed by the present invention is to provide a training method for a low-dose image denoising network, the training method comprising:
- the training data set includes a plurality of input parameter groups, each of the input parameter groups includes a standard dose image and a low dose image;
- the training network includes a low-dose image denoising network and a low-dose image generation network;
- the training data set is input into the training network; the low-dose image denoising network generates a dose level estimate and a standard-dose estimated image from the low-dose image, and the low-dose image generation network generates a low-dose estimated image from the standard-dose estimated image and the dose level estimate;
- the loss function is optimized, parameters of the low-dose image denoising network are obtained, and the low-dose image denoising network is updated.
- the low-dose image denoising network includes a first feature extraction module, a first downsampling module, a dose level generation module, a first fusion module, a first upsampling module, and a first reconstruction module, which are connected in sequence.
- the dose level generation module is used for generating dose level estimates
- the first fusion module is used for fusing the dose level estimates with feature information of the low-dose image.
- the low-dose image generation network includes a second feature extraction module, a second downsampling module, a second fusion module, a second upsampling module, and a second reconstruction module connected in sequence; the second fusion module is used to fuse the dose level estimate with feature information of the standard-dose estimated image.
- each of the input parameter groups also includes a dose level value corresponding to the low-dose image;
- the training network further includes a low-dose image discrimination network, which is used to generate a dose level prediction value according to the low-dose image and the low-dose estimated image;
- the loss function is constructed according to the low-dose image, the low-dose estimated image, the standard dose image, and the standard dose estimated image, including:
- a loss function is constructed from the low-dose image, the low-dose estimated image, the standard dose image, the standard dose-estimated image, and the dose level predicted value.
- the low-dose image discrimination network includes a plurality of first numerical constraint layers, a first tiling layer, and a first fully connected layer.
- the training network further includes a standard dose image discrimination network
- the low-dose image discrimination network further includes a second fully-connected layer, and is further used to generate a first true/false prediction value according to the low-dose image and the low-dose estimated image
- the standard dose image discrimination network is used to generate a second true/false prediction value according to the standard dose image and the standard-dose estimated image
- constructing the loss function according to the low-dose image, low-dose estimated image, standard dose image, and standard-dose estimated image includes:
- a loss function is constructed from the low-dose image, the low-dose estimated image, the standard dose image, the standard dose-estimated image, the dose level predicted value, the first true-false predicted value, and the second true-false predicted value.
- the standard dose image discrimination network includes a plurality of second numerical constraint layers, a second tiling layer, and a third fully connected layer.
- the present invention also provides a low-dose image denoising method, which includes: inputting the low-dose image to be denoised into the low-dose image denoising network obtained by the above training method, to obtain a reconstructed low-dose image.
- the present invention also provides a computer device, comprising a memory, a processor, and a computer program stored on the memory, the processor executing the computer program to implement the training method described in any one of the above.
- the present invention also provides a computer-readable storage medium, where computer instructions are stored on the computer-readable storage medium, and when the computer instructions are executed by a processor, implement the training method described in any of the above.
- In the training method provided by the present invention, the low-dose image denoising network generates a dose level estimate and a standard-dose estimated image from the low-dose image; the low-dose image generation network then generates a low-dose estimated image from the standard-dose estimated image and the dose level estimate; finally, a loss function is constructed from the low-dose image, low-dose estimated image, standard dose image, and standard-dose estimated image and optimized to obtain the parameters of the low-dose image denoising network. In this way, dose level information is fused into the image reconstruction process, improving the robustness of the denoising method and the quality of the reconstructed images.
- FIG. 1 is a flowchart of a training method for a low-dose image denoising network in Embodiment 1 of the present invention
- FIG. 2 is a schematic structural diagram of a training network in Embodiment 1 of the present invention.
- FIG. 3 is a schematic structural diagram of a low-dose image denoising network in Embodiment 1 of the present invention.
- FIG. 4 is a schematic structural diagram of a low-dose image generation network in Embodiment 1 of the present invention.
- FIGS. 5a-5d are schematic diagrams of a standard dose image, a low dose image, a low dose estimated image, and a standard dose estimated image in Embodiment 1 of the present invention
- FIG. 6 is a schematic structural diagram of a low-dose image discrimination network in Embodiment 2 of the present invention.
- FIG. 7 is a schematic diagram of a specific structure of a low-dose image discrimination network in Embodiment 2 of the present invention.
- FIG. 8 is a schematic structural diagram of a standard dose image discrimination network in Embodiment 3 of the present invention.
- FIG. 9 is a schematic diagram of a specific structure of a standard dose image discrimination network in Embodiment 3 of the present invention.
- FIG. 10 is a schematic structural diagram of a low-dose image discrimination network in Embodiment 3 of the present invention.
- FIG. 11 is a schematic diagram of a specific structure of a low-dose image discrimination network in Embodiment 3 of the present invention.
- FIG. 12 is a schematic structural diagram of a training system for a low-dose image denoising network in Embodiment 4 of the present invention.
- FIG. 13 is a schematic structural diagram of a computer device in Embodiment 6 of the present invention.
- the training method of the low-dose image denoising network proposed by the present invention includes:
- the training data set includes multiple input parameter groups, and each input parameter group includes standard dose images and low dose images;
- Establish a training network which includes a low-dose image denoising network and a low-dose image generation network;
- the training data set is input into the training network; the low-dose image denoising network generates dose level estimates and standard-dose estimated images from the low-dose images, and the low-dose image generation network generates low-dose estimated images from the standard-dose estimated images and the dose level estimates;
- the loss function is optimized to obtain the parameters of the low-dose image denoising network and update the low-dose image denoising network.
- In the training method provided by the present invention, the low-dose image denoising network generates a dose level estimate and a standard-dose estimated image from the low-dose image, and a low-dose estimated image is then generated from the standard-dose estimated image and the dose level estimate, thereby incorporating dose level information into the image reconstruction process and improving the robustness of the denoising method and the quality of the reconstructed images.
- Taking CT images as an example, the training method of the low-dose image denoising network, the low-dose image denoising method, the computer equipment, and the storage medium of this application are described in detail below through several specific embodiments and the accompanying drawings.
- Taking CT images as an example does not limit the application field of the present application; the present application can also be applied to other medical imaging fields such as PET and SPECT.
- the training method of the low-dose image denoising network in this embodiment includes the steps:
- step S1 the training data set in this embodiment is:
- n represents the number of input parameter groups in the training dataset
- x i represents the low-dose image in the ith input parameter group
- y i represents the standard dose image in the ith input parameter group
- the n low-dose images {x_1, x_2, ..., x_i, ..., x_n} are low-dose CT images of different anatomical parts, or CT images of the same anatomical part at different dose levels; that is, the dose levels of the n low-dose images are not all the same
- the different anatomical parts may include the skull, the orbit, the sinuses, the neck, the lung cavity, the abdomen, the pelvic cavity (male), the pelvic cavity (female), the knee, and the lumbar spine.
- low-dose images and standard-dose images in the training data set used for training in this embodiment are selected from sample data sets commonly used in the art, which are not specifically limited here.
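The training set structure described above can be sketched as follows. This is a minimal illustration with random placeholder images, not data from any actual sample set:

```python
import numpy as np

# n input parameter groups, each pairing a low-dose image x_i with a
# standard dose image y_i (contents here are random placeholders).
n = 4
training_set = [
    {"x": np.random.rand(512, 512),   # low-dose image x_i
     "y": np.random.rand(512, 512)}   # standard dose image y_i
    for _ in range(n)
]
print(len(training_set))              # number of input parameter groups
```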
- the training network in this embodiment includes a low-dose image denoising network G1 and a low-dose image generation network G2.
- Input the training data set into the low-dose image denoising network G1, and the low-dose image denoising network G1 generates the dose level estimation value of the low dose image and the standard dose estimation image according to the low dose image, and outputs the dose level estimation value and the standard dose estimation image.
- the low-dose image generation network G2 generates a low-dose estimated image based on the standard dose-estimated image and the dose-level estimate.
- the low-dose image denoising network G1 includes a first feature extraction module 11, a first downsampling module 12, a dose level generation module 13, a first fusion module 14, a first upsampling module 15, and a first reconstruction module 16, which are connected in sequence.
- the dose level generation module 13 is used to generate dose level estimates, and the first fusion module 14 is used to fuse the dose level estimates with the feature information of the low-dose image.
- the first feature extraction module 11 is a convolution layer whose convolution kernel size is 1×1×1 and whose number of channels is 64, so that the low-dose image is mapped to 64 channels.
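Per pixel, a 1×1 convolution mapping one channel to 64 channels is just a linear map over channels. A NumPy sketch with placeholder (untrained) weights, not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 1))      # 64 kernels of size 1x1x1
bias = np.zeros(64)

def conv1x1(image):
    # image: (H, W) single-channel; output: (H, W, 64) feature maps
    return np.einsum("oc,hwc->hwo", weights, image[..., None]) + bias

features = conv1x1(rng.random((8, 8)))
print(features.shape)                       # (8, 8, 64)
```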
- the first downsampling module 12 includes a plurality of residual units 100 connected in sequence, which downsample the low-dose image; between two adjacent residual units 100, maximum pooling is used to downsample the image size by a factor of 2. For example, if the size of the low-dose image is 512×512, an image of size 256×256 is obtained after the first residual unit 100, an image of size 128×128 after the second residual unit 100, and so on. The number of residual units 100 may be determined according to the actual size of the low-dose image and the size of the image to be obtained.
- FIG. 3 exemplarily shows a case where the first downsampling module 12 includes two residual units 100 .
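The 2× downsampling between adjacent residual units can be sketched as 2×2 max pooling. A minimal NumPy illustration on a toy 4×4 image (not the patent's implementation):

```python
import numpy as np

def max_pool_2x(img):
    # take the maximum over non-overlapping 2x2 blocks
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
p1 = max_pool_2x(img)       # 4x4 -> 2x2, block maxima [[5, 7], [13, 15]]
p2 = max_pool_2x(p1)        # 2x2 -> 1x1
print(p1, p2)
```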
- Each residual unit 100 includes three convolution layers 101: the first convolution layer 101 has a kernel size of 1×1×64 with 64 channels, the second has a kernel size of 3×3×64 with 16 channels, and the third has a kernel size of 3×3×16 with 64 channels.
- Each residual unit 100 further includes an activation function 102. After the second convolution layer 101 performs the convolution operation, the activation function 102 also needs to perform nonlinear processing on the data after the convolution operation.
- the activation function 102 is a ReLU function.
- each residual unit 100 in this embodiment fuses the image output by the third convolutional layer 101 after its convolution operation with the image input to the second convolutional layer 101.
- the dose level generation module 13 includes a pooling layer 130, a pooling layer 131, and a convolutional layer 132; the pooling layer 130 and the pooling layer 131 are each connected between the first downsampling module 12 and the convolutional layer 132.
- the pooling layer 130 uses the average pooling method to map the image output by the first downsampling module 12 to 1 ⁇ 1 ⁇ 64
- the pooling layer 131 uses the maximum pooling method to map the image output from the first downsampling module 12 to 1 ⁇ 1 ⁇ 64
- the size of the convolution kernel of the convolutional layer 132 is 1 ⁇ 1 ⁇ 128, and the number of channels is 5, wherein the number of channels of the convolution kernel of the convolutional layer 132 is equal to the number of dose levels set.
- the number of dose levels is set to 5, as shown in the following table:
- the convolutional layer 132 maps the images output by the pooling layer 130 and the pooling layer 131 to 1 ⁇ 1 ⁇ 5.
- if the output of the convolutional layer 132 is {0, 1, 0, 0, 0}, this indicates that the dose level estimate is level 2, that is, the image is a CT image obtained with a scanning current of 30 to 130 mA.
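The dose level generation module described above (dual global pooling, concatenation, 1×1×128 convolution to 5 channels) can be sketched in NumPy. Weights are random placeholders, so the predicted level here is arbitrary; only the shapes and data flow follow the text:

```python
import numpy as np

rng = np.random.default_rng(1)
feat = rng.random((32, 32, 64))            # output of the downsampling module

avg = feat.mean(axis=(0, 1))               # average pooling -> 1x1x64
mx = feat.max(axis=(0, 1))                 # max pooling     -> 1x1x64
pooled = np.concatenate([avg, mx])         # 128 values

w = rng.standard_normal((5, 128))          # 1x1x128 kernel, 5 channels
logits = w @ pooled                        # one score per dose level

one_hot = np.eye(5)[np.argmax(logits)]     # e.g. {0, 1, 0, 0, 0} -> level 2
print(one_hot)
```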
- the first fusion module 14 includes a convolution layer 140 and an activation function (not shown). The convolution kernel size of the convolution layer 140 is 1×1×5 and the number of channels is 64; after the convolution operation in the convolution layer 140, the data is nonlinearly processed by the activation function to generate a weight mask with 64 channels. The activation function in this embodiment is a sigmoid function. The first fusion module 14 then dot-multiplies the 64-channel weight mask with the input image of the last residual unit 100 in the first downsampling module 12, so that the dose level information is fused into the image reconstruction process.
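The fusion step can be sketched as follows: a 1×1×5 convolution maps the 5-element dose level estimate to 64 values, a sigmoid turns them into a per-channel weight mask, and the mask multiplies the feature maps. Weights are placeholders, not the trained parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
dose_one_hot = np.eye(5)[1]                  # dose level estimate: level 2

w = rng.standard_normal((64, 5))             # 1x1x5 kernel, 64 channels
mask = 1.0 / (1.0 + np.exp(-(w @ dose_one_hot)))   # sigmoid -> (0, 1)

feat = rng.random((16, 16, 64))              # input image of the residual unit
fused = feat * mask                          # dose-conditioned feature maps
print(fused.shape)                           # (16, 16, 64)
```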
- the first upsampling module 15 includes a plurality of residual units 200 connected in sequence, which upsample the image output by the first fusion module 14; between two adjacent residual units 200, bicubic interpolation is used to upsample the image size by a factor of 2. For example, if the size of the image output by the first fusion module 14 is 2×2, an image of size 4×4 is obtained after the first residual unit 200, an image of size 8×8 after the second residual unit 200, and so on. The number of residual units 200 may be determined according to the size of the image output by the first fusion module 14 and the size of the image to be obtained.
- FIG. 3 exemplarily shows a case where the first upsampling module 15 includes two residual units 200 .
- Each residual unit 200 includes three convolution layers 201: the first convolution layer 201 has a kernel size of 1×1×64 with 64 channels, the second has a kernel size of 3×3×64 with 16 channels, and the third has a kernel size of 3×3×16 with 64 channels.
- Each residual unit 200 further includes an activation function 202. After the second convolution layer 201 performs the convolution operation, the activation function 202 also needs to perform nonlinear processing on the data after the convolution operation.
- the activation function 202 is a ReLU function.
- each residual unit 200 in this embodiment fuses the image output by the third convolutional layer 201 after its convolution operation with the image input to the second convolutional layer 201.
- in addition, each residual unit 200 fuses the image output by its third convolution layer 201 with the input image of the corresponding residual unit 100 in the first downsampling module 12.
- the first reconstruction module 16 is a convolution layer, the size of the convolution kernel of the convolution layer is 1 ⁇ 1 ⁇ 64, the number of channels of the convolution kernel is 1, and the standard dose estimation image is reconstructed and generated by the first reconstruction module 16 .
- the low-dose image generation network G2 includes a second feature extraction module 21, a second downsampling module 22, a second fusion module 23, a second upsampling module 24, and a second reconstruction module 25, which are connected in sequence; the second fusion module 23 is used to fuse the dose level estimate output by the dose level generation module 13 with the feature information of the standard-dose estimated image.
- the second feature extraction module 21 is a convolution layer whose convolution kernel size is 1×1×1 and whose number of channels is 64, so that the standard-dose estimated image is mapped to 64 channels.
- the second downsampling module 22 includes a plurality of residual units 300 connected in sequence, which downsample the standard-dose estimated image; between two adjacent residual units 300, maximum pooling is used to downsample the image size by a factor of 2. For example, if the size of the standard-dose estimated image is 512×512, an image of size 256×256 is obtained after the first residual unit 300, an image of size 128×128 after the second, and so on. The number of residual units 300 may be determined according to the actual size of the standard-dose estimated image and the size of the image to be obtained.
- FIG. 4 exemplarily shows the case where the second downsampling module 22 includes two residual units 300 .
- Each residual unit 300 includes three convolution layers 301: the first convolution layer 301 has a kernel size of 1×1×64 with 64 channels, the second has a kernel size of 3×3×64 with 16 channels, and the third has a kernel size of 3×3×16 with 64 channels.
- Each residual unit 300 further includes an activation function 302. After the second convolution layer 301 performs the convolution operation, the activation function 302 also needs to perform nonlinear processing on the data after the convolution operation.
- the activation function 302 is a ReLU function.
- each residual unit 300 in this embodiment fuses the image output by the third convolutional layer 301 after its convolution operation with the image input to the second convolutional layer 301.
- the second fusion module 23 includes a convolution layer 230 and an activation function (not shown). The convolution kernel size of the convolution layer 230 is 1×1×5 and the number of channels is 64; the convolution layer 230 maps the dose level estimate to 64 channels, and the data after the convolution operation is nonlinearly processed by the activation function to generate a weight mask with 64 channels. The activation function in this embodiment is a sigmoid function. The second fusion module 23 then dot-multiplies the 64-channel weight mask with the input image of the last residual unit 300 in the second downsampling module 22, so as to fuse the dose level information into the image reconstruction process.
- the second upsampling module 24 includes a plurality of residual units 400 connected in sequence, which upsample the image output by the second fusion module 23; between two adjacent residual units 400, bicubic interpolation is used to upsample the image size by a factor of 2. For example, if the size of the image output by the second fusion module 23 is 2×2, an image of size 4×4 is obtained after the first residual unit 400, an image of size 8×8 after the second, and so on. The number of residual units 400 may be determined according to the size of the image output by the second fusion module 23 and the size of the image to be obtained.
- FIG. 4 exemplarily shows the case where the second upsampling module 24 includes two residual units 400 .
- Each residual unit 400 includes three convolution layers 401: the first convolution layer 401 has a kernel size of 1×1×64 with 64 channels, the second has a kernel size of 3×3×64 with 16 channels, and the third has a kernel size of 3×3×16 with 64 channels.
- Each residual unit 400 further includes an activation function 402. After the second convolution layer 401 performs the convolution operation, the activation function 402 also needs to perform nonlinear processing on the data after the convolution operation.
- the activation function 402 is a ReLU function.
- each residual unit 400 in this embodiment fuses the image output by the third convolutional layer 401 after its convolution operation with the image input to the second convolutional layer 401.
- in addition, each residual unit 400 fuses the image output by its third convolution layer 401 with the input image of the corresponding residual unit 300 in the second downsampling module 22.
- the second reconstruction module 25 is a convolution layer, the size of the convolution kernel of the convolution layer is 1 ⁇ 1 ⁇ 64, the number of channels of the convolution kernel is 1, and the second reconstruction module 25 is used to reconstruct and generate a low-dose estimated image.
- the image output by the third convolution layer 301 in the last residual unit 300 of the second downsampling module 22 is fused with the image input to the second convolution layer 301, and the fused image is output to the second reconstruction module 25 to retain more contextual information.
- in step S4, the loss function constructed from the low-dose image, the low-dose estimated image, the standard dose image, and the standard-dose estimated image is expressed as follows:

  L_Total = (1/n) · Σ_{i=1..n} [ λ1 · ||X_i − X̂_i||² + λ2 · ||Y_i − Ŷ_i||² ]

- where L_Total represents the loss function, n represents the number of input parameter groups in the training data set, λ1 and λ2 represent weight factors, X_i and Y_i represent the ith low-dose image and the ith standard dose image, respectively, and X̂_i and Ŷ_i represent the ith low-dose estimated image and the ith standard-dose estimated image, respectively.
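A minimal NumPy sketch of this weighted mean-square-error loss, assuming the MSE formulation described in the text (weight factors default to 1.0 here as an illustrative choice, not a value taken from the patent):

```python
import numpy as np

def total_loss(X, X_hat, Y, Y_hat, lam1=1.0, lam2=1.0):
    # X/X_hat: low-dose images and their estimates
    # Y/Y_hat: standard dose images and their estimates
    n = len(X)
    terms = [
        lam1 * np.mean((X[i] - X_hat[i]) ** 2)
        + lam2 * np.mean((Y[i] - Y_hat[i]) ** 2)
        for i in range(n)
    ]
    return sum(terms) / n

rng = np.random.default_rng(3)
imgs = [rng.random((8, 8)) for _ in range(4)]
print(total_loss(imgs, imgs, imgs, imgs))   # zero when estimates are exact
```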
- the loss function is constructed according to the mean square error between the low-dose image and the low-dose estimated image, and between the standard dose image and the standard-dose estimated image.
- this embodiment can also construct the loss function in other ways; for example, the loss function may be constructed from the absolute value error between the low-dose image and the low-dose estimated image and between the standard dose image and the standard-dose estimated image. This is shown only as an example and is not intended as a limitation.
- a corresponding optimization method can be selected according to the actual application. For example, if the low-dose image denoising network G1 in this embodiment is applied with supervised learning, the Adam optimization method is used to optimize the loss function; if it is applied in a generative adversarial model, the SGD optimization method is used.
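For reference, a single Adam update step on a scalar parameter can be sketched as follows; the hyperparameters are the commonly used defaults, assumed here rather than taken from the patent:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad                 # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2            # second-moment estimate
    m_hat = m / (1 - b1 ** t)                    # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.array([1.0])
m = v = np.zeros_like(theta)
for t in range(1, 101):                          # minimize f(theta) = theta^2
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(theta)                                     # moved from 1.0 toward 0
```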
- an updated low-dose image denoising network can be obtained.
- the dose level information is fused into the image reconstruction process, so that the trained low-dose image denoising network can be applied to images of different dose levels, that is, to different anatomical structures or to the same anatomical structure at different dose levels, which improves the robustness of the denoising method.
- Figures 5a-5d exemplarily show schematic diagrams of the standard-dose image, low-dose image, low-dose estimated image, and standard-dose estimated image in this embodiment.
- The low-dose image reconstructed by the low-dose image denoising network of this embodiment preserves image details well, and the reconstructed image has high definition.
- the training network in this embodiment further includes a low-dose image discrimination network D1, and each input parameter group in this embodiment also includes a dose level value corresponding to the low-dose image.
- The low-dose image discrimination network is used to generate dose-level prediction values from the low-dose image and the low-dose estimated image.
- the low-dose image discrimination network D1 includes a plurality of first numerical constraint layers 31 , a first tiling layer 32 , and a first fully connected layer 33 that are connected in sequence.
- the first numerical constraint layer 31 includes a convolution layer 310, a regularization layer 311 and an activation function 312 connected in sequence, wherein the activation function 312 is a Leaky ReLU function.
- This embodiment exemplarily shows that the low-dose image discrimination network D1 includes seven first numerical constraint layers 31; the network parameters of the convolutional layers 310 in the seven first numerical constraint layers 31 and of the first fully connected layer 33 are shown in the following table:
Unit | Kernel | Channels | Stride
First convolutional layer | 3x3x1 | 32 | 1
Second convolutional layer | 3x3x32 | 32 | 2
Third convolutional layer | 3x3x32 | 32 | 1
Fourth convolutional layer | 3x3x32 | 32 | 2
Fifth convolutional layer | 3x3x32 | 32 | 1
Sixth convolutional layer | 3x3x32 | 32 | 2
Seventh convolutional layer | 3x3x32 | 32 | 1
First fully connected layer | 5 | - | -
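As a quick sanity check on the layer parameters above, the spatial size reaching the first tiling layer can be computed with the standard convolution output-size formula. A minimal sketch, assuming 3×3 kernels with padding 1 and a 512×512 input (padding and input size are assumptions, not stated in the patent):

```python
def conv_out(size, kernel=3, stride=1, pad=1):
    # Standard convolution output-size formula; padding 1 is an assumption.
    return (size + 2 * pad - kernel) // stride + 1

strides = [1, 2, 1, 2, 1, 2, 1]  # strides of the seven constraint layers
size = 512
for s in strides:
    size = conv_out(size, stride=s)
print(size, size * size * 32)  # spatial size and flattened feature count
```

Under these assumptions, the three stride-2 layers halve the resolution three times (512 to 64), and the tiling layer flattens a 64×64×32 tensor before the fully connected layer outputs the five dose-level scores.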
- The low-dose image discrimination network D1 is used to numerically constrain the low-dose image and the low-dose estimated image, so that the final output dose-level prediction value is restricted to the range 0-1, avoiding excessively large values that would hinder classification.
- The dose-level prediction values include the dose-level prediction value of the low-dose image and that of the low-dose estimated image.
- Step S4 in this embodiment also differs from that in the first embodiment. Specifically, step S4 constructs the loss function from the low-dose image, the low-dose estimated image, the standard-dose image, the standard-dose estimated image, and the dose-level prediction values; the expression is as follows:
- L_Total represents the loss function
- n represents the number of input parameter groups in the training data set
- α₁, α₂, α₅, and α₆ represent the weighting factors, respectively
- X_i and Y_i represent the ith low-dose image and the ith standard-dose image, respectively
- X̂_i and Ŷ_i represent the ith low-dose estimated image and the ith standard-dose estimated image, respectively
- r₃^X and r₃^X̂ represent the dose-level prediction values of the low-dose image and the low-dose estimated image, respectively
- d and d̂ represent the dose-level value and the dose-level estimate, respectively
- CrossEntropy represents the cross-entropy.
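The formula itself appears only as an image in the published application. Based purely on the variable definitions above, one plausible reconstruction of the Embodiment 2 loss adds weighted cross-entropy terms for the two dose-level predictions to the MSE terms of Embodiment 1 (the exact pairing of α₅/α₆ with the two terms is an assumption):

```latex
L_{Total} = \frac{1}{n}\sum_{i=1}^{n}\left(
  \alpha_1\left\|X_i-\hat{X}_i\right\|_2^2
+ \alpha_2\left\|Y_i-\hat{Y}_i\right\|_2^2\right)
+ \alpha_5\,\mathrm{CrossEntropy}\!\left(r_3^{X},d\right)
+ \alpha_6\,\mathrm{CrossEntropy}\!\left(r_3^{\hat{X}},\hat{d}\right)
```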
- On the basis of the first embodiment, a low-dose image discrimination network D1 is added and used to obtain the dose-level prediction values of the low-dose image and of the low-dose estimated image; these prediction values are then taken into account when constructing the loss function, which further improves the robustness of the denoising method.
- The training network in this embodiment further includes a standard-dose image discrimination network D2, which is used to generate second true-false prediction values from the standard-dose image and the standard-dose estimated image.
- the low-dose image discrimination network D1 in this embodiment further includes a second fully-connected layer 34 connected to the first tiling layer 32 .
- the standard dose image discrimination network D2 includes a plurality of second numerical constraint layers 41 , a second tiling layer 42 , and a third fully connected layer 43 connected in sequence.
- the second numerical constraint layer 41 includes a convolution layer 410, a regularization layer 411 and an activation function 412 connected in sequence, wherein the activation function 412 is a Leaky ReLU function.
- This embodiment exemplarily shows that the standard-dose image discrimination network D2 includes seven second numerical constraint layers 41.
- The network parameters of the convolutional layers 410 in the seven second numerical constraint layers 41, the third fully connected layer 43, and the second fully connected layer 34 are shown in the following table:
Unit | Kernel | Channels | Stride
First convolutional layer | 3x3x32 | 32 | 2
Second convolutional layer | 3x3x32 | 64 | 1
Third convolutional layer | 3x3x64 | 64 | 2
Fourth convolutional layer | 3x3x64 | 128 | 1
Fifth convolutional layer | 3x3x128 | 128 | 2
Sixth convolutional layer | 3x3x128 | 256 | 1
Seventh convolutional layer | 3x3x256 | 256 | 2
Second fully connected layer | 1 | - | -
Third fully connected layer | 1 | - | -
- The standard-dose image discrimination network D2 is used to numerically constrain the standard-dose image and the standard-dose estimated image, so that the final output second true-false prediction value is constrained to between 0 and 1.
- The low-dose image discrimination network D1 is also used to numerically constrain the low-dose image and the low-dose estimated image to generate a first true-false prediction value, so that the final output first true-false prediction value is constrained to between 0 and 1.
- The first true-false prediction values include those of the low-dose image and of the low-dose estimated image, the second true-false prediction values include those of the standard-dose image and of the standard-dose estimated image, and the first and second true-false prediction values are 0 or 1.
- Step S4 in this embodiment also differs from that in the second embodiment. Specifically, step S4 constructs the loss function from the low-dose image, the low-dose estimated image, the standard-dose image, the standard-dose estimated image, the dose-level prediction values, the first true-false prediction values, and the second true-false prediction values; the expression of the loss function is as follows:
- L_Total represents the loss function
- n represents the number of input parameter groups in the training data set
- α₁, α₂, α₃, α₄, α₅, and α₆ represent the weighting factors, respectively
- X_i and Y_i represent the ith low-dose image and the ith standard-dose image, respectively
- X̂_i and Ŷ_i represent the ith low-dose estimated image and the ith standard-dose estimated image, respectively
- r₁^X and r₁^X̂ represent the first true-false prediction values of the low-dose image and the low-dose estimated image, respectively
- r₂^Y and r₂^Ŷ represent the second true-false prediction values of the standard-dose image and the standard-dose estimated image, respectively
- λ represents a balance factor
- r₃^X and r₃^X̂ represent the dose-level prediction values of the low-dose image and the low-dose estimated image, respectively
- d and d̂ represent the dose-level value and the dose-level estimate, respectively
- E represents the expectation, and CrossEntropy represents the cross-entropy.
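The formula is again lost to extraction. Using the standard GAN log-likelihood form as a stand-in for the adversarial terms, one plausible reading, consistent with the variable definitions above, is the following; the precise adversarial terms, their sign conventions, and the placement of the balance factor λ cannot be recovered from the text, so this is only a hedged reconstruction:

```latex
L_{Total} = \frac{1}{n}\sum_{i=1}^{n}\left(
  \alpha_1\left\|X_i-\hat{X}_i\right\|_2^2
+ \alpha_2\left\|Y_i-\hat{Y}_i\right\|_2^2\right)
+ \alpha_3\left(\mathbb{E}\!\left[\log r_1^{X}\right]
              + \lambda\,\mathbb{E}\!\left[\log\!\left(1-r_1^{\hat{X}}\right)\right]\right)
+ \alpha_4\left(\mathbb{E}\!\left[\log r_2^{Y}\right]
              + \lambda\,\mathbb{E}\!\left[\log\!\left(1-r_2^{\hat{Y}}\right)\right]\right)
+ \alpha_5\,\mathrm{CrossEntropy}\!\left(r_3^{X},d\right)
+ \alpha_6\,\mathrm{CrossEntropy}\!\left(r_3^{\hat{X}},\hat{d}\right)
```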
- On the basis of the second embodiment, the standard-dose image discrimination network D2 is added, and the low-dose image discrimination network D1 further includes the second fully connected layer 34. Through D2 and D1, the second true-false prediction values of the standard-dose image and the standard-dose estimated image, and the first true-false prediction values of the low-dose image and the low-dose estimated image, can be obtained; these values are then taken into account when constructing the loss function, which introduces adversarial loss information, improves the visual effect, and further improves the image reconstruction quality.
- this embodiment provides a training system for a low-dose image denoising network.
- the training system includes a training data set acquisition module 100 , a network construction module 101 , and a training module 102 .
- the training data set acquisition module 100 is configured to acquire a training data set, the training data set includes a plurality of input parameter groups, and each input parameter group includes a standard dose image and a low dose image.
- The training data set in this embodiment is D = {(x₁, y₁), (x₂, y₂), …, (x_i, y_i), …, (x_n, y_n)}, where:
- n represents the number of input parameter groups in the training data set
- x_i represents the low-dose image in the ith input parameter group
- y_i represents the standard-dose image in the ith input parameter group
- The n low-dose images {x₁, x₂, …, x_i, …, x_n} are low-dose CT images of different anatomical regions, or CT images of the same anatomical region at different dose levels; that is, the dose levels of the n low-dose images {x₁, x₂, …, x_i, …, x_n} are not all identical
- the different anatomical parts may include the skull, the orbit, the sinuses, the neck, the lung cavity, the abdomen, the pelvic cavity (male), the pelvic cavity (female), the knee, and the lumbar spine.
- low-dose images and standard-dose images in the training data set used for training in this embodiment are selected from sample data sets commonly used in the art, which are not specifically limited here.
- The network building module 101 is used to establish a training network, the training network including a low-dose image denoising network and a low-dose image generation network; the training data set is input into the training network, the low-dose image denoising network generates, from the low-dose image, a dose-level estimate and a standard-dose estimated image, and the low-dose image generation network generates a low-dose estimated image from the standard-dose estimated image and the dose-level estimate.
- The training module 102 is configured to construct a loss function from the low-dose image, the low-dose estimated image, the standard-dose image, and the standard-dose estimated image, optimize the loss function, obtain the parameters of the low-dose image denoising network, and update the low-dose image denoising network.
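The interaction of these modules can be sketched as a single training iteration. The networks below are toy stand-ins for G1 and G2 (their real architectures are described in the embodiments), and the internal constants are illustrative assumptions, not values from the patent:

```python
def g1_denoise(x):
    """Stand-in for G1: returns (dose-level estimate, standard-dose estimate)."""
    dose_level_est = [0, 1, 0, 0, 0]          # pretend one-hot level estimate
    y_hat = [v * 0.5 for v in x]              # pretend denoised image
    return dose_level_est, y_hat

def g2_generate(y_hat, dose_level_est):
    """Stand-in for G2: re-synthesizes a low-dose image from (Y_hat, level)."""
    noise_gain = 1 + dose_level_est.index(1)  # pretend level-dependent noise
    return [v * noise_gain for v in y_hat]

def training_step(x, y, alpha1=1.0, alpha2=1.0):
    # G1: X -> (dose-level estimate, Y_hat); G2: (Y_hat, level) -> X_hat
    dose_level_est, y_hat = g1_denoise(x)
    x_hat = g2_generate(y_hat, dose_level_est)
    mse = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)
    # Embodiment 1 loss: weighted MSE between (X, X_hat) and (Y, Y_hat).
    return alpha1 * mse(x, x_hat) + alpha2 * mse(y, y_hat)

loss = training_step([2.0, 4.0], [1.0, 2.0])
```

In a real run, the loss would then be optimized (e.g. with Adam or SGD, as the embodiments describe) to update G1's parameters.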
- This embodiment provides a method for denoising low-dose CT images.
- The denoising method includes: inputting the low-dose image to be denoised into a low-dose image denoising network obtained by training with the training method for a low-dose image denoising network described in Embodiments 1 to 3, to obtain a reconstructed low-dose image.
- the denoising method in this embodiment includes two implementations.
- In the first implementation, the low-dose image denoising network already trained in Embodiments 1 to 3 is used directly as the denoising network for low-dose images; the low-dose image to be denoised is input into this network to obtain the reconstructed low-dose image.
- In the second implementation, the low-dose image denoising network is first trained using the training method described in Embodiments 1 to 3, and the low-dose image to be denoised is then input into the trained network to obtain the reconstructed low-dose image.
- the details of the original image can be better extracted, so that the reconstructed image is clearer.
- This embodiment provides a computer device including a processor 200, a memory 201, and a computer program stored in the memory 201; the processor 200 executes the computer program to implement the training methods described in Embodiments 1 to 3.
- the memory 201 may include a high-speed random access memory (Random Access Memory, RAM), and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
- the processor 200 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the training method described in Embodiments 1 to 3 may be completed by an integrated logic circuit of hardware in the processor 200 or an instruction in the form of software.
- The processor 200 may also be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
- the memory 201 is used to store a computer program. After the processor 200 receives the execution instruction, the computer program is executed to implement the training methods described in the first to third embodiments.
- This embodiment also provides a computer storage medium in which a computer program is stored; the processor 200 is configured to read and execute the computer program stored in the computer storage medium, so as to implement the training methods described in Embodiments 1 to 3.
- The above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
- When implemented in software, they may be implemented in whole or in part in the form of a computer program product.
- the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present invention are generated.
- the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
- The computer instructions may be stored in a computer storage medium or transmitted from one computer storage medium to another; for example, they may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, or microwave).
- the computer storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, etc. that includes an integration of one or more available media.
- the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVDs), or semiconductor media (eg, solid state disks (SSDs)), and the like.
- Embodiments of the present invention are described with reference to flowcharts and/or block diagrams of methods, apparatuses, and computer program products according to embodiments of the present invention. It will be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
- These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means, which implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
A training method for a low-dose image denoising network, a denoising method for low-dose images, a computer device, and a storage medium. In the training method, the low-dose image denoising network generates, from a low-dose image, a dose-level estimate of the low-dose image and a standard-dose estimated image; a low-dose estimated image is then generated from the standard-dose estimated image and the dose-level estimate; finally, a loss function is constructed from the low-dose image, the low-dose estimated image, the standard-dose image, and the standard-dose estimated image, and the loss function is optimized to obtain the parameters of the low-dose image denoising network. Dose-level information is thereby fused into the image reconstruction process, which improves the robustness of the denoising method and the quality of the reconstructed images.
Description
The present invention relates to the field of image reconstruction, and in particular to a training method for a low-dose image denoising network, a denoising method for low-dose images, a computer device, and a storage medium.
Computed tomography (CT) is an important imaging technique for obtaining internal structural information of an object in a non-destructive way. With advantages such as high resolution, high sensitivity, and multi-level imaging, it is one of the most widely installed medical diagnostic imaging devices in China and is used across many fields of clinical examination. However, CT scanning requires X-rays, and as the potential hazards of radiation have become better understood, the issue of CT radiation dose has attracted increasing attention. The ALARA (As Low As Reasonably Achievable) principle requires that the radiation dose to the patient be kept as low as possible while still meeting the needs of clinical diagnosis. As the dose decreases, however, more noise appears in the imaging process, degrading image quality. Developing new low-dose CT imaging methods that preserve CT image quality while reducing harmful radiation dose is therefore of great scientific significance and practical value for medical diagnosis. Because different anatomical regions are scanned with different radiation doses, and existing low-dose CT methods are usually based on a single anatomical region, their robustness is poor.
Summary of the Invention
To remedy the deficiencies of the prior art, the present invention provides a training method for a low-dose image denoising network, a denoising method for low-dose images, a computer device, and a storage medium, which fuse dose-level information into the image reconstruction process and thereby improve the robustness of the denoising method and the quality of the reconstructed images.
The specific technical solution proposed by the present invention is as follows: a training method for a low-dose image denoising network is provided, the training method comprising:
acquiring a training data set, the training data set comprising a plurality of input parameter groups, each input parameter group comprising a standard-dose image and a low-dose image;
establishing a training network, the training network comprising a low-dose image denoising network and a low-dose image generation network;
inputting the training data set into the training network, the low-dose image denoising network generating, from the low-dose image, a dose-level estimate of the low-dose image and a standard-dose estimated image, and the low-dose image generation network generating a low-dose estimated image from the standard-dose estimated image and the dose-level estimate;
constructing a loss function from the low-dose image, the low-dose estimated image, the standard-dose image, and the standard-dose estimated image; and
optimizing the loss function to obtain the parameters of the low-dose image denoising network and update the low-dose image denoising network.
Further, the low-dose image denoising network comprises a first feature extraction module, a first downsampling module, a dose-level generation module, a first fusion module, a first upsampling module, and a first reconstruction module connected in sequence; the dose-level generation module is used to generate the dose-level estimate, and the first fusion module is used to fuse the dose-level estimate with the feature information of the low-dose image.
Further, the low-dose image generation network comprises a second feature extraction module, a second downsampling module, a second fusion module, a second upsampling module, and a second reconstruction module connected in sequence; the second fusion module is used to fuse the dose-level estimate with the feature information of the standard-dose estimated image.
Further, each input parameter group further comprises a dose-level value corresponding to the low-dose image; the training network further comprises a low-dose image discrimination network, which is used to generate dose-level prediction values from the low-dose image and the low-dose estimated image; and constructing the loss function from the low-dose image, the low-dose estimated image, the standard-dose image, and the standard-dose estimated image comprises:
constructing the loss function from the low-dose image, the low-dose estimated image, the standard-dose image, the standard-dose estimated image, and the dose-level prediction values.
Further, the low-dose image discrimination network comprises a plurality of first numerical constraint layers, a first tiling layer, and a first fully connected layer.
Further, the training network further comprises a standard-dose image discrimination network, and the low-dose image discrimination network further comprises a second fully connected layer; the low-dose image discrimination network is further used to generate first true-false prediction values from the low-dose image and the low-dose estimated image, and the standard-dose image discrimination network is used to generate second true-false prediction values from the standard-dose image and the standard-dose estimated image; constructing the loss function from the low-dose image, the low-dose estimated image, the standard-dose image, and the standard-dose estimated image comprises:
constructing the loss function from the low-dose image, the low-dose estimated image, the standard-dose image, the standard-dose estimated image, the dose-level prediction values, the first true-false prediction values, and the second true-false prediction values.
Further, the standard-dose image discrimination network comprises a plurality of second numerical constraint layers, a second tiling layer, and a third fully connected layer.
The present invention further provides a denoising method for low-dose images, the denoising method comprising: inputting a low-dose image to be denoised into a low-dose image denoising network obtained by the training method for a low-dose image denoising network described above, to obtain a reconstructed low-dose image.
The present invention further provides a computer device comprising a memory, a processor, and a computer program stored in the memory, the processor executing the computer program to implement any of the training methods described above.
The present invention further provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement any of the training methods described above.
In the training method for a low-dose image denoising network provided by the present invention, the low-dose image denoising network generates, from the low-dose image, a dose-level estimate of the low-dose image and a standard-dose estimated image; a low-dose estimated image is then generated from the standard-dose estimated image and the dose-level estimate; finally, a loss function is constructed from the low-dose image, the low-dose estimated image, the standard-dose image, and the standard-dose estimated image, and is optimized to obtain the parameters of the low-dose image denoising network. Dose-level information is thereby fused into the image reconstruction process, improving the robustness of the denoising method and the quality of the reconstructed images.
FIG. 1 is a flowchart of the training method for a low-dose image denoising network in Embodiment 1 of the present invention;
FIG. 2 is a schematic structural diagram of the training network in Embodiment 1 of the present invention;
FIG. 3 is a schematic structural diagram of the low-dose image denoising network in Embodiment 1 of the present invention;
FIG. 4 is a schematic structural diagram of the low-dose image generation network in Embodiment 1 of the present invention;
FIGS. 5a-5d are schematic diagrams of the standard-dose image, low-dose image, low-dose estimated image, and standard-dose estimated image in Embodiment 1 of the present invention;
FIG. 6 is a schematic structural diagram of the low-dose image discrimination network in Embodiment 2 of the present invention;
FIG. 7 is a detailed structural diagram of the low-dose image discrimination network in Embodiment 2 of the present invention;
FIG. 8 is a schematic structural diagram of the standard-dose image discrimination network in Embodiment 3 of the present invention;
FIG. 9 is a detailed structural diagram of the standard-dose image discrimination network in Embodiment 3 of the present invention;
FIG. 10 is a schematic structural diagram of the low-dose image discrimination network in Embodiment 3 of the present invention;
FIG. 11 is a detailed structural diagram of the low-dose image discrimination network in Embodiment 3 of the present invention;
FIG. 12 is a schematic structural diagram of the training system for a low-dose image denoising network in Embodiment 4 of the present invention;
FIG. 13 is a schematic structural diagram of the computer device in Embodiment 6 of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as limited to the specific embodiments set forth herein. Rather, these embodiments are provided to explain the principles of the present invention and its practical application, so that others skilled in the art can understand the various embodiments of the present invention and the various modifications suited to a particular intended application. In the drawings, the same reference numerals will always be used to denote the same elements.
The training method for a low-dose image denoising network proposed by the present invention includes:
acquiring a training data set, the training data set including a plurality of input parameter groups, each input parameter group including a standard-dose image and a low-dose image;
establishing a training network, the training network including a low-dose image denoising network and a low-dose image generation network;
inputting the training data set into the training network, where the low-dose image denoising network generates, from the low-dose image, a dose-level estimate of the low-dose image and a standard-dose estimated image, and the low-dose image generation network generates a low-dose estimated image from the standard-dose estimated image and the dose-level estimate;
constructing a loss function from the low-dose image, the low-dose estimated image, the standard-dose image, and the standard-dose estimated image; and
optimizing the loss function to obtain the parameters of the low-dose image denoising network and update the low-dose image denoising network.
In the training method provided by the present invention, the low-dose image denoising network generates a dose-level estimate and a standard-dose estimated image from the low-dose image, and a low-dose estimated image is then generated from the standard-dose estimated image and the dose-level estimate, so that dose-level information is fused into the image reconstruction process, improving the robustness of the denoising method and the quality of the reconstructed images.
Taking CT images as an example, the training method for a low-dose image denoising network, the denoising method for low-dose images, the computer device, and the storage medium of the present application are described in detail below through several specific embodiments with reference to the accompanying drawings. It should be noted that using CT images as an example does not limit the application field of the present application; the present application can also be applied to other medical imaging fields such as PET and SPECT.
Embodiment 1
Referring to FIG. 1, the training method for a low-dose image denoising network in this embodiment includes the following steps:
S1. Acquire a training data set, where the training data set includes a plurality of input parameter groups, and each input parameter group includes a standard-dose image and a low-dose image.
S2. Establish a training network, where the training network includes a low-dose image denoising network and a low-dose image generation network.
S3. Input the training data set into the training network; the low-dose image denoising network generates, from the low-dose image, a dose-level estimate of the low-dose image and a standard-dose estimated image, and the low-dose image generation network generates a low-dose estimated image from the standard-dose estimated image and the dose-level estimate.
S4. Construct a loss function from the low-dose image, the low-dose estimated image, the standard-dose image, and the standard-dose estimated image.
S5. Optimize the loss function to obtain the parameters of the low-dose image denoising network and update the low-dose image denoising network.
Specifically, in step S1, the training data set in this embodiment is
D = {(x₁, y₁), (x₂, y₂), …, (x_i, y_i), …, (x_n, y_n)},
where n denotes the number of input parameter groups in the training data set, x_i denotes the low-dose image in the ith input parameter group, and y_i denotes the standard-dose image in the ith input parameter group. The n low-dose images {x₁, x₂, …, x_i, …, x_n} are low-dose CT images of different anatomical regions, or CT images of the same anatomical region at different dose levels; that is, the dose levels of the n low-dose images {x₁, x₂, …, x_i, …, x_n} are not all identical. Any pair x_i and y_i sharing the same index in {x₁, x₂, …, x_n} and {y₁, y₂, …, y_n} denotes a low-dose CT image and a standard-dose CT image of the same anatomical region, or a low-dose CT image and a standard-dose CT image at the same dose level. The different anatomical regions may include the skull, orbit, sinuses, neck, lung cavity, abdomen, pelvic cavity (male), pelvic cavity (female), knee, and lumbar spine.
It should be noted that the low-dose images and standard-dose images in the training data set used for training in this embodiment are selected from sample data sets commonly used in the art, and are not specifically limited here.
Referring to FIG. 2, the training network in this embodiment includes a low-dose image denoising network G1 and a low-dose image generation network G2. The training data set is input into G1, which generates, from the low-dose image, a dose-level estimate of the low-dose image and a standard-dose estimated image and outputs them to G2; G2 then generates a low-dose estimated image from the standard-dose estimated image and the dose-level estimate.
Referring to FIG. 3, the low-dose image denoising network G1 includes a first feature extraction module 11, a first downsampling module 12, a dose-level generation module 13, a first fusion module 14, a first upsampling module 15, and a first reconstruction module 16 connected in sequence; the dose-level generation module 13 is used to generate the dose-level estimate, and the first fusion module 14 is used to fuse the dose-level estimate with the feature information of the low-dose image.
As an example, the first feature extraction module 11 is a convolutional layer whose kernel size is 1×1×1 with 64 channels; through the first feature extraction module 11, the low-dose image can be mapped to 64 channels.
The first downsampling module 12 includes a plurality of residual units 100 connected in sequence, which are used to downsample the low-dose image; between two adjacent residual units 100, max pooling is used to downsample the image size by a factor of 2. For example, if the low-dose image is 512×512, an image of 256×256 is obtained after downsampling by the first residual unit 100, an image of 128×128 after the second residual unit 100, and so on. The number of residual units 100 can be determined according to the actual size of the low-dose image and the desired output size; FIG. 3 exemplarily shows the case where the first downsampling module 12 includes two residual units 100.
Each residual unit 100 includes three convolutional layers 101: the first has a 1×1×64 kernel with 64 channels, the second a 3×3×64 kernel with 16 channels, and the third a 3×3×16 kernel with 64 channels. Each residual unit 100 further includes an activation function 102; after the convolution of the second convolutional layer 101, the data is additionally processed nonlinearly through the activation function 102, which is a ReLU function.
Preferably, to avoid information loss caused by repeated downsampling, each residual unit 100 in this embodiment fuses the image output by the third convolutional layer 101 with the image output by the second convolutional layer 101.
The dose-level generation module 13 includes a pooling layer 130, a pooling layer 131, and a convolutional layer 132; the pooling layers 130 and 131 are respectively connected between the first downsampling module 12 and the convolutional layer 132. The pooling layer 130 uses average pooling to map the output of the first downsampling module 12 to 1×1×64, and the pooling layer 131 uses max pooling to map the output of the first downsampling module 12 to 1×1×64. The convolutional layer 132 has a 1×1×128 kernel with 5 channels, where the number of channels of the kernel equals the number of dose levels; in this embodiment, the number of dose levels is set to 5 as an example, as shown in the following table:
Table 1. Dose levels
Scan current (mA) | Dose level
0-30 | Level 1
30-130 | Level 2
130-230 | Level 3
230-330 | Level 4
≥330 | Level 5
The convolutional layer 132 maps the images output by the pooling layers 130 and 131 to 1×1×5; for example, an output of {0, 1, 0, 0, 0} from the convolutional layer 132 indicates that the dose-level estimate is Level 2, that is, the image is a CT image acquired at a scan current of 30-130 mA.
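The mapping from scan current to a one-hot dose-level vector implied by Table 1 can be sketched as follows; how the exact boundary values (30, 130, 230, 330 mA) are assigned to a level is an assumption, since the table leaves the boundaries ambiguous:

```python
def dose_level_one_hot(current_ma):
    # Bin edges follow Table 1 (0-30, 30-130, 130-230, 230-330, >=330 mA);
    # assigning a boundary value to the higher bin is an assumption.
    edges = [30, 130, 230, 330]
    level = sum(current_ma >= e for e in edges)
    one_hot = [0] * 5
    one_hot[level] = 1
    return one_hot
```

For example, a 100 mA scan falls in the 30-130 mA bin, so the vector is `[0, 1, 0, 0, 0]`, matching the {0, 1, 0, 0, 0} Level 2 example in the text.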
The first fusion module 14 includes a convolutional layer 140 and an activation function (not labeled). The kernel size of the convolutional layer 140 is 1×1×5 with 64 channels. After the convolution of the convolutional layer 140, the activation function processes the data nonlinearly to generate a 64-channel weight mask; the activation function in this embodiment is a Sigmoid function. The first fusion module 14 then performs a pointwise multiplication between the 64-channel weight mask and the input image of the last residual unit 100 in the first downsampling module 12, thereby fusing the dose-level information into the image reconstruction process.
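The fusion step just described (a 1×1 convolution over the 5-dimensional level vector, a sigmoid to form a 64-channel weight mask, and a pointwise multiplication with the feature map) can be sketched in NumPy; the weights here are random stand-ins for the learned 1×1 convolution, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fuse_dose_level(features, level_one_hot, w):
    """Map the 5-dim dose-level vector to 64 channels (stand-in for the
    1x1 convolution), squash it into a (0, 1) weight mask with a sigmoid,
    and multiply the feature map channel-wise by the mask."""
    mask = sigmoid(w @ level_one_hot)          # (64,) weight mask in (0, 1)
    return features * mask[:, None, None]      # broadcast over H x W

features = rng.standard_normal((64, 8, 8))     # toy 64-channel feature map
w = rng.standard_normal((64, 5))               # stand-in 1x1 conv weights
fused = fuse_dose_level(features, np.eye(5)[1], w)  # Level 2 one-hot
```

Because the mask lies in (0, 1), the fusion re-weights each channel of the feature map according to the estimated dose level without changing its spatial size.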
The first upsampling module 15 includes a plurality of residual units 200 connected in sequence, which are used to upsample the image output by the first fusion module 14; between two adjacent residual units 200, bicubic interpolation is used to upsample the image size by a factor of 2. For example, if the image output by the first fusion module 14 is 2×2, a 4×4 image is obtained after upsampling by the first residual unit 200, an 8×8 image after the second residual unit 200, and so on. The number of residual units 200 can be determined according to the size of the image output by the first fusion module 14 and the desired output size; FIG. 3 exemplarily shows the case where the first upsampling module 15 includes two residual units 200.
Each residual unit 200 includes three convolutional layers 201: the first has a 1×1×64 kernel with 64 channels, the second a 3×3×64 kernel with 16 channels, and the third a 3×3×16 kernel with 64 channels. Each residual unit 200 further includes an activation function 202; after the convolution of the second convolutional layer 201, the data is additionally processed nonlinearly through the activation function 202, which is a ReLU function.
Preferably, to avoid information loss caused by repeated downsampling, each residual unit in this embodiment fuses the image output by its third convolutional layer with the image output by its second convolutional layer.
Further, to avoid information loss caused by repeated downsampling followed by repeated upsampling, each residual unit 200 in this embodiment fuses the image output by its third convolutional layer 201 with the input image of the corresponding residual unit 100 in the first downsampling module 12.
The first reconstruction module 16 is a convolutional layer whose kernel size is 1×1×64 with 1 channel; the standard-dose estimated image is reconstructed through the first reconstruction module 16.
Referring to FIG. 4, the low-dose image generation network G2 includes a second feature extraction module 21, a second downsampling module 22, a second fusion module 23, a second upsampling module 24, and a second reconstruction module 25 connected in sequence; the second fusion module 23 is used to fuse the dose-level estimate output by the dose-level generation module 13 with the feature information of the standard-dose estimated image.
Specifically, the second feature extraction module 21 is a convolutional layer whose kernel size is 1×1×1 with 64 channels; through the second feature extraction module 21, the standard-dose estimated image can be mapped to 64 channels.
The second downsampling module 22 includes a plurality of residual units 300 connected in sequence, which are used to downsample the standard-dose estimated image; between two adjacent residual units 300, max pooling is used to downsample the image size by a factor of 2. For example, if the standard-dose estimated image is 512×512, an image of 256×256 is obtained after downsampling by the first residual unit 300, an image of 128×128 after the second residual unit 300, and so on. The number of residual units 300 can be determined according to the actual size of the standard-dose estimated image and the desired output size; FIG. 4 exemplarily shows the case where the second downsampling module 22 includes two residual units 300.
Each residual unit 300 includes three convolutional layers 301: the first has a 1×1×64 kernel with 64 channels, the second a 3×3×64 kernel with 16 channels, and the third a 3×3×16 kernel with 64 channels. Each residual unit 300 further includes an activation function 302; after the convolution of the second convolutional layer 301, the data is additionally processed nonlinearly through the activation function 302, which is a ReLU function.
Preferably, to avoid information loss caused by repeated downsampling, each residual unit 300 in this embodiment fuses the image output by the third convolutional layer 301 with the image output by the second convolutional layer 301.
The second fusion module 23 includes a convolutional layer 230 and an activation function (not shown). The kernel size of the convolutional layer 230 is 1×1×5 with 64 channels; the convolutional layer 230 is used to map the dose-level estimate to 64 channels. After the convolution of the convolutional layer 230, the activation function (a Sigmoid function in this embodiment) processes the data nonlinearly to generate a 64-channel weight mask, and the second fusion module 23 then performs a pointwise multiplication between the 64-channel weight mask and the input image of the last residual unit 300 in the second downsampling module 22, thereby fusing the dose-level information into the image reconstruction process.
The second upsampling module 24 includes a plurality of residual units 400 connected in sequence, which are used to upsample the image output by the second fusion module 23; between two adjacent residual units 400, bicubic interpolation is used to upsample the image size by a factor of 2. For example, if the image output by the second fusion module 23 is 2×2, a 4×4 image is obtained after upsampling by the first residual unit 400, an 8×8 image after the second residual unit 400, and so on. The number of residual units 400 can be determined according to the size of the image output by the second fusion module 23 and the desired output size; FIG. 4 exemplarily shows the case where the second upsampling module 24 includes two residual units 400.
Each residual unit 400 includes three convolutional layers 401: the first has a 1×1×64 kernel with 64 channels, the second a 3×3×64 kernel with 16 channels, and the third a 3×3×16 kernel with 64 channels. Each residual unit 400 further includes an activation function 402; after the convolution of the second convolutional layer 401, the data is additionally processed nonlinearly through the activation function 402, which is a ReLU function.
Preferably, to avoid information loss caused by repeated downsampling, each residual unit 400 in this embodiment fuses the image output by the third convolutional layer 401 with the image output by the second convolutional layer 401.
Further, to avoid information loss caused by repeated downsampling followed by repeated upsampling, each residual unit 400 in this embodiment fuses the image output by its third convolutional layer 401 with the input image of the corresponding residual unit 300 in the second downsampling module 22.
The second reconstruction module 25 is a convolutional layer whose kernel size is 1×1×64 with 1 channel; the low-dose estimated image is reconstructed through the second reconstruction module 25. To avoid information loss, the image obtained by fusing the output of the third convolutional layer 301 of the last residual unit 300 in the second downsampling module 22 with the output of the second convolutional layer 301 is also output to the second reconstruction module 25, so as to retain more context information.
In step S4, the loss function constructed from the low-dose image, the low-dose estimated image, the standard-dose image, and the standard-dose estimated image is expressed as follows:
where L_Total denotes the loss function, n denotes the number of input parameter groups in the training data set, α₁ and α₂ denote the weighting factors, X_i and Y_i denote the ith low-dose image and the ith standard-dose image, respectively, and X̂_i and Ŷ_i denote the ith low-dose estimated image and the ith standard-dose estimated image, respectively.
In this embodiment, the loss function is constructed from the mean squared error between the low-dose image and the low-dose estimated image and between the standard-dose image and the standard-dose estimated image. Of course, this embodiment may also construct the loss function in other ways; for example, it may be constructed from the absolute error between the low-dose image and the low-dose estimated image and between the standard-dose image and the standard-dose estimated image. This is shown only as an example and is not limiting.
In step S5, a suitable optimization method can be selected according to the actual application to optimize the loss function. For example, if the low-dose image denoising network G1 of this embodiment is applied to supervised learning, the Adam optimization method is used to optimize the loss function; if G1 is applied to a generative adversarial model, the SGD optimization method is used.
After the above optimization, an updated low-dose image denoising network is obtained. In the training method provided by this embodiment, dose-level information is fused into the image reconstruction process, so that the trained low-dose image denoising network is applicable to images at different dose levels, that is, to different anatomical structures or to the same anatomical structure at different dose levels, which improves the robustness of the denoising method. FIGS. 5a-5d exemplarily show the standard-dose image, low-dose image, low-dose estimated image, and standard-dose estimated image in this embodiment; as can be seen from the figures, the low-dose image reconstructed by the low-dose image denoising network of this embodiment preserves image details well, and the reconstructed image also has high definition.
Embodiment 2
Referring to FIGS. 6-7, the training network in this embodiment further includes a low-dose image discrimination network D1, and each input parameter group in this embodiment further includes a dose-level value corresponding to the low-dose image. The low-dose image discrimination network is used to generate dose-level prediction values from the low-dose image and the low-dose estimated image.
Specifically, the low-dose image discrimination network D1 includes a plurality of first numerical constraint layers 31, a first tiling layer 32, and a first fully connected layer 33 connected in sequence.
Each first numerical constraint layer 31 includes a convolutional layer 310, a regularization layer 311, and an activation function 312 connected in sequence, where the activation function 312 is a Leaky ReLU function. This embodiment exemplarily shows that the low-dose image discrimination network D1 includes seven first numerical constraint layers 31; the network parameters of the convolutional layers 310 in the seven first numerical constraint layers 31 and of the first fully connected layer 33 are shown in the following table:
Table 2. Network parameters of the low-dose image discrimination network D1
Unit | Kernel | Channels | Stride
First convolutional layer | 3x3x1 | 32 | 1
Second convolutional layer | 3x3x32 | 32 | 2
Third convolutional layer | 3x3x32 | 32 | 1
Fourth convolutional layer | 3x3x32 | 32 | 2
Fifth convolutional layer | 3x3x32 | 32 | 1
Sixth convolutional layer | 3x3x32 | 32 | 2
Seventh convolutional layer | 3x3x32 | 32 | 1
First fully connected layer | 5 | - | -
The low-dose image discrimination network D1 is used to numerically constrain the low-dose image and the low-dose estimated image, so that the final output dose-level prediction value is restricted to the range 0-1, avoiding excessively large values that would hinder classification. It should be noted that the dose-level prediction values include the dose-level prediction value of the low-dose image and that of the low-dose estimated image.
Step S4 of this embodiment also differs from that of Embodiment 1. Specifically, step S4 constructs the loss function from the low-dose image, the low-dose estimated image, the standard-dose image, the standard-dose estimated image, and the dose-level prediction values; the loss function is expressed as follows:
where L_Total denotes the loss function, n denotes the number of input parameter groups in the training data set, α₁, α₂, α₅, and α₆ denote the weighting factors, X_i and Y_i denote the ith low-dose image and the ith standard-dose image, respectively, X̂_i and Ŷ_i denote the ith low-dose estimated image and the ith standard-dose estimated image, respectively, r₃^X and r₃^X̂ denote the dose-level prediction values of the low-dose image and the low-dose estimated image, respectively, d and d̂ denote the dose-level value and the dose-level estimate, respectively, and CrossEntropy denotes the cross-entropy.
On the basis of Embodiment 1, this embodiment adds the low-dose image discrimination network D1, through which the dose-level prediction values of the low-dose image and of the low-dose estimated image are obtained; these prediction values are then taken into account when constructing the loss function, which further improves the robustness of the denoising method.
Embodiment 3
Referring to FIGS. 8-11, the training network in this embodiment further includes a standard-dose image discrimination network D2, which is used to generate second true-false prediction values from the standard-dose image and the standard-dose estimated image. The low-dose image discrimination network D1 in this embodiment further includes a second fully connected layer 34 connected to the first tiling layer 32.
Specifically, the standard-dose image discrimination network D2 includes a plurality of second numerical constraint layers 41, a second tiling layer 42, and a third fully connected layer 43 connected in sequence.
Each second numerical constraint layer 41 includes a convolutional layer 410, a regularization layer 411, and an activation function 412 connected in sequence, where the activation function 412 is a Leaky ReLU function. This embodiment exemplarily shows that the standard-dose image discrimination network D2 includes seven second numerical constraint layers 41; the network parameters of the convolutional layers 410 in the seven second numerical constraint layers 41, the third fully connected layer 43, and the second fully connected layer 34 are shown in the following table:
Table 3. Network parameters of the standard-dose image discrimination network D2
Unit | Kernel | Channels | Stride
First convolutional layer | 3x3x32 | 32 | 2
Second convolutional layer | 3x3x32 | 64 | 1
Third convolutional layer | 3x3x64 | 64 | 2
Fourth convolutional layer | 3x3x64 | 128 | 1
Fifth convolutional layer | 3x3x128 | 128 | 2
Sixth convolutional layer | 3x3x128 | 256 | 1
Seventh convolutional layer | 3x3x256 | 256 | 2
Second fully connected layer | 1 | - | -
Third fully connected layer | 1 | - | -
The standard-dose image discrimination network D2 is used to numerically constrain the standard-dose image and the standard-dose estimated image, so that the final output second true-false prediction value is restricted to the range 0-1. The low-dose image discrimination network D1 is further used to numerically constrain the low-dose image and the low-dose estimated image to generate first true-false prediction values, so that the final output first true-false prediction value is likewise restricted to the range 0-1. The first true-false prediction values include those of the low-dose image and of the low-dose estimated image, the second true-false prediction values include those of the standard-dose image and of the standard-dose estimated image, and the first and second true-false prediction values are 0 or 1.
Step S4 of this embodiment also differs from that of Embodiment 2. Specifically, step S4 constructs the loss function from the low-dose image, the low-dose estimated image, the standard-dose image, the standard-dose estimated image, the dose-level prediction values, the first true-false prediction values, and the second true-false prediction values; the loss function is expressed as follows:
where L_Total denotes the loss function, n denotes the number of input parameter groups in the training data set, α₁, α₂, α₃, α₄, α₅, and α₆ denote the weighting factors, X_i and Y_i denote the ith low-dose image and the ith standard-dose image, respectively, X̂_i and Ŷ_i denote the ith low-dose estimated image and the ith standard-dose estimated image, respectively, r₁^X and r₁^X̂ denote the first true-false prediction values of the low-dose image and the low-dose estimated image, respectively, r₂^Y and r₂^Ŷ denote the second true-false prediction values of the standard-dose image and the standard-dose estimated image, respectively, λ denotes a balance factor, r₃^X and r₃^X̂ denote the dose-level prediction values of the low-dose image and the low-dose estimated image, respectively, d and d̂ denote the dose-level value and the dose-level estimate, respectively, E denotes the expectation, and CrossEntropy denotes the cross-entropy.
On the basis of Embodiment 2, this embodiment adds the standard-dose image discrimination network D2, and the low-dose image discrimination network D1 further includes the second fully connected layer 34. Through D2 and D1, the second true-false prediction values of the standard-dose image and the standard-dose estimated image, and the first true-false prediction values of the low-dose image and the low-dose estimated image, can be obtained; these values are then taken into account when constructing the loss function, which introduces adversarial loss information, improves the visual effect, and further improves the image reconstruction quality.
Embodiment 4
Referring to FIG. 12, this embodiment provides a training system for a low-dose image denoising network. The training system includes a training data set acquisition module 100, a network construction module 101, and a training module 102.
The training data set acquisition module 100 is configured to acquire a training data set, the training data set including a plurality of input parameter groups, each including a standard-dose image and a low-dose image. The training data set in this embodiment is:
D = {(x₁, y₁), (x₂, y₂), …, (x_i, y_i), …, (x_n, y_n)},
where n denotes the number of input parameter groups in the training data set, x_i denotes the low-dose image in the ith input parameter group, and y_i denotes the standard-dose image in the ith input parameter group. The n low-dose images {x₁, x₂, …, x_i, …, x_n} are low-dose CT images of different anatomical regions, or CT images of the same anatomical region at different dose levels; that is, the dose levels of the n low-dose images {x₁, x₂, …, x_i, …, x_n} are not all identical. Any pair x_i and y_i sharing the same index in {x₁, x₂, …, x_n} and {y₁, y₂, …, y_n} denotes a low-dose CT image and a standard-dose CT image of the same anatomical region, or a low-dose CT image and a standard-dose CT image at the same dose level. The different anatomical regions may include the skull, orbit, sinuses, neck, lung cavity, abdomen, pelvic cavity (male), pelvic cavity (female), knee, and lumbar spine.
It should be noted that the low-dose images and standard-dose images in the training data set used for training in this embodiment are selected from sample data sets commonly used in the art, and are not specifically limited here.
The network construction module 101 is configured to establish a training network, the training network including a low-dose image denoising network and a low-dose image generation network; the training data set is input into the training network, the low-dose image denoising network generates, from the low-dose image, a dose-level estimate of the low-dose image and a standard-dose estimated image, and the low-dose image generation network generates a low-dose estimated image from the standard-dose estimated image and the dose-level estimate.
The training module 102 is configured to construct a loss function from the low-dose image, the low-dose estimated image, the standard-dose image, and the standard-dose estimated image, optimize the loss function, obtain the parameters of the low-dose image denoising network, and update the low-dose image denoising network.
Embodiment 5
This embodiment provides a method for denoising low-dose CT images. The denoising method includes: inputting the low-dose image to be denoised into a low-dose image denoising network obtained by the training method for a low-dose image denoising network described in Embodiments 1 to 3, to obtain a reconstructed low-dose image.
It should be noted that the denoising method of this embodiment includes two implementations. In the first implementation, the low-dose image denoising network already trained in Embodiments 1 to 3 is used directly as the denoising network for low-dose images, and the low-dose image to be denoised is input into it to obtain the reconstructed low-dose image. In the second implementation, the low-dose image denoising network is first trained using the training method described in Embodiments 1 to 3, and the low-dose image to be denoised is then input into the trained network to obtain the reconstructed low-dose image.
With the denoising method of this embodiment, the details of the original image can be better extracted, so that the reconstructed image is clearer.
Embodiment 6
Referring to FIG. 13, this embodiment provides a computer device including a processor 200, a memory 201, and a computer program stored in the memory 201; the processor 200 executes the computer program to implement the training methods described in Embodiments 1 to 3.
The memory 201 may include a high-speed random access memory (RAM) and may also include a non-volatile memory, such as at least one disk memory.
The processor 200 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the training methods described in Embodiments 1 to 3 may be completed by integrated logic circuits of hardware in the processor 200 or by instructions in the form of software. The processor 200 may also be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The memory 201 is used to store the computer program; after receiving an execution instruction, the processor 200 executes the computer program to implement the training methods described in Embodiments 1 to 3.
This embodiment further provides a computer storage medium storing a computer program; the processor 200 is used to read and execute the computer program stored in the computer storage medium to implement the training methods described in Embodiments 1 to 3.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer storage medium or transmitted from one computer storage medium to another; for example, they may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, or microwave). The computer storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), and the like.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of methods, apparatuses, and computer program products according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means, which implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are executed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thereby provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above are only specific implementations of the present application. It should be pointed out that, for those of ordinary skill in the art, several improvements and refinements can be made without departing from the principles of the present application, and these improvements and refinements should also be regarded as falling within the scope of protection of the present application.
Claims (20)
- 一种低剂量图像去噪网络的训练方法,其中,所述训练方法包括:获取训练数据集,所述训练数据集包括多个输入参数组,每一个所述输入参数组包括标准剂量图像、低剂量图像;建立训练网络,所述训练网络包括低剂量图像去噪网络、低剂量图像生成网络;将所述训练数据集输入所述训练网络,所述低剂量图像去噪网络根据所述低剂量图像生成所述低剂量图像的剂量等级估计值及标准剂量估计图像;所述低剂量图像生成网络根据所述标准剂量估计图像和所述剂量等级估计值生成低剂量估计图像;根据所述低剂量图像、低剂量估计图像、标准剂量图像、标准剂量估计图像构建损失函数;对所述损失函数进行优化,获得所述低剂量图像去噪网络的参数并更新所述低剂量图像去噪网络。
- 根据权利要求1所述的训练方法,其中,所述低剂量图像去噪网络包括依次连接的第一特征提取模块、第一下采样模块、剂量等级生成模块、第一融合模块、第一下采样模块、第一重建模块,所述剂量等级生成模块用于生成剂量等级估计值,所述第一融合模块用于将所述剂量等级估计值与所述低剂量图像的特征信息进行融合。
- 根据权利要求2所述的训练方法,其中,所述低剂量图像生成网络包括依次连接的第二特征提取模块、第二下采样模块、第二融合模块、第二下采样模块、第二重建模块,所述第二融合模块用于将所述剂量等级估计值与所述标准剂量估计图像的特征信息进行融合。
- 根据权利要求1所述的训练方法,其中,每一个所述输入参数组还包括所述低剂量图像对应的剂量等级值;所述训练网络还包括低剂量图像判别网络,所述低剂量图像判别网络用于根据所述低剂量图像和所述低剂量估计图像生成剂量等级预测值;所述根据所述低剂量图像、低剂量估计图像、标准剂量图像、标准剂量估计图像构建损失函数,包括:根据所述低剂量图像、低剂量估计图像、标准剂量图像、标准剂量估计图像、剂量等级预测值构建损失函数。
- 根据权利要求2所述的训练方法,其中,每一个所述输入参数组还包括所述低剂量图像对应的剂量等级值;所述训练网络还包括低剂量图像判别网络,所述低剂量图像判别网络用于根据所述低剂量图像和所述低剂量估计图像生成剂量等级预测值;所述根据所述低剂量图像、低剂量估计图像、标准剂量图像、标准剂量估计图像构建损失函数,包括:根据所述低剂量图像、低剂量估计图像、标准剂量图像、标准剂量估计图像、剂量等级预测值构建损失函数。
- 根据权利要求3所述的训练方法,其中,每一个所述输入参数组还包括所述低剂量图像对应的剂量等级值;所述训练网络还包括低剂量图像判别网络,所述低剂量图像判别网络用于根据所述低剂量图像和所述低剂量估计图像生成剂量等级预测值;所述根据所述低剂量图像、低剂量估计图像、标准剂量图像、标准剂量估计图像构建损失函数,包括:根据所述低剂量图像、低剂量估计图像、标准剂量图像、标准剂量估计图像、剂量等级预测值构建损失函数。
- The training method according to claim 4, wherein the low-dose image discrimination network comprises a plurality of first value-constraint layers, a first flattening layer, and a first fully connected layer.
- The training method according to claim 5, wherein the low-dose image discrimination network comprises a plurality of first value-constraint layers, a first flattening layer, and a first fully connected layer.
- The training method according to claim 6, wherein the low-dose image discrimination network comprises a plurality of first value-constraint layers, a first flattening layer, and a first fully connected layer.
- The training method according to claim 7, wherein the training network further comprises a standard-dose image discrimination network; the low-dose image discrimination network further comprises a second fully connected layer and is further configured to generate a first authenticity prediction from the low-dose image and the estimated low-dose image; the standard-dose image discrimination network is configured to generate a second authenticity prediction from the standard-dose image and the estimated standard-dose image; and the constructing of a loss function from the low-dose image, the estimated low-dose image, the standard-dose image, and the estimated standard-dose image comprises: constructing the loss function from the low-dose image, the estimated low-dose image, the standard-dose image, the estimated standard-dose image, the dose-level prediction, the first authenticity prediction, and the second authenticity prediction.
- The training method according to claim 10, wherein the standard-dose image discrimination network comprises a plurality of second value-constraint layers, a second flattening layer, and a third fully connected layer.
- A denoising method for low-dose images, wherein the denoising method comprises: inputting a low-dose image to be denoised into a low-dose image denoising network obtained by a training method for a low-dose image denoising network, to obtain a reconstructed low-dose image, wherein the training method comprises: acquiring a training dataset, the training dataset comprising a plurality of input parameter groups, each input parameter group comprising a standard-dose image and a low-dose image; constructing a training network, the training network comprising a low-dose image denoising network and a low-dose image generation network; inputting the training dataset into the training network, the low-dose image denoising network generating, from the low-dose image, a dose-level estimate of the low-dose image and an estimated standard-dose image, and the low-dose image generation network generating an estimated low-dose image from the estimated standard-dose image and the dose-level estimate; constructing a loss function from the low-dose image, the estimated low-dose image, the standard-dose image, and the estimated standard-dose image; and optimizing the loss function to obtain parameters of the low-dose image denoising network and update the low-dose image denoising network.
- The denoising method according to claim 12, wherein the low-dose image denoising network comprises, connected in sequence, a first feature extraction module, a first downsampling module, a dose-level generation module, a first fusion module, a first upsampling module, and a first reconstruction module; the dose-level generation module is configured to generate the dose-level estimate, and the first fusion module is configured to fuse the dose-level estimate with feature information of the low-dose image.
- The denoising method according to claim 13, wherein the low-dose image generation network comprises, connected in sequence, a second feature extraction module, a second downsampling module, a second fusion module, a second upsampling module, and a second reconstruction module; the second fusion module is configured to fuse the dose-level estimate with feature information of the estimated standard-dose image.
- The denoising method according to claim 12, wherein each input parameter group further comprises a dose-level value corresponding to the low-dose image; the training network further comprises a low-dose image discrimination network configured to generate a dose-level prediction from the low-dose image and the estimated low-dose image; and the constructing of a loss function from the low-dose image, the estimated low-dose image, the standard-dose image, and the estimated standard-dose image comprises: constructing the loss function from the low-dose image, the estimated low-dose image, the standard-dose image, the estimated standard-dose image, and the dose-level prediction.
- The denoising method according to claim 15, wherein the low-dose image discrimination network comprises a plurality of first value-constraint layers, a first flattening layer, and a first fully connected layer.
- The denoising method according to claim 16, wherein the training network further comprises a standard-dose image discrimination network; the low-dose image discrimination network further comprises a second fully connected layer and is further configured to generate a first authenticity prediction from the low-dose image and the estimated low-dose image; the standard-dose image discrimination network is configured to generate a second authenticity prediction from the standard-dose image and the estimated standard-dose image; and the constructing of a loss function from the low-dose image, the estimated low-dose image, the standard-dose image, and the estimated standard-dose image comprises: constructing the loss function from the low-dose image, the estimated low-dose image, the standard-dose image, the estimated standard-dose image, the dose-level prediction, the first authenticity prediction, and the second authenticity prediction.
- The denoising method according to claim 17, wherein the standard-dose image discrimination network comprises a plurality of second value-constraint layers, a second flattening layer, and a third fully connected layer.
- A computer device comprising a memory, a processor, and a computer program stored on the memory, wherein the processor executes the computer program to implement a training method for a low-dose image denoising network, the training method comprising: acquiring a training dataset, the training dataset comprising a plurality of input parameter groups, each input parameter group comprising a standard-dose image and a low-dose image; constructing a training network, the training network comprising a low-dose image denoising network and a low-dose image generation network; inputting the training dataset into the training network, the low-dose image denoising network generating, from the low-dose image, a dose-level estimate of the low-dose image and an estimated standard-dose image, and the low-dose image generation network generating an estimated low-dose image from the estimated standard-dose image and the dose-level estimate; constructing a loss function from the low-dose image, the estimated low-dose image, the standard-dose image, and the estimated standard-dose image; and optimizing the loss function to obtain parameters of the low-dose image denoising network and update the low-dose image denoising network.
- The computer device according to claim 19, wherein the low-dose image denoising network comprises, connected in sequence, a first feature extraction module, a first downsampling module, a dose-level generation module, a first fusion module, a first upsampling module, and a first reconstruction module; the dose-level generation module is configured to generate the dose-level estimate, and the first fusion module is configured to fuse the dose-level estimate with feature information of the low-dose image.
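The data flow of the training network recited in claims 1 and 12 can be illustrated with a minimal NumPy sketch. The toy linear stand-in networks, the simulated noise model, and the specific two-term (denoising plus cycle-consistency) loss below are illustrative assumptions: the claims fix only which four images enter the loss function, not its exact form or the network internals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two sub-networks of claim 1. A real implementation
# would use convolutional modules; single linear layers keep the sketch
# runnable while preserving the claimed data flow.
W_denoise = rng.normal(scale=0.1, size=(64, 64))   # hypothetical denoising weights
W_generate = rng.normal(scale=0.1, size=(64, 64))  # hypothetical generation weights

def denoising_network(low_dose):
    """Maps a low-dose image to (estimated standard-dose image, dose-level estimate)."""
    est_standard = low_dose @ W_denoise
    # Scalar proxy for the dose level, derived from the removed noise.
    dose_level_est = float(np.mean(np.abs(low_dose - est_standard)))
    return est_standard, dose_level_est

def generation_network(est_standard, dose_level_est):
    """Re-injects noise at the estimated dose level to produce an estimated low-dose image."""
    return est_standard @ W_generate + dose_level_est * rng.normal(size=est_standard.shape)

def loss_fn(low_dose, est_low_dose, standard_dose, est_standard):
    # Loss built from the four images named in claim 1:
    # a denoising term plus a cycle-consistency term (illustrative choice).
    denoise_term = np.mean((est_standard - standard_dose) ** 2)
    cycle_term = np.mean((est_low_dose - low_dose) ** 2)
    return denoise_term + cycle_term

# One forward pass of the training network on a synthetic image pair.
standard_dose = rng.normal(size=(64, 64))
low_dose = standard_dose + 0.3 * rng.normal(size=(64, 64))  # simulated low-dose noise

est_standard, dose_level_est = denoising_network(low_dose)
est_low_dose = generation_network(est_standard, dose_level_est)
loss = loss_fn(low_dose, est_low_dose, standard_dose, est_standard)
print(f"dose level estimate: {dose_level_est:.3f}, loss: {loss:.3f}")
```

In the full method, the loss would then be optimized with respect to both sub-networks' parameters, and claims 4 through 11 extend it with dose-level and authenticity predictions from the discrimination networks.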
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011430758.0A CN112488951B (zh) | 2020-12-07 | 2020-12-07 | Training method for low-dose image denoising network and denoising method for low-dose images |
CN202011430758.0 | 2020-12-07 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022120694A1 true WO2022120694A1 (zh) | 2022-06-16 |
Family
ID=74940024
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/135188 WO2022120694A1 (zh) | 2020-12-07 | 2020-12-10 | Training method for low-dose image denoising network and denoising method for low-dose images |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112488951B (zh) |
WO (1) | WO2022120694A1 (zh) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109785243A (zh) * | 2018-11-28 | 2019-05-21 | Xidian University | Denoising method for unregistered low-dose CT based on generative adversarial network, and computer |
US20190221011A1 (en) * | 2018-01-12 | 2019-07-18 | Korea Advanced Institute Of Science And Technology | Method for processing x-ray computed tomography image using neural network and apparatus therefor |
CN110827216A (zh) * | 2019-10-23 | 2020-02-21 | University of Shanghai for Science and Technology | Multi-generator generative adversarial network learning method for image denoising |
CN110930318A (zh) * | 2019-10-31 | 2020-03-27 | Sun Yat-sen University | Low-dose CT image restoration and denoising method |
CN110992290A (zh) * | 2019-12-09 | 2020-04-10 | Shenzhen Institutes of Advanced Technology | Training method and system for low-dose CT image denoising network |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11580410B2 (en) * | 2018-01-24 | 2023-02-14 | Rensselaer Polytechnic Institute | 3-D convolutional autoencoder for low-dose CT via transfer learning from a 2-D trained network |
CN111179366B (zh) * | 2019-12-18 | 2023-04-25 | Shenzhen Institutes of Advanced Technology | Low-dose image reconstruction method and system based on anatomical structure difference prior |
2020
- 2020-12-07: CN application CN202011430758.0A (patent CN112488951B), status: Active
- 2020-12-10: WO application PCT/CN2020/135188 (publication WO2022120694A1), status: Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN112488951B (zh) | 2022-05-20 |
CN112488951A (zh) | 2021-03-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200210767A1 (en) | Method and systems for analyzing medical image data using machine learning | |
CN111179366B (zh) | Low-dose image reconstruction method and system based on anatomical structure difference prior | |
WO2022120883A1 (zh) | Training method for low-dose image denoising network and denoising method for low-dose images | |
US11514621B2 (en) | Low-dose image reconstruction method and system based on prior anatomical structure difference | |
CN111325695B (zh) | Low-dose image enhancement method and system based on multiple dose levels, and storage medium | |
WO2023044605A1 (zh) | Three-dimensional reconstruction method and device for brain structures in extreme environments, and readable storage medium | |
WO2023142781A1 (zh) | Three-dimensional image reconstruction method and apparatus, electronic device, and storage medium | |
CN117036162B (zh) | Residual feature attention fusion method for lightweight chest CT image super-resolution | |
US20240185484A1 (en) | System and method for image reconstruction | |
WO2020118830A1 (zh) | Dictionary training and image super-resolution reconstruction method, system, device, and storage medium | |
CN111292322B (zh) | Medical image processing method, apparatus, device, and storage medium | |
Chen et al. | DuSFE: Dual-Channel Squeeze-Fusion-Excitation co-attention for cross-modality registration of cardiac SPECT and CT | |
Murmu et al. | Deep learning model-based segmentation of medical diseases from MRI and CT images | |
US11455755B2 (en) | Methods and apparatus for neural network based image reconstruction | |
Jiao et al. | Fast PET reconstruction using multi-scale fully convolutional neural networks | |
WO2022120694A1 (zh) | Training method for low-dose image denoising network and denoising method for low-dose images | |
US10347014B2 (en) | System and method for image reconstruction | |
CN115908610A (zh) | Method for obtaining an attenuation-correction coefficient image from a single-modality PET image | |
US20230031910A1 (en) | Apriori guidance network for multitask medical image synthesis | |
CN115330600A (zh) | Lung CT image super-resolution method based on improved SRGAN | |
Xie et al. | 3D few-view CT image reconstruction with deep learning | |
Wang et al. | Optimization algorithm of CT image edge segmentation using improved convolution neural network | |
Yang et al. | Enhanced AI for Science using Diffusion-based Generative AI-A Case Study on Ultrasound Computing Tomography | |
US12125198B2 (en) | Image correction using an invertable network | |
US20230079353A1 (en) | Image correction using an invertable network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | EP: the EPO has been informed by WIPO that EP was designated in this application |
Ref document number: 20964632 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | EP: PCT application non-entry in European phase |
Ref document number: 20964632 Country of ref document: EP Kind code of ref document: A1 |