WO2021114105A1 - Training method and system for a low-dose CT image denoising network - Google Patents
Training method and system for a low-dose CT image denoising network
- Publication number
- WO2021114105A1 (PCT/CN2019/124377)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- layer
- channel
- image
- dose
- convolution
- Prior art date
Classifications
- G06T 5/70: Denoising; Smoothing (under G06T 5/00, Image enhancement or restoration)
- G06N 3/045: Combinations of networks (under G06N 3/00 Computing arrangements based on biological models; G06N 3/02 Neural networks; G06N 3/04 Architecture, e.g. interconnection topology)
- G06T 2207/10081: Computed x-ray tomography [CT] (under G06T 2207/10 Image acquisition modality; G06T 2207/10072 Tomographic images)
- G06T 2207/20081: Training; Learning (under G06T 2207/20 Special algorithmic details)
Definitions
- The invention relates to the technical field of low-dose CT image reconstruction, and in particular to a training method and system for a low-dose CT image denoising network.
- Computed tomography (CT) is an important imaging method for obtaining information about the internal structure of objects non-destructively. It offers advantages such as high resolution, high sensitivity, and multi-level imaging, is among the most widely installed medical imaging diagnostic equipment in China, and is used across many fields of clinical examination. However, because CT scanning requires X-rays, and as the potential hazards of radiation have become better understood, the issue of CT radiation dose has attracted increasing attention.
- The As Low As Reasonably Achievable (ALARA) principle requires that the radiation dose to the patient be reduced as much as possible while still satisfying the needs of clinical diagnosis. Researching and developing new low-dose CT imaging methods that preserve CT image quality while reducing the harmful radiation dose is therefore of great scientific significance and application value for medical diagnosis; with existing low-dose CT imaging methods, however, it is difficult to obtain clear CT images under low-dose conditions.
- To address this shortcoming of the prior art, the present invention provides a method and system for training a low-dose CT image denoising network, which can avoid the information loss caused by multiple consecutive convolutions, or by multiple consecutive deconvolution operations following multiple consecutive convolutions, and can better extract the details of the original image.
- The specific technical solution proposed by the present invention is a training method for a low-dose CT image denoising network, the training method including the following steps:
- acquiring a training data set, where the training data set includes a plurality of training image block groups, each training image block group includes a first image block and a second image block, and the first image block and the second image block are image blocks located at the same position in a low-dose CT image and in a standard-dose CT image, respectively;
- establishing a low-dose CT image denoising network, where the low-dose CT image denoising network includes a first convolution layer, a convolution module, a first fusion layer, and a second convolution layer that are sequentially connected; the first fusion layer is used to fuse the input signal of the convolution module with the output signal of the convolution module; the convolution module includes at least one convolutional network, the at least one convolutional network being connected in sequence; each convolutional network includes a channel layer, a third convolution layer, and a second fusion layer that are sequentially connected; the channel layer includes a first channel, the first channel includes a fourth convolution layer and a first deconvolution layer, the fourth convolution layer and the first deconvolution layer are alternately connected, and the second fusion layer is used to fuse the input signal of the convolutional network with the output signal of the third convolution layer;
- the training data set is used to train the low-dose CT image denoising network to obtain an updated low-dose CT image denoising network.
- Further, the fourth convolution layer and the first deconvolution layer are alternately connected in sequence, with the convolution layer first.
- Further, the first deconvolution layer and the fourth convolution layer are alternately connected in sequence, with the deconvolution layer first.
- Further, in either of the above configurations, the channel layer further includes a second channel and a splicing layer; the second channel is connected in parallel with the first channel, the splicing layer is connected between the channel layer and the third convolution layer, and the splicing layer is used to splice the output signal of the first channel with the output signal of the second channel; the second channel includes a fifth convolution layer and a second deconvolution layer, and the second deconvolution layer and the fifth convolution layer are alternately connected in sequence.
- Further, training the low-dose CT image denoising network with the training data specifically includes: inputting the first image blocks of the plurality of training image block groups into the low-dose CT image denoising network to obtain a plurality of output images; constructing a loss function from the plurality of output images and the second image blocks of the plurality of training image block groups; optimizing the minimum of the loss function to obtain optimized network parameters; and updating the low-dose CT image denoising network with the optimized network parameters.
- Further, in the loss function, loss(θ) denotes the loss function, n denotes the number of training image block groups in the training data set, G(X_i; θ) denotes the i-th output image, and Y_i denotes the second image block of the i-th training image block group.
- The present invention also provides a training system for a low-dose CT image denoising network, the training system including:
- a training data set acquisition module, configured to acquire a training data set, where the training data set includes a plurality of training image block groups, each training image block group includes a first image block and a second image block, and the first image block and the second image block are image blocks located at the same position in a low-dose CT image and in a standard-dose CT image, respectively;
- a network construction module, configured to establish a low-dose CT image denoising network, where the low-dose CT image denoising network includes a first convolutional layer, a convolution module, a first fusion layer, and a second convolutional layer connected in sequence; the first fusion layer is used to fuse the input signal of the convolution module with the output signal of the convolution module; the convolution module includes at least one convolutional network connected in sequence; each convolutional network includes a channel layer, a third convolution layer, and a second fusion layer that are sequentially connected; the channel layer includes a first channel, the first channel includes a fourth convolution layer and a first deconvolution layer, the fourth convolution layer and the first deconvolution layer are alternately connected, and the second fusion layer is used to fuse the input signal of the convolutional network with the output signal of the third convolution layer;
- a training module, configured to train the low-dose CT image denoising network with the training data set to obtain an updated low-dose CT image denoising network.
- The present invention also provides a low-dose CT image denoising method. The denoising method includes: inputting a low-dose CT image to be denoised into the updated low-dose CT image denoising network obtained with the training method described above, to obtain a denoised low-dose CT image.
- The present invention also provides a computer storage medium in which a computer program is stored; when read and executed by one or more processors, the computer program implements the training method of the low-dose CT image denoising network described above.
- In the training method of the low-dose CT image denoising network provided by the present invention, by alternately connecting the fourth convolution layer and the first deconvolution layer, the information loss caused by multiple consecutive convolutions, or by multiple consecutive deconvolution operations following multiple consecutive convolutions, can be avoided. Moreover, the input of each convolutional network is fused with the output signal of the third convolution layer of that convolutional network, so the details of the original image are better extracted, the distortion that arises after multiple cascaded stages is avoided, and the reconstructed image is clearer.
- FIG. 1 is a flowchart of the training method of the low-dose CT image denoising network in the first embodiment;
- FIG. 2 is a schematic structural diagram of the low-dose CT image denoising network in the first embodiment;
- FIG. 3 is a flowchart of training the low-dose CT image denoising network with the training data set in the first embodiment;
- FIG. 4 is a schematic structural diagram of the low-dose CT image denoising network in the second embodiment;
- FIG. 5 is a schematic structural diagram of the low-dose CT image denoising network in the third embodiment;
- FIG. 6 is a schematic structural diagram of the low-dose CT image denoising network in the fourth embodiment;
- FIG. 7 is a schematic structural diagram of the training system for the low-dose CT image denoising network in the fifth embodiment;
- FIG. 8 is a schematic diagram of the processor and the computer storage medium in the seventh embodiment.
- The training method of the low-dose CT image denoising network provided by this application includes the following steps.
- A training data set is acquired, where the training data set includes a plurality of training image block groups; each training image block group includes a first image block and a second image block, the first image block and the second image block being image blocks located at the same position in a low-dose CT image and in a standard-dose CT image, respectively, and the first image block and the second image block having the same size.
- A low-dose CT image denoising network is established, where the low-dose CT image denoising network includes a first convolutional layer, a convolution module, a first fusion layer, and a second convolutional layer connected in sequence. The first fusion layer is used to fuse the input signal of the convolution module with the output signal of the convolution module. The convolution module includes at least one convolutional network connected in sequence, so that the first convolutional layer, the at least one convolutional network, the first fusion layer, and the second convolutional layer are connected in series. Each convolutional network includes a channel layer, a third convolutional layer, and a second fusion layer that are sequentially connected; the channel layer includes a first channel, the first channel includes a fourth convolutional layer and a first deconvolution layer, the fourth convolutional layer and the first deconvolution layer are alternately connected, and the second fusion layer is used to fuse the input signal of the convolutional network with the output signal of the third convolutional layer.
- The low-dose CT image denoising network is trained with the training data set to obtain an updated low-dose CT image denoising network. (An illustrative code sketch of this overall structure follows.)
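- The following is a minimal, purely illustrative sketch of this overall topology, assuming a PyTorch implementation; the class names (`DenoisingNet`, `ConvNetwork`), the single-channel input, and the 64-feature width are assumptions made for this example rather than details taken from the patent.

```python
import torch
import torch.nn as nn

class ConvNetwork(nn.Module):
    """Placeholder for one convolutional network of the convolution module;
    its channel-layer structure is sketched in more detail further below."""
    def __init__(self, features):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(features, features, 3, padding=1),
                                  nn.ReLU(inplace=True))

    def forward(self, x):
        # Second fusion layer: sum the block input with the block output.
        return x + self.body(x)

class DenoisingNet(nn.Module):
    """Sketch: first conv -> convolution module (chain of convolutional networks)
    -> first fusion layer (element-wise sum) -> second conv."""
    def __init__(self, num_blocks=3, features=64):
        super().__init__()
        self.first_conv = nn.Sequential(nn.Conv2d(1, features, 3, padding=1),
                                        nn.ReLU(inplace=True))
        self.conv_module = nn.Sequential(*[ConvNetwork(features) for _ in range(num_blocks)])
        self.second_conv = nn.Conv2d(features, 1, 3, padding=1)

    def forward(self, x):
        feat = self.first_conv(x)      # input signal of the convolution module
        out = self.conv_module(feat)   # output signal of the convolution module
        return self.second_conv(feat + out)   # first fusion layer: sum, then second conv
```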
- Referring to FIG. 1 and FIG. 2, the training method of the low-dose CT image denoising network in the first embodiment includes the following steps.
- S1: Acquire a training data set, where the training data set includes a plurality of training image block groups; each training image block group includes a first image block and a second image block, which are image blocks located at the same position in one low-dose CT image and in one standard-dose CT image, respectively. The first image block and the second image block in the same training image block group have the same size, while the first image blocks of different training image block groups may or may not be of equal size.
- For example, the training data set in this embodiment is D = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_n, y_n)}, where x_i denotes the first image block of the i-th training image block group, y_i denotes the second image block of the i-th training image block group, and n denotes the number of training image block groups in the training data set.
- The first image blocks of the n training image block groups are all different: they may be n image blocks selected at different positions in the same low-dose CT image, in which case the second image blocks of the n training image block groups are the n image blocks at the corresponding positions in the same standard-dose CT image. Of course, to obtain better training results, the first image blocks may instead be image blocks selected at the same position in n different low-dose CT images, with the corresponding second image blocks selected at the same position in n different standard-dose CT images; or the first image blocks may be image blocks selected at different positions in different low-dose CT images, with the corresponding second image blocks selected at the corresponding positions in different standard-dose CT images.
- It should be noted that the low-dose CT images and standard-dose CT images in the data set used for training in this embodiment are selected from existing sample sets; these are sample collections commonly used in this field and are not enumerated here one by one. (A sketch of assembling such patch pairs is given below.)
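- As a purely illustrative sketch (not taken from the patent), the following Python function cuts co-located patch pairs (x_i, y_i) from one registered low-dose / standard-dose CT image pair; the function name, patch size, and stride are assumptions made for this example.

```python
import numpy as np

def extract_patch_pairs(low_dose_img, standard_dose_img, patch_size=64, stride=32):
    """Cut co-located patches from a low-dose image and its standard-dose counterpart,
    forming training image block groups (first block, second block)."""
    assert low_dose_img.shape == standard_dose_img.shape
    h, w = low_dose_img.shape
    pairs = []
    for r in range(0, h - patch_size + 1, stride):
        for c in range(0, w - patch_size + 1, stride):
            x_i = low_dose_img[r:r + patch_size, c:c + patch_size].copy()
            y_i = standard_dose_img[r:r + patch_size, c:c + patch_size].copy()
            pairs.append((x_i, y_i))
    return pairs
```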
- S2: Establish a low-dose CT image denoising network, as shown in FIG. 2, where the low-dose CT image denoising network includes a first convolutional layer 1, a convolution module 2, a first fusion layer 3, and a second convolutional layer 4 connected in sequence.
- Specifically, the first fusion layer 3 is used to fuse the input signal of the convolution module 2 with the output signal of the convolution module 2, so that the input signal of the second convolutional layer 4 retains the feature information of the input signal of the convolution module 2, that is, the original low-level feature information is retained, which allows the details of the original image to be better extracted.
- The convolution module 2 includes at least one convolutional network 20, the at least one convolutional network 20 being connected in sequence; that is, the first convolutional layer 1, the at least one convolutional network 20, the first fusion layer 3, and the second convolutional layer 4 are connected in series.
- FIG. 2 exemplarily shows the case where the convolution module 2 includes three convolutional networks 20; this is shown only as an example and is not intended to limit the application. As shown in FIG. 2, the three convolutional networks 20 are connected in sequence, that is, in series, and each convolutional network 20 includes a channel layer, a third convolutional layer 23, and a second fusion layer 24 that are sequentially connected.
- The second fusion layer 24 is used to fuse the input signal of the channel layer with the output signal of the third convolutional layer 23. In this embodiment, the first fusion layer 3 sums the input signal of the convolution module 2 with the output signal of the convolution module 2, and the second fusion layer 24 sums the input signal of the channel layer with the output signal of the third convolutional layer 23. Other fusion algorithms could also be used; the summation fusion is adopted here simply to keep the computation simple, and no limitation is intended.
- The channel layer in this embodiment includes a first channel, and the first channel includes a fourth convolutional layer 21 and a first deconvolution layer 22, the fourth convolutional layer 21 and the first deconvolution layer 22 being alternately connected in sequence. It should be noted that the number of fourth convolutional layers 21 and the number of first deconvolution layers 22 may each be one (as shown in FIG. 2), in which case the first channel is a fourth convolutional layer 21 followed by a first deconvolution layer 22; if there are several fourth convolutional layers 21 and several first deconvolution layers 22, the first channel is the sequence fourth convolutional layer 21, first deconvolution layer 22, fourth convolutional layer 21, first deconvolution layer 22, ..., fourth convolutional layer 21, first deconvolution layer 22.
- The fourth convolutional layer 21 is used for downsampling and the first deconvolution layer 22 is used for upsampling; because they are alternately connected in sequence, downsampling and upsampling alternate in turn, which avoids the information loss caused by multiple consecutive downsampling operations, or by multiple consecutive upsampling operations following multiple consecutive downsampling operations. (A minimal sketch of such an alternating channel follows.)
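- A minimal sketch of such an alternating first channel, assuming PyTorch and assuming stride-2 convolutions and deconvolutions for the down- and upsampling (the patent does not fix the stride), is shown below; `make_first_channel` and `num_pairs` are names invented for this example.

```python
import torch.nn as nn

def make_first_channel(features=64, num_pairs=1):
    """Fourth convolutional layer (downsampling, with activation) alternating with
    a first deconvolution layer (upsampling), repeated num_pairs times."""
    layers = []
    for _ in range(num_pairs):
        layers += [
            nn.Conv2d(features, features, 3, stride=2, padding=1),   # fourth conv: downsample
            nn.ReLU(inplace=True),                                   # activation after the convolution
            nn.ConvTranspose2d(features, features, 3, stride=2,
                               padding=1, output_padding=1),         # first deconv: upsample
        ]
    return nn.Sequential(*layers)
```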
- As an example, in this embodiment the convolution kernel of the first convolutional layer 1 has size 3×3×3 and there are 64 convolution kernels, so the first convolutional layer 1 converts the first image block into 64 channels. It should be noted that the first convolutional layer 1 also includes an activation function: after the first image block has been convolved, the data produced by the convolution operation is processed non-linearly by the activation function.
- The convolution kernel of the fourth convolutional layer 21 has size 3×3×64. The number of channels of the fourth convolutional layer 21 may also take other values; it is generally chosen as a power of two, 2^m, for example 8, 16, or 32. The convolution kernel of the first deconvolution layer 22 has size 3×3×64. Likewise, the fourth convolutional layer 21 also includes an activation function, through which the data produced by the convolution operation is processed non-linearly.
- It should be noted that when there are several convolutional networks 20, the parameters of the fourth convolutional layers 21, the first deconvolution layers 22, and the third convolutional layers 23 in the different convolutional networks 20 may be the same or different. (An illustrative sketch of a single convolutional network 20 built from these layers follows.)
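- Putting the pieces together, the following is an illustrative sketch of one convolutional network 20 (a channel layer with only a first channel, then the third convolutional layer, then the second fusion layer as an element-wise sum), again assuming PyTorch and stride-2 down/upsampling; all names are assumptions made for this example.

```python
import torch.nn as nn

class ConvNetworkBlock(nn.Module):
    """One convolutional network 20: channel layer -> third conv -> second fusion (sum)."""
    def __init__(self, features=64):
        super().__init__()
        self.channel_layer = nn.Sequential(
            nn.Conv2d(features, features, 3, stride=2, padding=1),   # fourth conv: downsample
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(features, features, 3, stride=2,
                               padding=1, output_padding=1),         # first deconv: upsample
        )
        self.third_conv = nn.Conv2d(features, features, 3, padding=1)

    def forward(self, x):
        out = self.third_conv(self.channel_layer(x))
        return x + out   # second fusion layer: fuse the block input with the third-conv output
```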
- S3: Train the low-dose CT image denoising network with the training data set to obtain an updated low-dose CT image denoising network. Referring to FIG. 3, training the low-dose CT image denoising network with the training data set in step S3 includes the following steps.
- S31: Input the first image blocks x_i of the plurality of training image block groups into the low-dose CT image denoising network to obtain a plurality of output images.
- S32: Construct a loss function from the plurality of output images and the second image blocks of the plurality of training image block groups. In the loss function, θ denotes the network parameters of the low-dose CT image denoising network, loss(θ) denotes the loss function, n denotes the number of training image block groups in the training data set, G(X_i; θ) denotes the i-th output image, and Y_i denotes the second image block of the i-th training image block group. In this embodiment, the absolute value of the difference is used as the loss function, which increases the differentiation between the various regions of the image and thereby makes the boundaries between regions clearer. (A minimal sketch of such a loss is given below.)
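- A minimal sketch of this kind of absolute-difference loss follows; the exact normalization used in the patent's formula is not reproduced in the text above, so a mean absolute error over all pixels is assumed here purely for illustration.

```python
import torch

def absolute_difference_loss(outputs, targets):
    """loss(theta) over a batch: outputs are G(X_i; theta), targets are the second
    image blocks Y_i; a mean absolute error is assumed for this sketch."""
    return torch.mean(torch.abs(outputs - targets))
```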
- S33: Optimize the minimum of the loss function to obtain optimized network parameters. The Adam optimization algorithm is used for this optimization. In each iteration, the biased first-moment estimate is updated as s(k+1) = ρ_1·s(k) + (1 - ρ_1)·g and the biased second-moment estimate as r(k+1) = ρ_2·r(k) + (1 - ρ_2)·g⊙g, where g is the gradient of the loss; the network parameters are then updated as θ = θ + Δθ, and the iteration count is compared with a preset termination number: if it has been reached, the updated network parameters θ are output, otherwise the next iteration is performed. The initial conditions of the first iteration are the initial network parameters θ with k = 0, s(k) = 0, and r(k) = 0; ρ_1 defaults to 0.9, ρ_2 defaults to 0.999, k is the iteration count, the learning rate ε defaults to 0.0001, and δ is a small constant defaulting to 10^-8.
- Preferably, the number of termination iterations in this embodiment is 1000; the number of iterations can be set according to actual needs and is not limited here. (A minimal training-loop sketch using these settings follows.)
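- The sketch below shows how such a training loop might look with the Adam defaults quoted above (beta1 = 0.9, beta2 = 0.999, learning rate 0.0001, eps = 1e-8, 1000 iterations); it assumes PyTorch, a batch size of 16, and patch tensors of shape [N, 1, H, W], none of which are specified by the patent.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train_denoising_network(net, low_dose_patches, standard_dose_patches,
                            iterations=1000, lr=1e-4):
    """Minimize the absolute-difference loss with Adam for a fixed number of iterations."""
    loader = DataLoader(TensorDataset(low_dose_patches, standard_dose_patches),
                        batch_size=16, shuffle=True)
    optimizer = torch.optim.Adam(net.parameters(), lr=lr, betas=(0.9, 0.999), eps=1e-8)
    step = 0
    while step < iterations:
        for x, y in loader:
            optimizer.zero_grad()
            loss = torch.mean(torch.abs(net(x) - y))   # absolute-difference loss
            loss.backward()
            optimizer.step()
            step += 1
            if step >= iterations:   # stop once the preset termination count is reached
                break
    return net
```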
- S34: Update the low-dose CT image denoising network with the optimized network parameters. After the above optimization, an updated low-dose CT image denoising network is obtained.
- The updated low-dose CT image denoising network obtained in this embodiment avoids the information loss caused by multiple consecutive convolutions, or by multiple consecutive deconvolution operations following multiple consecutive convolutions; moreover, the input of each convolutional network is fused with the output signal of the third convolutional layer of that convolutional network, so the details of the original image are better extracted, the distortion that arises after multiple cascaded stages is avoided, and the reconstructed image is clearer.
- Referring to FIG. 4, the second embodiment differs from the first embodiment in that the first deconvolution layer 22 and the fourth convolutional layer 21 in the first channel are alternately connected in sequence, with the deconvolution layer first. The number of first deconvolution layers 22 and the number of fourth convolutional layers 21 may each be one, in which case the first channel is a first deconvolution layer 22 followed by a fourth convolutional layer 21; if there are several of each, the first channel is the sequence first deconvolution layer 22, fourth convolutional layer 21, first deconvolution layer 22, fourth convolutional layer 21, ..., first deconvolution layer 22, fourth convolutional layer 21. The first deconvolution layer 22 is used for upsampling and the fourth convolutional layer 21 is used for downsampling, so upsampling and downsampling alternate in turn.
- Another difference from the first embodiment is that in this embodiment the fourth convolutional layer 21 does not include an activation function; instead, the first deconvolution layer 22 includes an activation function, through which the data produced by the deconvolution operation is processed non-linearly.
- Like the first embodiment, this embodiment avoids the information loss caused by multiple consecutive downsampling operations, or by multiple consecutive upsampling operations following multiple consecutive downsampling operations, avoids the distortion that arises after multiple cascaded stages, and makes the reconstructed image clearer. (A brief sketch of this ordering follows.)
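- A brief sketch of this upsample-first ordering (again assuming PyTorch and stride-2 sampling, with the activation attached to the deconvolution only) might look as follows.

```python
import torch.nn as nn

# Second-embodiment ordering: first deconvolution (upsample, with activation),
# then fourth convolution (downsample, without activation).
first_channel_deconv_first = nn.Sequential(
    nn.ConvTranspose2d(64, 64, 3, stride=2, padding=1, output_padding=1),  # first deconv: upsample
    nn.ReLU(inplace=True),                                                 # activation on the deconvolution
    nn.Conv2d(64, 64, 3, stride=2, padding=1),                             # fourth conv: downsample
)
```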
- Referring to FIG. 5, the third embodiment differs from the first embodiment in that the channel layer further includes a second channel and a splicing layer 25. The second channel is connected in parallel with the first channel, and the splicing layer 25 is connected between the channel layer and the third convolutional layer 23; the splicing layer 25 is used to splice the output signal of the first channel with the output signal of the second channel. The splicing layer 25 may use any of several splicing methods; to reduce computational complexity, this embodiment uses the simplest image-splicing method. For example, if the output signal of the first channel is a 3×3×64 matrix and the output signal of the second channel is also a 3×3×64 matrix, the spliced result may be a 3×6×64 matrix (left-right splicing) or a 6×3×64 matrix (top-bottom splicing). (A short splicing example follows.)
- Specifically, the second channel includes a fifth convolutional layer 26 and a second deconvolution layer 27, the second deconvolution layer 27 and the fifth convolutional layer 26 being alternately connected in sequence. The number of second deconvolution layers 27 and the number of fifth convolutional layers 26 may each be one, in which case the second channel is a second deconvolution layer 27 followed by a fifth convolutional layer 26; if there are several of each, the second channel is the sequence second deconvolution layer 27, fifth convolutional layer 26, second deconvolution layer 27, fifth convolutional layer 26, ..., second deconvolution layer 27, fifth convolutional layer 26. The second deconvolution layer 27 is used for upsampling and the fifth convolutional layer 26 is used for downsampling, so upsampling and downsampling alternate in turn, which avoids the information loss caused by multiple consecutive downsampling operations, or by multiple consecutive upsampling operations following multiple consecutive downsampling operations.
- The second deconvolution layer 27 in this embodiment also includes an activation function, through which the data produced by the deconvolution operation is processed non-linearly. The convolution kernel of the second deconvolution layer 27 has size 3×3×64; its number of channels may also take other values and is generally chosen as a power of two, 2^m, for example 8, 16, or 32. The convolution kernel of the fifth convolutional layer 26 has size 3×3×64.
- It should be noted that when there are several convolutional networks 20, the parameters of the fourth convolutional layers 21, the first deconvolution layers 22, and the third convolutional layers 23 may be the same or different across the convolutional networks 20; likewise, the parameters of the second deconvolution layers 27 and the fifth convolutional layers 26 may be the same or different. Preferably, the weights of the first channel differ from the weights of the second channel.
- Compared with the first embodiment, the improvement of this embodiment is that the channel layer also includes a second channel: the feature information of the image is extracted through different channels, and the outputs of the two channels are then fused through the splicing layer 25, which further avoids the loss of image information and further improves the extraction of the details of the original image. (A sketch of such a two-channel layer follows.)
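- An illustrative sketch of such a two-channel channel layer is given below, assuming PyTorch, stride-2 sampling, and left-right splicing of the two outputs; the class name and layer widths are assumptions made for this example.

```python
import torch
import torch.nn as nn

class TwoChannelLayer(nn.Module):
    """Channel layer with a first channel and a parallel second channel (independent
    weights); their outputs are spliced before the third convolutional layer."""
    def __init__(self, features=64):
        super().__init__()
        self.first_channel = nn.Sequential(                        # fourth conv, then first deconv
            nn.Conv2d(features, features, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(features, features, 3, stride=2, padding=1, output_padding=1))
        self.second_channel = nn.Sequential(                        # second deconv, then fifth conv
            nn.ConvTranspose2d(features, features, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, stride=2, padding=1))

    def forward(self, x):
        a = self.first_channel(x)
        b = self.second_channel(x)
        return torch.cat([a, b], dim=3)   # splicing layer: left-right splice of the two outputs
```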
- Referring to FIG. 6, the fourth embodiment differs from the second embodiment in that the channel layer further includes a second channel and a splicing layer 25. The second channel is connected in parallel with the first channel, and the splicing layer 25 is connected between the channel layer and the third convolutional layer 23; the splicing layer 25 is used to splice the output signal of the first channel with the output signal of the second channel. The splicing layer 25 may use any of several splicing methods; to reduce computational complexity, this embodiment uses the simplest image-splicing method. For example, if the output signal of the first channel is a 3×3×64 matrix and the output signal of the second channel is also a 3×3×64 matrix, the spliced result may be a 3×6×64 matrix (left-right splicing) or a 6×3×64 matrix (top-bottom splicing).
- Specifically, the second channel includes a fifth convolutional layer 26 and a second deconvolution layer 27, the fifth convolutional layer 26 and the second deconvolution layer 27 being alternately connected in sequence, with the convolutional layer first. The number of fifth convolutional layers 26 and the number of second deconvolution layers 27 may each be one, in which case the second channel is a fifth convolutional layer 26 followed by a second deconvolution layer 27; if there are several of each, the second channel is the sequence fifth convolutional layer 26, second deconvolution layer 27, fifth convolutional layer 26, second deconvolution layer 27, ..., fifth convolutional layer 26, second deconvolution layer 27. The fifth convolutional layer 26 is used for downsampling and the second deconvolution layer 27 is used for upsampling, so downsampling and upsampling alternate in turn, which avoids the information loss caused by multiple consecutive downsampling operations, or by multiple consecutive upsampling operations following multiple consecutive downsampling operations.
- The fifth convolutional layer 26 in this embodiment also includes an activation function, through which the data produced by the convolution operation is processed non-linearly. The convolution kernel of the fifth convolutional layer 26 has size 3×3×64; its number of channels may also take other values and is generally chosen as a power of two, 2^m, for example 8, 16, or 32. The convolution kernel of the second deconvolution layer 27 has size 3×3×64.
- It should be noted that when there are several convolutional networks 20, the parameters of the fourth convolutional layers 21, the first deconvolution layers 22, and the third convolutional layers 23 may be the same or different across the convolutional networks 20; likewise, the parameters of the fifth convolutional layers 26 and the second deconvolution layers 27 may be the same or different. Preferably, the weights of the first channel differ from the weights of the second channel.
- Compared with the first embodiment, the improvement of this embodiment is that the channel layer also includes a second channel: the feature information of the image is extracted through different channels, and the outputs of the two channels are then fused through the splicing layer 25, which further avoids the loss of image information and further improves the extraction of the details of the original image.
- Referring to FIG. 7, the fifth embodiment provides a training system for a low-dose CT image denoising network. The training system includes a training data set acquisition module 100, a network construction module 101, and a training module 102.
- The training data set acquisition module 100 is used to acquire a training data set, where the training data set includes a plurality of training image block groups, each training image block group includes a first image block and a second image block, and the first image block and the second image block are image blocks located at the same position in a low-dose CT image and in a standard-dose CT image, respectively. The first image block and the second image block in the same training image block group have the same size, while the first image blocks of different training image block groups may or may not be of equal size. For the detailed arrangement of the training data set, reference is made to the first embodiment, and it is not repeated here.
- The network construction module 101 is used to establish a low-dose CT image denoising network, where the low-dose CT image denoising network includes a first convolutional layer, a convolution module, a first fusion layer, and a second convolutional layer connected in sequence; the first fusion layer is used to fuse the input signal of the convolution module with the output signal of the convolution module; the convolution module includes at least one convolutional network connected in sequence; each convolutional network includes a channel layer, a third convolutional layer, and a second fusion layer that are sequentially connected; the channel layer includes a first channel, the first channel includes a fourth convolutional layer and a first deconvolution layer, the fourth convolutional layer and the first deconvolution layer are alternately connected, and the second fusion layer is used to fuse the input signal of the convolutional network with the output signal of the third convolutional layer. For the detailed structure of the low-dose CT image denoising network, reference is made to the first to fourth embodiments, and it is not repeated here.
- The training module 102 is used to train the low-dose CT image denoising network with the training data set to obtain an updated low-dose CT image denoising network.
- Specifically, the training module 102 includes an input unit, a loss function construction unit, an optimization unit, an update unit, and an output unit. The input unit is used to input the first image blocks x_i of the plurality of training image block groups into the low-dose CT image denoising network to obtain a plurality of output images; the output unit is used to output the plurality of output images; the loss function construction unit is used to construct a loss function from the plurality of output images and the second image blocks of the plurality of training image block groups; the optimization unit is used to optimize the minimum of the loss function to obtain optimized network parameters; and the update unit is used to update the low-dose CT image denoising network with the optimized network parameters. (A sketch of how these modules might be organized in code follows.)
- With the training system of this embodiment, the information loss caused by multiple consecutive convolutions, or by multiple consecutive deconvolution operations following multiple consecutive convolutions, is avoided, the details of the original image are better extracted, the distortion that arises after multiple cascaded stages is avoided, and the reconstructed image is clearer.
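- The sketch below shows one way the module decomposition described above could be organized in code; it is an assumed structure for illustration only, with the concrete data-acquisition, network-construction, and training functions supplied by the caller.

```python
class TrainingSystem:
    """Training data set acquisition module, network construction module, and
    training module, wired together as plain callables."""
    def __init__(self, acquire_dataset, build_network, train_network):
        self.acquire_dataset = acquire_dataset   # training data set acquisition module
        self.build_network = build_network       # network construction module
        self.train_network = train_network       # training module (input/loss/optimize/update units)

    def run(self):
        dataset = self.acquire_dataset()
        network = self.build_network()
        return self.train_network(network, dataset)   # updated low-dose CT denoising network
```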
- The sixth embodiment provides a low-dose CT image denoising method. The denoising method includes: inputting a low-dose CT image to be denoised into the updated low-dose CT image denoising network obtained with the training method described in the first to fourth embodiments, to obtain a denoised low-dose CT image.
- It should be noted that the denoising method of this embodiment includes two implementations. In the first implementation, the low-dose CT image denoising network already trained in the first to fourth embodiments is used directly as the denoising network: the low-dose CT image to be denoised is input into this network to obtain the denoised low-dose CT image. In the second implementation, the low-dose CT image denoising network is first trained with the training method described in the first to fourth embodiments, and the low-dose CT image to be denoised is then input into the trained network to obtain the denoised low-dose CT image. (A minimal inference sketch follows.)
- With the denoising method of this embodiment, the information loss caused by multiple consecutive convolutions, or by multiple consecutive deconvolution operations following multiple consecutive convolutions, is avoided, the details of the original image are better extracted, the distortion that arises after multiple cascaded stages is avoided, and the reconstructed image is clearer.
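- A minimal inference sketch for the first implementation (a trained network applied directly to a low-dose CT image) might look as follows, assuming PyTorch and a 2-D single-slice input; the function name and tensor layout are assumptions made for this example.

```python
import torch

def denoise_low_dose_ct(net, low_dose_image):
    """Feed one low-dose CT slice through the updated denoising network."""
    net.eval()
    with torch.no_grad():
        x = torch.as_tensor(low_dose_image, dtype=torch.float32)
        x = x.unsqueeze(0).unsqueeze(0)        # add batch and channel dimensions
        y = net(x)                             # denoised output, same spatial size
    return y.squeeze(0).squeeze(0)
```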
- Referring to FIG. 8, the seventh embodiment provides a processor 200 connected to a computer storage medium 201. The computer storage medium 201 stores a computer program, and the processor 200 is used to read and execute the computer program stored in the computer storage medium 201 so as to implement the training method of the low-dose CT image denoising network described in the first to fourth embodiments.
- In the above embodiments, the implementation may be realized wholly or partly in software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions; when the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer storage medium, or transmitted from one computer storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
- These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, the instruction device implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
- These computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment then provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
Abstract
The present invention provides a training method for a low-dose CT image denoising network, including the steps of: acquiring a training data set; establishing a low-dose CT image denoising network, where the low-dose CT image denoising network includes a first convolutional layer, a convolution module, a first fusion layer, and a second convolutional layer connected in sequence, the first fusion layer is used to fuse the input signal of the convolution module with the output signal of the convolution module, the convolution module includes at least one convolutional network connected in sequence, each convolutional network includes a channel layer, a third convolutional layer, and a second fusion layer that are sequentially connected, the channel layer includes a first channel, the first channel includes a fourth convolutional layer and a first deconvolution layer, and the fourth convolutional layer and the first deconvolution layer are alternately connected; and training the low-dose CT image denoising network with the training data set. In the training method of the low-dose CT image denoising network of the present invention, by alternately connecting the fourth convolutional layer and the first deconvolution layer, information loss can be avoided.
Description
本发明涉及低剂量CT图像重建技术领域,尤其涉及一种低剂量CT图像去噪网络的训练方法及系统。
计算机断层成像(CT)是通过无损方式获取物体内部结构信息的一种重要成像手段,它拥有高分辨率、高灵敏度以及多层次等众多优点,是我国装机量最大的医疗影像诊断设备之一,被广泛应用于各个医疗临床检查领域。然而,由于CT扫描过程中需要使用X射线,随着人们对辐射潜在危害的逐步了解,CT辐射剂量问题越来越受到人们的重视。合理使用低剂量(As Low As Reasonably Achievable,ALARA)原则要求在满足临床诊断的前提下,尽量降低对患者的辐射剂量。因此,研究和开发新的低剂量CT成像方法,既能保证CT成像质量又减少有害的辐射剂量,对于医疗诊断领域具有重要的科学意义和应用前景。但是,现有的低剂量CT成像方法中在满足低剂量CT辐射的情况下很难获得清晰的CT图像。
发明内容
为了解决现有技术的不足,本发明提供一种低剂量CT图像去噪网络的训练方法及系统,可以避免连续多次卷积或连续多次卷积后再连续多次反卷积操作造成的信息缺失以及更好地提取原始图像的细节。
本发明提出的具体技术方案为:提供一种低剂量CT图像去噪网络的训练方法,所述训练方法包括步骤:
获取训练数据集,所述训练数据集包括多个训练图像块组,每一个所述训练图像块组包括第一图像块和第二图像块,所述第一图像块、第二图像块分别为低剂量CT图像、标准剂量CT图像块中位于同一位置的图像块;
建立低剂量CT图像去噪网络,所述低剂量CT图像去噪网络包括依次连接的第一卷积层、卷积模块、第一融合层以及第二卷积层,所述第一融合层用 于对所述卷积模块的输入信号和所述卷积模块的输出信号进行融合,所述卷积模块包括至少一个卷积网络,所述至少一个卷积网络依次连接,每一个所述卷积网络包括依次连接的通道层、第三卷积层及第二融合层,所述通道层包括第一通道,所述第一通道包括第四卷积层和第一反卷积层,所述第四卷积层、第一反卷积层交替连接,所述第二融合层用于对所述卷积网络的输入信号和所述第三卷积层的输出信号进行融合;
利用所述训练数据集对所述低剂量CT图像去噪网络进行训练,获得更新后的低剂量CT图像去噪网络。
进一步地,所述第四卷积层、第一反卷积层依次交替连接。
进一步地,所述第一反卷积层、第四卷积层依次交替连接。
进一步地,所述通道层还包括第二通道和拼接层,所述第二通道与所述第一通道并联连接,所述拼接层连接于所述通道层与所述第三卷积层之间,所述拼接层用于将所述第一通道的输出信号和所述第二通道的输出信号进行拼接;所述第二通道包括第五卷积层和第二反卷积层,所述第二反卷积层、第五卷积层依次交替连接。
进一步地,所述通道层还包括第二通道和拼接层,所述第二通道与所述第一通道并联连接,所述拼接层连接于所述通道层与所述第三卷积层之间,所述拼接层用于将所述第一通道的输出信号和所述第二通道的输出信号进行拼接;所述第二通道包括第五卷积层和第二反卷积层,所述第二反卷积层、第五卷积层依次交替连接。
进一步地,利用所述训练数据对所述低剂量CT图像进行训练具体包括:
将所述多个训练图像块组中的第一图像块输入所述低剂量CT图像去噪网络,获得多个输出图像;
根据所述多个输出图像分别和所述多个训练图像块组中的第二图像块构建损失函数;
对所述损失函数的最小值进行优化,获得优化后的网络参数;
利用优化后的网络参数对所述低剂量CT图像去噪网络进行更新。
进一步地,所述损失函数为:
其中,loss(θ)表示损失函数,n表示训练数据集中训练图像块组的个数,G(X
i;θ)表示第i个输出图像,Y
i表示第i个训练图像块组中的第二图像块。
本发明还提供了一种低剂量CT图像去噪网络的训练系统,所述训练系统包括:
训练数据集获取模块,用于获取训练数据集,所述训练数据集包括多个训练图像块组,每一个所述训练图像块组包括第一图像块和第二图像块,所述第一图像块、第二图像块分别为低剂量CT图像、标准剂量CT图像块中位于同一位置的图像块;
网络构建模块,用于建立低剂量CT图像去噪网络,所述低剂量CT图像去噪网络包括依次连接的第一卷积层、卷积模块、第一融合层以及第二卷积层,所述第一融合层用于对所述卷积模块的输入信号和所述卷积模块的输出信号进行融合,所述卷积模块包括至少一个卷积网络,所述至少一个卷积网络依次连接,每一个所述卷积网络包括依次连接的通道层、第三卷积层及第二融合层,所述通道层包括第一通道,所述第一通道包括第四卷积层和第一反卷积层,所述第四卷积层、第一反卷积层交替连接,所述第二融合层用于对所述卷积网络的输入信号和所述第三卷积层的输出信号进行融合;
训练模块,用于利用所述训练数据集对所述低剂量CT图像去噪网络进行训练,获得更新后的低剂量CT图像去噪网络。
本发明还提供了一种低剂量CT图像的去噪方法,所述去噪方法包括:将待去噪的低剂量CT图像输入到利用如上所述的低剂量CT图像去噪网络的训练方法得到的更新后的低剂量CT图像去噪网络中,获得去噪后的低剂量CT图像。
本发明还提供了一种计算机存储介质,所述计算机存储介质中存储计算机程序,所述计算机程序在被一个或多个处理器读取并执行时实现如上所述的低 剂量CT图像去噪网络的训练方法。
本发明提供的低剂量CT图像去噪网络的训练方法中通过将第四卷积层、第一反卷积层交替连接,可以避免连续多次卷积或连续多次卷积后再连续多次反卷积操作造成的信息缺失,且每一个卷积网络的输入都会与该卷积网络中的第三卷积层的输出信号进行融合,从而可以更好地提取原始图像的细节,避免多次级联后失真的问题,使得重建后的图像更清晰。
下面结合附图,通过对本发明的具体实施方式详细描述,将使本发明的技术方案及其它有益效果显而易见。
图1为实施例一中低剂量CT图像去噪网络的训练方法的流程图;
图2为实施例一中低剂量CT图像去噪网络的结构示意图;
图3为实施例一中利用训练数据集对低剂量CT图像去噪网络进行训练的流程图;
图4为实施例二中低剂量CT图像去噪网络的结构示意图;
图5为实施例三中低剂量CT图像去噪网络的结构示意图;
图6为实施例四中低剂量CT图像去噪网络的结构示意图;
图7为实施例五中低剂量CT图像去噪网络的训练系统的结构示意图;
图8为实施例七中处理器与计算机存储介质的示意图。
以下,将参照附图来详细描述本发明的实施例。然而,可以以许多不同的形式来实施本发明,并且本发明不应该被解释为限制于这里阐述的具体实施例。相反,提供这些实施例是为了解释本发明的原理及其实际应用,从而使本领域的其他技术人员能够理解本发明的各种实施例和适合于特定预期应用的各种修改。在附图中,相同的标号将始终被用于表示相同的元件。
本申请提供的低剂量CT图像去噪网络的训练方法包括步骤:
获取训练数据集,其中,训练数据集包括多个训练图像块组,每一个训练图像块组包括第一图像块和第二图像块,第一图像块、第二图像块分别为一低剂量CT图像、一标准剂量CT图像块中位于同一位置的图像块,第一图像块与第二图像块的尺寸相等。
建立低剂量CT图像去噪网络,其中,低剂量CT图像去噪网络包括依次连接的第一卷积层、卷积模块、第一融合层以及第二卷积层,第一融合层用于对卷积模块的输入信号和卷积模块的输出信号进行融合,卷积模块包括至少一个卷积网络,至少一个卷积网络依次连接,即第一卷积层、至少一个卷积网络、第一融合层以及第二卷积层串联连接;每一个卷积网络包括依次连接的通道层、第三卷积层及第二融合层,通道层包括第一通道,第一通道包括第四卷积层和第一反卷积层,第四卷积层、第一反卷积层交替连接,第二融合层用于对卷积网络的输入信号和第三卷积层的输出信号进行融合。
利用训练数据集对低剂量CT图像去噪网络进行训练,获得更新后的低剂量CT图像去噪网络。
本申请通过将第四卷积层、第一反卷积层交替连接,可以避免连续多次卷积或连续多次卷积后再连续多次反卷积操作造成的信息缺失,且每一个卷积网络的输入都会与该卷积网络中的第三卷积层的输出信号进行融合,从而可以更好地提取原始图像的细节,避免多次级联后失真的问题,使得重建后的图像更清晰。
下面通过几个具体的实施例并结合附图来对本申请中的低剂量CT图像去噪网络的训练方法进行详细的描述。
实施例一
参照图1~2,本实施例中的低剂量CT图像去噪网络的训练方法包括步骤:
S1、获取训练数据集,其中,训练数据集包括多个训练图像块组,每一个训练图像块组包括第一图像块和第二图像块,每一个训练图像块组中的第一图像块、第二图像块分别为一低剂量CT图像、一标准剂量CT图像中位于同一位置的图像块,同一个训练图像块组中的第一图像块与第二图像块的尺寸相等,不同的训练图像块组中的第一图像块的大小可以相等,也可以不相等。
例如,本实施例中的训练数据集为:
D={(x_1,y_1),(x_2,y_2),......,(x_i,y_i),......,(x_n,y_n)}，
其中，x_i表示第i个训练图像块组中的第一图像块，y_i表示第i个训练图像块组中的第二图像块，n表示训练数据集中训练图像块组的个数，n个训练图像块组中的第一图像块均不相同，即n个训练图像块组中的第一图像块为选自同一个低剂量CT图像中的n个不同位置的图像块，对应的，n个训练图像块组中的第二图像块为选自同一个标准剂量CT图像中的n个不同位置的图像块，当然，为了得到更好的训练结果，n个训练图像块组中的第一图像也可以是选自n个不同的低剂量CT图像中的同一位置的图像块，对应的，n个训练图像块组中的第二图像块为选自n个不同的标准剂量CT图像中的同一位置的图像块，n个训练图像块组中的第一图像还可以是选自不同的低剂量CT图像中的不同位置的图像块，对应的，n个训练图像块组中的第二图像块为选自不同的标准剂量CT图像中的不同位置的图像块。
这里需要说明的是,本实施例中用于训练的数据集中的低剂量CT图像和标准剂量CT图像是从现有的样本集中选取的,其中,现有的样本集是本领域的常用的样本集,这里不再一一举例说明。
S2、建立低剂量CT图像去噪网络,如图2所示,其中,低剂量CT图像去噪网络包括依次连接的第一卷积层1、卷积模块2、第一融合层3以及第二卷积层4。
具体地，第一融合层3用于对卷积模块2的输入信号和卷积模块2的输出信号进行融合，使得第二卷积层4的输入信号中保留了卷积模块2的输入信号的特征信息，即保留了原始低层次特征信息，从而能够更好地提取原始图像的细节。
卷积模块2包括至少一个卷积网络20,至少一个卷积网络20依次连接,即第一卷积层1、至少一个卷积网络20、第一融合层3以及第二卷积层4串联连接。图1示例性的给出了卷积模块2包括3个卷积网络20的情况,需要说明的是,这里仅仅是作为示例示出,并不用作对本申请进行限定。如图1所示,3个卷积网络20依次连接,即3个卷积网络20串联连接,每一个卷积网络20包括依次连接的通道层、第三卷积层23及第二融合层24。第二融合层24用于对通道层的输入信号和第三卷积层23的输出信号进行融合,本实施例中的第一融合层3、第二融合层24分别用于对卷积模块2的输入信号和卷积模块2的输出信号、通道层的 输入信号和第三卷积层23的输出信号进行求和,当然,也可以采用其他的融合算法,本实施例为了简化计算过程直接采用求和的融合算法,但这里不不做限定。
本实施例中的通道层包括第一通道,第一通道包括第四卷积层21和第一反卷积层22,第四卷积层21、第一反卷积层22依次交替连接,这里需要说明的是,第四卷积层21和第一反卷积层22的数量可以均为一个(如图2所示),则第一通道为依次连接的第四卷积层21、第一反卷积层22,若第四卷积层21和第一反卷积层22的数量均为多个,则第一通道为依次连接的第四卷积层21、第一反卷积层22、第四卷积层21、第一反卷积层22、......、第四卷积层21、第一反卷积层22。第四卷积层21用于进行下采样,第一反卷积层22用于进行上采样,第四卷积层21、第一反卷积层22依次交替连接,即下采样、上采样依次交替进行,从而避免连续多次下采样或连续多次下采样后再连续多次上采样造成的信息缺失。
作为示例,本实施例中的第一卷积层1的卷积核的大小为3×3×3,卷积核的个数为64个,通过第一卷积层1可以将第一图像块转换为64通道,这里需要说明的是,第一卷积层1还包括一个激活函数,对第一图像块进行卷积操作后,还需要通过激活函数对卷积操作后的数据进行非线性处理。
第四卷积层21的卷积核的大小为3×3×64，这里第四卷积层21的通道数还可以为其他数值，一般将第四卷积层21的通道数选为2^m，例如，第四卷积层21的通道数为8、16、32等，第一反卷积层22的卷积核的大小为3×3×64。同样地，第四卷积层21还包括一个激活函数，第四卷积层21通过激活函数对卷积操作后的数据进行非线性处理。
这里需要说明的是,在卷积网络20的个数为多个时,多个卷积网络20中的第四卷积层21、第一反卷积层22、第三卷积层24的参数可以相同,也可以不同。
S3、利用训练数据集对低剂量CT图像去噪网络进行训练,获得更新后的低剂量CT图像去噪网络。
参照图3,具体地,在步骤S3中,利用训练数据集对低剂量CT图像去噪网络进行训练包括步骤:
S31、将多个训练图像块组中的第一图像块x_i输入低剂量CT图像去噪网络，获得多个输出图像。
S32、根据多个输出图像分别和多个训练图像块组中的第二图像块构建损失函数。
具体地,在步骤S32中,根据多个输出图像分别和多个训练图像块组中的第二图像块构建损失函数的公式为:
其中，θ表示低剂量CT图像去噪网络的网络参数，loss(θ)表示损失函数，n表示训练数据集中训练图像块组的个数，G(X_i;θ)表示第i个输出图像，Y_i表示第i个训练图像块组中的第二图像块。
本实施例将绝对值差值作为损失函数,可以增加图像各个区域之间的差异化,从而使得图像中各个区域之间的边界更清晰。
S33、对损失函数的最小值进行优化,获得优化后的网络参数。
在步骤S33中,采用Adam优化算法对损失函数的最小值进行优化,得到优化后的网络参数,Adam优化算法的迭代过程为:
有偏一阶矩估计：s(k+1)=ρ_1·s(k)+(1-ρ_1)·g；
有偏二阶矩估计：r(k+1)=ρ_2·r(k)+(1-ρ_2)·g⊙g；
更新网络参数:θ=θ+Δθ;
判断迭代次数是否等于预设的终止迭代次数,若是,则输出更新后的网络参数θ,若否,则继续进行下一次的迭代直到迭代次数等于预设的终止迭代次数。较佳地,本实施例中的终止迭代次数为1000,迭代次数可以根据实际需要来设定,这里不做限定。
在上面的优化算法中,第一次迭代的初始值条件为初始的网络参数θ,k=0,s(k)=0,r(k)=0;
∇表示梯度运算符，ρ_1的默认值为0.9，ρ_2的默认值为0.999，k为迭代次数，ε表示学习率，ε的默认值为0.0001；δ为小常数，δ的默认值为10^-8。
S34、利用优化后的网络参数对低剂量CT图像去噪网络进行更新。
经过上述优化之后便可以得到更新后的低剂量CT图像去噪网络,本实施例中得到的更新后的低剂量CT图像去噪网络能够避免连续多次卷积或连续多次卷积后再连续多次反卷积操作造成的信息缺失,且每一个卷积网络的输入都会与该卷积网络中的第三卷积层的输出信号进行融合,从而可以更好地提取原始图像的细节,避免多次级联后失真的问题,使得重建后的图像更清晰。
实施例二
参照图4,本实施例与实施例一的不同之处在于,本实施例中的第一通道中的第一反卷积层22、第四卷积层21依次交替连接,这里需要说明的是,第一反卷积层22和第四卷积层21的数量可以均为一个,则第一通道为依次连接的第一反卷积层22、第四卷积层21,若第一反卷积层22和第四卷积层21的数量均为多个,则第一通道为依次连接的第一反卷积层22、第四卷积层21、第一反卷积层22、第四卷积层21、……、第一反卷积层22、第四卷积层21。第一反卷积层22用于进行上采样,第四卷积层21用于进行下采样,第一反卷积层22、第四卷积层21依次交替连接,即上采样、下采样依次交替进行。
本实施例与实施例一的另一个不同之处在于,本实施例中的第四卷积层21不包括激活函数,第一反卷积层22包括一个激活函数,第一反卷积层22通过激活函数对反卷积操作后的数据进行非线性处理。
本实施例相对于实施例一,同样可以避免连续多次下采样或连续多次下采样后再连续多次上采样造成的信息缺失,避免多次级联后失真的问题,使得重建后的图像更清晰。
实施例三
参照图5,本实施例与实施例一的不同之处在于,本实施例中的通道层还包括第二通道和拼接层25。第二通道与第一通道并联连接,拼接层25连接于通道层与第三卷积层23之间,拼接层25用于将第一通道的输出信号和第二通道的输出信号进行拼接,拼接层25可以采用多种拼接方法来对一通道的输出信号和第二通道的输出信号进行拼接,本实施例为了降低计算复杂度,拼接层25采用最最简单的图像拼接方法,例如,第一通道的输出信号为3×3×64矩阵,第二通道的输出信号也为3×3×64的矩阵,则拼接后的图像可以为3×6×64矩阵,即左右拼接,也可以为6×3×64矩阵,即上下拼接。
具体地,第二通道包括第五卷积层26和第二反卷积层27,第二反卷积层27、第五卷积层26依次交替连接。这里需要说明的是,第二反卷积层27和第五卷积层26的数量可以均为一个,则第二通道为依次连接的第二反卷积层27、第五卷积层26,若第二反卷积层27和第五卷积层26的数量均为多个,则第二通道为依次连接的第二反卷积层27、第五卷积层26、第二反卷积层27、第五卷积层2622、......、第二反卷积层27、第五卷积层26。第二反卷积层27用于进行上采样,第五卷积层26用于进行下采样,第二反卷积层27、第五卷积层26依次交替连接,即上采样、下采样依次交替进行,从而避免连续多次下采样或连续多次下采样后再连续多次上采样造成的信息缺失。
本实施例中的第二反卷积层27还包括一个激活函数，第二反卷积层27通过激活函数对反卷积操作后的数据进行非线性处理。第二反卷积层27的卷积核的大小为3×3×64，这里第二反卷积层27的通道数还可以为其他数值，一般将第二反卷积层27的通道数选为2^m，例如，第二反卷积层27的通道数为8、16、32等，第五卷积层26的卷积核的大小为3×3×64。
这里需要说明的是,在卷积网络20的个数为多个时,多个卷积网络20中的第四卷积层21、第一反卷积层22、第三卷积层24的参数可以相同,也可以不同;多个卷积网络20中的第二反卷积层27、第五卷积层26的参数可以相同,也可以不同。较佳地,第一通道的权重与第二通道的权重不同。
本实施例相对于实施例一的改进之处在于,本实施例中的通道层还包括第二通道,通过不同的通道对图像的特征信息进行提取,再通过拼接层25将两个通道的输出信息进行融合,进一步地避免了图像信息的缺失以及进一步地更好地提取原始图像的细节。
实施例四
参照图6,本实施例与实施例二的不同之处在于,本实施例中的通道层还包括第二通道和拼接层25。第二通道与第一通道并联连接,拼接层25连接于通道层与第三卷积层23之间,拼接层25用于将第一通道的输出信号和第二通道的输出信号进行拼接,拼接层25可以采用多种拼接方法来对一通道的输出信号和第二通道的输出信号进行拼接,本实施例为了降低计算复杂度,拼接层25采用最最简单的图像拼接方法,例如,第一通道的输出信号为3×3×64矩阵,第二通道的输出信号也为3×3×64的矩阵,则拼接后的图像可以为3×6×64矩阵,即左右拼接,也可以为6×3×64矩阵,即上下拼接。
具体地,第二通道包括第五卷积层26和第二反卷积层27,第五卷积层26、第二反卷积层27依次交替连接。这里需要说明的是,第五卷积层26和第二反卷积层27的数量可以均为一个,则第二通道为依次连接的第五卷积层26、第二反卷积层27,若第五卷积层26和第二反卷积层27的数量均为多个,则第二通道为依次连接的第五卷积层26、第二反卷积层27、第五卷积层26、第二反卷积层27、......、第五卷积层26、第二反卷积层27。第五卷积层26用于进行下采样、第二反卷积层27用于进行上采样,第五卷积层26、第二反卷积层27依次交替连接,即下采样、上采样依次交替进行,从而避免连续多次下采样或连续多次下采样后再连续多次上采样造成的信息缺失。
本实施例中的第五卷积层26还包括一个激活函数，第五卷积层26通过激活函数对卷积操作后的数据进行非线性处理。第五卷积层26的卷积核的大小为3×3×64，这里第五卷积层26的通道数还可以为其他数值，一般将第五卷积层26的通道数选为2^m，例如，第五卷积层26的通道数为8、16、32等，第二反卷积层27的卷积核的大小为3×3×64。
这里需要说明的是,在卷积网络20的个数为多个时,多个卷积网络20中的第四卷积层21、第一反卷积层22、第三卷积层24的参数可以相同,也可以不同;多个卷积网络20中的第五卷积层26、第二反卷积层27的参数可以相同,也可以不同。较佳地,第一通道的权重与第二通道的权重不同。
本实施例相对于实施例一的改进之处在于,本实施例中的通道层还包括第二通道,通过不同的通道对图像的特征信息进行提取,再通过拼接层25将两个通道的输出信息进行融合,进一步地避免了图像信息的缺失以及进一步地更好地提取原始图像的细节。
实施例五
参照图7,本实施例提供了一种低剂量CT图像去噪网络的训练系统,所述训练系统包括训练数据集获取模块100、网络构建模块101、训练模块102。
训练数据集获取模块100用于获取训练数据集,其中,训练数据集包括多个训练图像块组,每一个训练图像块组包括第一图像块和第二图像块,第一图像块、第二图像块分别为低剂量CT图像、标准剂量CT图像块中位于同一位置的图像块,同一个训练图像块组中的第一图像块与第二图像块的尺寸相等,不同的训练图像块组中的第一图像块的大小可以相等,也可以不相等。训练数据集的具体设置可以参见实施例一,这里不再赘述。
网络构建模块101用于建立低剂量CT图像去噪网络,低剂量CT图像去噪网络包括依次连接的第一卷积层、卷积模块、第一融合层以及第二卷积层,第一融合层用于对卷积模块的输入信号和卷积模块的输出信号进行融合,卷积模块包括至少一个卷积网络,至少一个卷积网络依次连接,每一个所述卷积网络包括依次连接的通道层、第三卷积层及第二融合层,通道层包括第一通道,第一通道包括第四卷积层和第一反卷积层,第四卷积层、第一反卷积层交替连接,第二融合层用于对所述卷积网络的输入信号和所述第三卷积层的输出信号进行融合。低剂量CT图像去噪网络的结构的具体结构可以参照实施一~实施例四,这里不再赘述。
训练模块102用于利用训练数据集对低剂量CT图像去噪网络进行训练,获得更新后的低剂量CT图像去噪网络。
具体地,训练模块102包括输入单元、损失函数构建单元、优化单元、更新单元以及输出单元。
输入单元用于将多个训练图像块组中的第一图像块x_i输入低剂量CT图像去噪网络，获得多个输出图像；输出单元用于将多个输出图像输出；损失函数构建单元用于根据多个输出图像分别和多个训练图像块组中的第二图像块构建损失函数；优化单元用于对损失函数的最小值进行优化，获得优化后的网络参数；更新单元用于利用优化后的网络参数对低剂量CT图像去噪网络进行更新。
通过本实施例的训练系统,可以避免连续多次卷积或连续多次卷积后再连续多次反卷积操作造成的信息缺失,且可以更好地提取原始图像的细节,避免多次级联后失真的问题,使得重建后的图像更清晰。
实施例六
本实施例提供了一种低剂量CT图像的去噪方法,该去噪方法包括:将待去噪的低剂量CT图像输入到利用实施例一~实施例四所述的低剂量CT图像去噪网络的训练方法得到的更新后的低剂量CT图像去噪网络中,获得去噪后的低剂量CT图像。
这里需要说明的是,本实施例中的去噪方法包括两种实施方式,第一种实施方式是将实施例一~实施例四训练好的低剂量CT图像去噪网络作为低剂量CT图像的去噪网络,将待去噪的低剂量CT图像输入到该低剂量CT图像去噪网络便可以获得去噪后的低剂量CT图像。第二种实施方式是先利用实施例一~实施例四所述的低剂量CT图像去噪网络的训练方法对低剂量CT图像去噪网络进行训练,然后再将待去噪的低剂量CT图像输入到训练好的低剂量CT图像去噪网络,获得去噪后的低剂量CT图像。
通过本实施例的去噪方法,可以避免连续多次卷积或连续多次卷积后再连续多次反卷积操作造成的信息缺失,且可以更好地提取原始图像的细节,避免多次级联后失真的问题,使得重建后的图像更清晰。
实施例七
参照图8,本实施例提供了一种处理器200,所述处理器200与计算机存储介质201相连,计算机存储介质201中存储有计算机程序,处理器200用于读取并执行计算机存储介质201中存储的计算机程序,以实现如实施例一~实施例四所述的低剂量CT图像去噪网络的训练方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本发明实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机存储介质中,或者从一个计算机存储介质向另一个计算机存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
本发明实施例是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
以上所述仅是本申请的具体实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。
Claims (15)
- 一种低剂量CT图像去噪网络的训练方法,其中,包括步骤:获取训练数据集,所述训练数据集包括多个训练图像块组,每一个所述训练图像块组包括第一图像块和第二图像块,所述第一图像块、第二图像块分别为低剂量CT图像、标准剂量CT图像块中位于同一位置的图像块;建立低剂量CT图像去噪网络,所述低剂量CT图像去噪网络包括依次连接的第一卷积层、卷积模块、第一融合层以及第二卷积层,所述第一融合层用于对所述卷积模块的输入信号和所述卷积模块的输出信号进行融合,所述卷积模块包括至少一个卷积网络,所述至少一个卷积网络依次连接,每一个所述卷积网络包括依次连接的通道层、第三卷积层及第二融合层,所述通道层包括第一通道,所述第一通道包括第四卷积层和第一反卷积层,所述第四卷积层、第一反卷积层交替连接,所述第二融合层用于对所述卷积网络的输入信号和所述第三卷积层的输出信号进行融合;利用所述训练数据集对所述低剂量CT图像去噪网络进行训练,获得更新后的低剂量CT图像去噪网络。
- 根据权利要求1所述的训练方法,其中,所述第四卷积层、第一反卷积层依次交替连接。
- 根据权利要求1所述的训练方法,其中,所述第一反卷积层、第四卷积层依次交替连接。
- 根据权利要求2所述的训练方法,其中,所述通道层还包括第二通道和拼接层,所述第二通道与所述第一通道并联连接,所述拼接层连接于所述通道层与所述第三卷积层之间,所述拼接层用于将所述第一通道的输出信号和所述第二通道的输出信号进行拼接;所述第二通道包括第五卷积层和第二反卷积层,所述第二反卷积层、第五卷积层依次交替连接。
- 根据权利要求3所述的训练方法,其中,所述通道层还包括第二通道和拼接层,所述第二通道与所述第一通道并联连接,所述拼接层连接于所述通道层与所述第三卷积层之间,所述拼接层用于将所述第一通道的输出信号和所述 第二通道的输出信号进行拼接;所述第二通道包括第五卷积层和第二反卷积层,所述第二反卷积层、第五卷积层依次交替连接。
- 根据权利要求1所述的训练方法,其中,利用所述训练数据对所述低剂量CT图像进行训练具体包括:将所述多个训练图像块组中的第一图像块输入所述低剂量CT图像去噪网络,获得多个输出图像;根据所述多个输出图像分别和所述多个训练图像块组中的第二图像块构建损失函数;对所述损失函数的最小值进行优化,获得优化后的网络参数;利用优化后的网络参数对所述低剂量CT图像去噪网络进行更新。
- 一种低剂量CT图像去噪网络的训练系统,其中,包括:训练数据集获取模块,用于获取训练数据集,所述训练数据集包括多个训练图像块组,每一个所述训练图像块组包括第一图像块和第二图像块,所述第一图像块、第二图像块分别为低剂量CT图像、标准剂量CT图像块中位于同一位置的图像块;网络构建模块,用于建立低剂量CT图像去噪网络,所述低剂量CT图像去噪网络包括依次连接的第一卷积层、卷积模块、第一融合层以及第二卷积层,所述第一融合层用于对所述卷积模块的输入信号和所述卷积模块的输出信号进行融合,所述卷积模块包括至少一个卷积网络,所述至少一个卷积网络依次连接,每一个所述卷积网络包括依次连接的通道层、第三卷积层及第二融合层,所述通道层包括第一通道,所述第一通道包括第四卷积层和第一反卷积层,所 述第四卷积层、第一反卷积层交替连接,所述第二融合层用于对所述卷积网络的输入信号和所述第三卷积层的输出信号进行融合;训练模块,用于利用所述训练数据集对所述低剂量CT图像去噪网络进行训练,获得更新后的低剂量CT图像去噪网络。
- 一种计算机存储介质,其中,所述计算机存储介质中存储计算机程序,所述计算机程序在被一个或多个处理器读取并执行时实现低剂量CT图像去噪网络的训练方法,所述训练方法包括步骤:获取训练数据集,所述训练数据集包括多个训练图像块组,每一个所述训练图像块组包括第一图像块和第二图像块,所述第一图像块、第二图像块分别为低剂量CT图像、标准剂量CT图像块中位于同一位置的图像块;建立低剂量CT图像去噪网络,所述低剂量CT图像去噪网络包括依次连接的第一卷积层、卷积模块、第一融合层以及第二卷积层,所述第一融合层用于对所述卷积模块的输入信号和所述卷积模块的输出信号进行融合,所述卷积模块包括至少一个卷积网络,所述至少一个卷积网络依次连接,每一个所述卷积网络包括依次连接的通道层、第三卷积层及第二融合层,所述通道层包括第一通道,所述第一通道包括第四卷积层和第一反卷积层,所述第四卷积层、第一反卷积层交替连接,所述第二融合层用于对所述卷积网络的输入信号和所述第三卷积层的输出信号进行融合;利用所述训练数据集对所述低剂量CT图像去噪网络进行训练,获得更新后的低剂量CT图像去噪网络。
- 根据权利要求9所述的计算机存储介质,其中,所述第四卷积层、第一反卷积层依次交替连接。
- 根据权利要求9所述的计算机存储介质,其中,所述第一反卷积层、第四卷积层依次交替连接。
- 根据权利要求10所述的计算机存储介质,其中,所述通道层还包括第二通道和拼接层,所述第二通道与所述第一通道并联连接,所述拼接层连接于所述通道层与所述第三卷积层之间,所述拼接层用于将所述第一通道的输出信 号和所述第二通道的输出信号进行拼接;所述第二通道包括第五卷积层和第二反卷积层,所述第二反卷积层、第五卷积层依次交替连接。
- 根据权利要求11所述的计算机存储介质,其中,所述通道层还包括第二通道和拼接层,所述第二通道与所述第一通道并联连接,所述拼接层连接于所述通道层与所述第三卷积层之间,所述拼接层用于将所述第一通道的输出信号和所述第二通道的输出信号进行拼接;所述第二通道包括第五卷积层和第二反卷积层,所述第二反卷积层、第五卷积层依次交替连接。
- 根据权利要求9所述的计算机存储介质,其中,利用所述训练数据对所述低剂量CT图像进行训练具体包括:将所述多个训练图像块组中的第一图像块输入所述低剂量CT图像去噪网络,获得多个输出图像;根据所述多个输出图像分别和所述多个训练图像块组中的第二图像块构建损失函数;对所述损失函数的最小值进行优化,获得优化后的网络参数;利用优化后的网络参数对所述低剂量CT图像去噪网络进行更新。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911249569.0 | 2019-12-09 | ||
CN201911249569.0A CN110992290B (zh) | 2019-12-09 | 2019-12-09 | 低剂量ct图像去噪网络的训练方法及系统 |
Publications (1)
Publication Number | Publication Date |
---|---
WO2021114105A1 (zh) | 2021-06-17
Family
ID=70091260
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/124377 WO2021114105A1 (zh) | 2019-12-09 | 2019-12-10 | 低剂量ct图像去噪网络的训练方法及系统 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110992290B (zh) |
WO (1) | WO2021114105A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111968058B (zh) * | 2020-08-25 | 2023-08-04 | 北京交通大学 | 一种低剂量ct图像降噪方法 |
CN112488951B (zh) * | 2020-12-07 | 2022-05-20 | 深圳先进技术研究院 | 低剂量图像去噪网络的训练方法、低剂量图像的去噪方法 |
CN112541871B (zh) * | 2020-12-07 | 2024-07-23 | 深圳先进技术研究院 | 低剂量图像去噪网络的训练方法、低剂量图像的去噪方法 |
CN113256752B (zh) * | 2021-06-07 | 2022-07-26 | 太原理工大学 | 一种基于双域交织网络的低剂量ct重建方法 |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108520203A (zh) * | 2018-03-15 | 2018-09-11 | 上海交通大学 | 基于融合自适应多外围框与十字池化特征的多目标特征提取方法 |
CN108537824A (zh) * | 2018-03-15 | 2018-09-14 | 上海交通大学 | 基于交替反卷积与卷积的特征图增强的网络结构优化方法 |
CN108492269A (zh) * | 2018-03-23 | 2018-09-04 | 西安电子科技大学 | 基于梯度正则卷积神经网络的低剂量ct图像去噪方法 |
CN109685717A (zh) * | 2018-12-14 | 2019-04-26 | 厦门理工学院 | 图像超分辨率重建方法、装置及电子设备 |
CN109886243A (zh) * | 2019-03-01 | 2019-06-14 | 腾讯科技(深圳)有限公司 | 图像处理方法、装置、存储介质、设备以及系统 |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113610719A (zh) * | 2021-07-19 | 2021-11-05 | 河南大学 | 一种注意力和密集连接残差块卷积核神经网络图像去噪方法 |
CN113744149A (zh) * | 2021-08-31 | 2021-12-03 | 华中科技大学 | 一种解决低剂量ct图像过平滑的深度学习后处理方法 |
CN113822821A (zh) * | 2021-10-29 | 2021-12-21 | 山东第一医科大学附属肿瘤医院(山东省肿瘤防治研究院、山东省肿瘤医院) | 一种基于深度学习的兆伏级ct图像去噪方法及系统 |
CN114494057A (zh) * | 2022-01-23 | 2022-05-13 | 东南大学 | 一种基于可训练联合双边滤波器的数字x射线图像去噪方法 |
CN114972981A (zh) * | 2022-04-19 | 2022-08-30 | 国网江苏省电力有限公司电力科学研究院 | 一种电网输电环境观测图像去噪方法、终端及存储介质 |
CN114742917A (zh) * | 2022-04-25 | 2022-07-12 | 桂林电子科技大学 | 一种基于卷积神经网络的ct图像分割方法 |
CN114757928A (zh) * | 2022-04-25 | 2022-07-15 | 东南大学 | 一种基于深度训练网络的一步式双能有限角ct重建方法 |
CN114742917B (zh) * | 2022-04-25 | 2024-04-26 | 桂林电子科技大学 | 一种基于卷积神经网络的ct图像分割方法 |
CN114970614A (zh) * | 2022-05-12 | 2022-08-30 | 中国科学院沈阳自动化研究所 | 一种基于自监督学习的低相干干涉信号去噪方法及系统 |
CN116703772A (zh) * | 2023-06-15 | 2023-09-05 | 山东财经大学 | 一种基于自适应插值算法的图像去噪方法、系统及终端机 |
CN116703772B (zh) * | 2023-06-15 | 2024-03-15 | 山东财经大学 | 一种基于自适应插值算法的图像去噪方法、系统及终端机 |
Also Published As
Publication number | Publication date |
---|---|
CN110992290B (zh) | 2023-09-15 |
CN110992290A (zh) | 2020-04-10 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19955893, Country of ref document: EP, Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 19955893, Country of ref document: EP, Kind code of ref document: A1