WO2022120883A1 - Training method for low-dose image denoising network, and denoising method for low-dose images - Google Patents

Training method for low-dose image denoising network, and denoising method for low-dose images

Info

Publication number
WO2022120883A1
WO2022120883A1 (PCT/CN2020/136210)
Authority
WO
WIPO (PCT)
Prior art keywords
low
dose
dose image
network
images
Prior art date
Application number
PCT/CN2020/136210
Other languages
English (en)
French (fr)
Inventor
郑海荣
梁栋
胡战利
黄振兴
刘新
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院
Publication of WO2022120883A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/10104: Positron emission tomography [PET]
    • G06T 2207/10108: Single photon emission computed tomography [SPECT]
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30016: Brain
    • G06T 2207/30061: Lung

Definitions

  • the invention relates to the technical field of image reconstruction, in particular to a training method for a low-dose image denoising network, a low-dose image denoising method, computer equipment and a storage medium.
  • Computed tomography (CT) is an important imaging method for obtaining information about the internal structure of an object non-destructively. It has many advantages, such as high resolution, high sensitivity and multiple levels of detail; it is among the most widely installed medical imaging diagnostic equipment in China and is widely used in many fields of clinical examination. However, because X-rays must be used during CT scanning, the issue of CT radiation dose has received more and more attention as the potential hazards of radiation have become better understood.
  • the As Low As Reasonably Achievable (ALARA) principle requires that the radiation dose to the patient be reduced as much as possible while still satisfying the needs of clinical diagnosis; as the dose is reduced, however, more noise appears in the imaging process, resulting in poor imaging quality.
  • the present invention provides a low-dose image denoising network training method, a low-dose image denoising method, a computer device and a storage medium, which integrate the attributes of anatomical structures into the image reconstruction process and improve both the robustness of the denoising method and the quality of the reconstructed image.
  • the specific technical solution proposed by the present invention is a training method for a low-dose image denoising network, the training method comprising:
  • obtaining a training data set, the training data set including a plurality of input parameter groups, each input parameter group including a low-dose image of an anatomical structure, an attribute, and a standard-dose image;
  • establishing a low-dose image denoising network, the low-dose image denoising network including an attribute fusion module, a plurality of spatial information fusion modules and a generation module cascaded in sequence;
  • training the low-dose image denoising network using the training data set, obtaining the parameters of the low-dose image denoising network, and updating the low-dose image denoising network.
  • the attribute fusion module includes a weight prediction unit, a first feature extraction unit and a first fusion unit; the weight prediction unit is used to obtain a weight mask corresponding to the anatomical structure according to the attribute, the first feature extraction unit is used for extracting the features of the low-dose image, and the first fusion unit is configured to fuse the weight mask with the features of the low-dose image to obtain a weight feature.
  • the weight prediction unit includes multiple convolution layers and multiple activation functions, and the multiple convolution layers and multiple activation functions are alternately cascaded in sequence.
  • the weight prediction unit further includes a splicing layer, and the splicing layer is used for splicing the outputs of the convolution layers with the same number of output channels among the multiple convolution layers.
  • the spatial information fusion module includes a second feature extraction unit, a third feature extraction unit and a second fusion unit, the second feature extraction unit is used to extract the spatial information of the weight feature, the third feature The extraction unit is configured to extract the image features of the weight feature, and the second fusion unit is configured to fuse the spatial information with the image features.
  • the training of the low-dose image denoising network using the training data set, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network includes:
  • inputting the low-dose images and attributes in the plurality of input parameter groups into the low-dose image denoising network to obtain a plurality of output images;
  • constructing a loss function from the plurality of output images and the standard-dose images in the plurality of input parameter groups;
  • optimizing the loss function, obtaining the parameters of the low-dose image denoising network, and updating the low-dose image denoising network.
  • the loss function is:
  • loss(θ) = (1/n) Σ_{i=1}^{n} |G(X_i; a_i; θ) - Y_i|
  • where θ represents the network parameters of the low-dose image denoising network, loss(θ) represents the loss function, n represents the number of input parameter groups in the training data set, G(X_i; a_i; θ) represents the i-th output image, and Y_i represents the standard-dose image in the i-th input parameter group.
  • the present invention also provides a low-dose image denoising method, the denoising method including: inputting a low-dose image to be denoised into a low-dose image denoising network obtained by the above training method of the low-dose image denoising network, and obtaining a reconstructed low-dose image.
  • the present invention also provides a computer device, comprising a memory, a processor, and a computer program stored on the memory, the processor executing the computer program to implement the training method described in any one of the above.
  • the present invention also provides a computer-readable storage medium, where computer instructions are stored on the computer-readable storage medium, and when the computer instructions are executed by a processor, implement the training method described in any of the above.
  • the training method of the low-dose image denoising network uses the attributes of the anatomical structure as an input to the low-dose image denoising network, thereby integrating the attributes of the anatomical structure into the image reconstruction process, so that the trained low-dose image denoising network can adapt to different anatomical structures, improving robustness while maintaining the quality of the reconstructed images.
  • FIG. 1 is a flowchart of a training method for a low-dose image denoising network in Embodiment 1 of the present invention
  • FIG. 2 is a schematic structural diagram of a low-dose image denoising network in Embodiment 1 of the present invention.
  • FIG. 3 is a schematic structural diagram of a weight prediction unit in Embodiment 1 of the present invention.
  • FIG. 4 is a schematic flowchart of step S3 in Embodiment 1 of the present invention.
  • FIGS. 5a-5c are schematic diagrams of a standard-dose image, a low-dose image, and an output image in Embodiment 1 of the present invention.
  • FIG. 6 is a schematic structural diagram of a training system for a low-dose image denoising network in Embodiment 2 of the present invention.
  • FIG. 7 is a schematic structural diagram of a computer device in Embodiment 4 of the present invention.
  • the training method of the low-dose image denoising network proposed in this application includes:
  • the training data set including a plurality of input parameter groups, each of the input parameter groups including low-dose images, attributes, and standard-dose images of anatomical structures;
  • the low-dose image denoising network comprising an attribute fusion module, a spatial information fusion module and a generation module;
  • the low-dose image denoising network is trained using the training data set, the parameters of the low-dose image denoising network are obtained, and the low-dose image denoising network is updated.
  • the training method of the low-dose image denoising network uses the attributes of the anatomical structure as an input to the low-dose image denoising network, so that the attributes of the anatomical structure are fused into the image reconstruction process and the trained low-dose image denoising network can adapt to different anatomical structures, improving robustness while maintaining the quality of the reconstructed images.
  • Taking CT images as an example, the training method of the low-dose image denoising network, the low-dose image denoising method, the computer device and the storage medium of this application are described in detail below through several specific embodiments with reference to the accompanying drawings.
  • It should be noted that taking CT images as an example does not limit the field of application of this application; the present application can also be applied to other medical imaging fields such as PET and SPECT.
  • the training method of the low-dose image denoising network in this embodiment includes the steps:
  • S1, obtaining a training data set, the training data set including a plurality of input parameter groups, each input parameter group including a low-dose image of an anatomical structure, an attribute, and a standard-dose image;
  • S2, establishing a low-dose image denoising network, the low-dose image denoising network including an attribute fusion module, a spatial information fusion module and a generation module;
  • S3, training the low-dose image denoising network using the training data set, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network.
  • in step S1, the training data set in this embodiment is:
  • D = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_n, y_n)}
  • where n represents the number of input parameter groups in the training data set, x_i represents the low-dose image in the i-th input parameter group, and y_i represents the standard-dose image in the i-th input parameter group.
  • the n low-dose images {x_1, x_2, ..., x_i, ..., x_n} include low-dose CT images of different anatomical parts, i.e. the n low-dose images have different attributes; x_i and y_i with the same subscript in the n low-dose images and the n standard-dose images {y_1, y_2, ..., y_i, ..., y_n} denote the low-dose CT image and the standard-dose CT image of the same anatomical part.
  • the different anatomical parts may include the skull, the orbit, the sinuses, the neck, the lung cavity, the abdomen, the pelvic cavity (male), the pelvic cavity (female), the knee, and the lumbar spine.
  • low-dose images and standard-dose images in the training data set used for training in this embodiment are selected from sample data sets commonly used in the art, which are not specifically limited here.
  • the low-dose image denoising network constructed in this embodiment includes an attribute fusion module 1 , a plurality of spatial information fusion modules 2 and a generation module 3 that are cascaded in sequence.
  • the attribute fusion module 1 is used to fuse the attributes of the low-dose image with the features of the low-dose image to generate weighted features.
  • the plurality of spatial information fusion modules 2 are used to obtain the spatial information and image features of the weight feature and generate spatial information fusion features according to the spatial information and the image features.
  • the generating module 3 is used for generating standard dose images according to the spatial information fusion feature.
  • the attribute fusion module 1 includes a weight prediction unit 11 , a first feature extraction unit and a first fusion unit 13 .
  • the weight prediction unit 11 is used to generate a weight mask of the anatomical structure according to the attributes of the anatomical structure
  • the first feature extraction unit is used to extract the features of the low-dose image of the anatomical structure
  • the first fusion unit 13 is used to fuse the weight mask with the features of the low-dose image.
  • the weight prediction unit 11 includes a plurality of convolution layers 111 and a plurality of activation functions 112, and the plurality of convolution layers 111 and the plurality of activation functions 112 are alternately cascaded in sequence.
  • the attributes of the anatomical structure are compressed and expanded on the channels through a plurality of convolutional layers 111, so as to obtain a weight mask with a predetermined number of channels.
  • after each convolution layer 111 performs its convolution operation, the activation function 112 applies nonlinear processing to the convolved data.
  • the attributes of the anatomical structures in this embodiment are encoded using one-hot encoding: for the attribute of each anatomical structure, only the attribute bit corresponding to that anatomical structure is 1, and all other attribute bits are 0. For example, if the anatomical structures include the skull, orbit, sinus, neck and lung cavity, then {0,1,0,0,0} represents the attribute of the orbit, and so on.
  • the weight prediction unit 11 in this embodiment further includes a splicing layer 113 .
  • the splicing layer 113 is used for splicing the output data of the convolutional layers 111 with the same number of channels among the multiple convolutional layers 111 .
  • FIG. 3 exemplarily shows a weight prediction unit 11 that includes 7 convolution layers 111, 7 activation functions 112 and 2 splicing layers 113; the parameter settings of the weight prediction unit 11 are given in Table 1 in the description below.
  • the number of output channels of the first convolutional layer 111 and the number of output channels of the fifth convolutional layer 111 are both 64; therefore, a splicing layer 113 is cascaded between the fifth activation function 112 and the sixth convolutional layer 111. The splicing layer 113 can use any of several splicing methods; in this embodiment, to reduce computational complexity, the splicing layer 113 adopts the simplest image splicing method (channel concatenation).
  • for example, the output of the first activation function 112 is 512×512×64 and the output of the fifth activation function 112 is also 512×512×64, so the spliced output can be 512×512×128.
  • similarly, the number of output channels of the second convolutional layer 111 and the number of output channels of the fourth convolutional layer 111 are both 32, and a splicing layer 113 is cascaded between the fourth activation function 112 and the fifth convolutional layer 111.
  • the first to sixth activation functions 112 are all ReLU functions, and the seventh activation function 112 is a Sigmoid function, which finally generates a weight mask with 64 channels.
  • the first feature extraction unit includes a convolution layer 12; the convolution kernel of the convolution layer 12 is 3×3, the number of input channels is 1, and the number of output channels is 64. The features of the low-dose image are extracted through the convolution layer 12.
  • the first fusion unit 13 includes a multiplier 131, a splicing layer 132 and a convolutional layer 133.
  • the multiplier 131 is used to point-multiply the weight mask with the features of the low-dose image to obtain features carrying attribute information; the splicing layer 132 is used to splice the features carrying attribute information with the features of the low-dose image, so as to better preserve the original image information and avoid loss of image information.
  • the convolution kernel of the convolutional layer 133 is 3×3, the number of input channels is 128, and the number of output channels is 64; the output data of the splicing layer 132 is convolved by the convolutional layer 133 to obtain the weight feature in which the weight mask is fused with the features of the low-dose image.
  • Each spatial information fusion module 2 includes a second feature extraction unit 21 , a third feature extraction unit 22 and a second fusion unit 23 .
  • the second feature extraction unit 21 is used to extract the spatial information of the weight feature
  • the third feature extraction unit 22 is used to extract the image feature of the weight feature
  • the second fusion unit 23 is used to fuse the spatial information with the image feature.
  • the second feature extraction unit 21 includes two convolution layers 211 and two activation functions 212 .
  • the two convolution layers 211 and the two activation functions 212 are alternately cascaded in turn, and the weight features are extracted through the two convolution layers 211 space information.
  • after each convolution layer 211 performs its convolution operation, the activation function 212 applies nonlinear processing to the convolved data.
  • the first activation function 212 is a ReLU function and the second activation function 212 is a Sigmoid function; the Sigmoid function constrains the output of the second feature extraction unit 21 to between 0 and 1.
  • the third feature extraction unit 22 includes two convolution layers 221 and an activation function 222 , the activation functions 222 are respectively connected with the two convolution layers 221 , and the image features of the weight feature are extracted through the two convolution layers 221 .
  • after the first convolution layer 221 performs its convolution operation, the activation function 222 applies nonlinear processing to the convolved data.
  • the activation function 222 is a ReLU function.
  • the second fusion unit 23 includes a multiplier 231 , a splicing layer 232 , a convolution layer 233 and an adder 234 .
  • the multiplier 231 is used to point-multiply the spatial information with the image features output by the third feature extraction unit 22 to obtain features carrying spatial information;
  • the splicing layer 232 is used to splice the features carrying spatial information with the image features output by the third feature extraction unit 22, so as to better preserve the original image information and avoid loss of image information.
  • the convolution layer 233 convolves the data spliced by the splicing layer 232, and the adder 234 fuses the output of the convolution layer 233 with the data input to the multiplier 231, finally obtaining image features fused with spatial information.
  • this embodiment builds a deeper network structure model by providing multiple spatial information fusion modules 2; preferably, the number of spatial information fusion modules 2 in this embodiment is 15. It should be noted here that FIG. 2 only exemplarily shows the case where the low-dose image denoising network includes three spatial information fusion modules 2, and is not intended to limit the number of spatial information fusion modules.
  • the parameters of the spatial information fusion module 2 in this embodiment are given in Table 2 in the description below; they are shown only as an example, and the specific parameters of the spatial information fusion module 2 can be set according to actual needs.
  • the generation module 3 includes an adder 31 and a convolution layer 32.
  • the adder 31 fuses the data output by the last spatial information fusion module 2 with the data output by the attribute fusion module 1, so as to better preserve the original image information and avoid loss of image information.
  • the convolutional layer 32 reconstructs the fused data to obtain a standard dose image.
  • in step S3, the low-dose image denoising network is trained using the training data set, the parameters of the low-dose image denoising network are obtained, and the low-dose image denoising network is updated. Step S3 specifically includes:
  • S31, inputting the low-dose images and attributes in the plurality of input parameter groups into the low-dose image denoising network to obtain a plurality of output images;
  • S32, constructing a loss function from the plurality of output images and the standard-dose images in the plurality of input parameter groups;
  • S33, optimizing the loss function, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network.
  • in step S32, the loss function constructed from the plurality of output images and the standard-dose images in the plurality of input parameter groups is:
  • loss(θ) = (1/n) Σ_{i=1}^{n} |G(X_i; a_i; θ) - Y_i|
  • where θ represents the network parameters of the low-dose image denoising network, loss(θ) represents the loss function, n represents the number of input parameter groups in the training data set, G(X_i; a_i; θ) represents the i-th output image, and Y_i represents the standard-dose image in the i-th input parameter group.
  • the absolute value difference is used as the loss function, which can increase the differentiation between various regions of the image, thereby making the boundaries between the various regions in the image clearer.
  • in step S33, the Adam optimization algorithm is used to minimize the loss function and obtain the optimized network parameters.
  • one iteration of the Adam optimization algorithm is:
  • compute the gradient: g = ∇_θ loss(θ);
  • biased first-moment estimate: s(k+1) = ρ1·s(k) + (1 - ρ1)·g;
  • biased second-moment estimate: r(k+1) = ρ2·r(k) + (1 - ρ2)·g⊙g;
  • corrected first moment: ŝ = s(k+1)/(1 - ρ1^(k+1));
  • corrected second moment: r̂ = r(k+1)/(1 - ρ2^(k+1));
  • parameter update value: Δθ = -ε·ŝ/(√r̂ + δ);
  • update the network parameters: θ = θ + Δθ.
  • it is then judged whether the number of iterations equals the preset number of termination iterations; if so, the updated network parameters θ are output; if not, the next iteration is continued until the number of iterations equals the preset number of termination iterations.
  • the number of iterations can be set according to actual needs, which is not limited here.
  • in the above optimization algorithm, the initial conditions of the first iteration are the initial network parameters θ, k = 0, s(k) = 0 and r(k) = 0; ∇ denotes the gradient operator; the default value of ρ1 is 0.9; the default value of ρ2 is 0.999; k is the number of iterations; ε is the learning rate, with a default value of 0.0001; δ is a small constant, with a default value of 10^-8.
  • this embodiment constructs the loss function from the mean square errors between the multiple output images and the standard-dose images in the multiple input parameter groups.
  • of course, this embodiment can also construct the loss function in other ways.
  • for example, the loss function can be constructed from the absolute value errors between the output images and the standard-dose images in the multiple input parameter groups.
  • in step S33, a corresponding optimization method can be selected to optimize the loss function according to the actual application. For example, if the low-dose image denoising network in this embodiment is applied to supervised learning, the Adam optimization method is used to optimize the loss function; if the low-dose image denoising network in this embodiment is applied to a generative adversarial model, the SGD optimization method is used to optimize the loss function.
  • after the above optimization, the updated low-dose image denoising network is obtained.
  • the attributes of the anatomical structure are used as an input to the low-dose image denoising network, so that the attributes of the anatomical structure are fused into the image reconstruction process and the trained low-dose image denoising network can be applied to different anatomical structures, improving robustness while ensuring the quality of the reconstructed images.
  • FIGS. 5a-5c exemplarily show the standard-dose image, the low-dose image and the output image in this embodiment; it can be seen from the figures that the output image reconstructed with the low-dose image denoising network of this embodiment preserves the image details well, and the reconstructed image has high definition.
  • this embodiment provides a training system for a low-dose image denoising network.
  • the training system includes a training data set acquisition module 100 , a network construction module 101 , and a training module 102 .
  • the training data set obtaining module 100 is used for obtaining a training data set, wherein the training data set includes a plurality of input parameter groups, and each input parameter group includes low-dose images of anatomical structures, attributes, and standard-dose images.
  • the training data set in this embodiment is:
  • D = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_n, y_n)}
  • where n represents the number of input parameter groups in the training data set, x_i represents the low-dose image in the i-th input parameter group, and y_i represents the standard-dose image in the i-th input parameter group.
  • the n low-dose images {x_1, x_2, ..., x_i, ..., x_n} include low-dose CT images of different anatomical parts, i.e. the n low-dose images have different attributes; x_i and y_i with the same subscript in the n low-dose images and the n standard-dose images {y_1, y_2, ..., y_i, ..., y_n} denote the low-dose CT image and the standard-dose CT image of the same anatomical part.
  • the different anatomical parts may include the skull, the orbit, the sinuses, the neck, the lung cavity, the abdomen, the pelvic cavity (male), the pelvic cavity (female), the knee, and the lumbar spine.
  • low-dose images and standard-dose images in the training data set used for training in this embodiment are selected from sample data sets commonly used in the art, which are not specifically limited here.
  • the network building module 101 is used to establish a low-dose image denoising network, and the low-dose image denoising network includes an attribute fusion module, a spatial information fusion module and a generation module.
  • the training module 102 is used to train the low-dose image denoising network using the training data set, obtain parameters of the low-dose image denoising network, and update the low-dose image denoising network.
  • This embodiment provides a low-dose image denoising method, the denoising method including: inputting a low-dose image to be denoised into a low-dose image denoising network obtained with the training method of the low-dose image denoising network described in Embodiment 1, and obtaining a reconstructed low-dose image.
  • the denoising method in this embodiment includes two implementations.
  • the first implementation uses the low-dose image denoising network already trained in Embodiment 1 as the denoising network for low-dose images: the low-dose image to be denoised is input into this low-dose image denoising network to obtain the reconstructed low-dose image.
  • the second implementation first trains the low-dose image denoising network using the training method described in Embodiment 1, and then inputs the low-dose image to be denoised into the trained low-dose image denoising network to obtain the reconstructed low-dose image.
  • the denoising method in this embodiment can be applied to different anatomical structures, and the details of the original image can be better extracted, so that the reconstructed image is clearer.
  • this embodiment provides a computer device including a processor 200 and a memory 201, with a computer program stored on the memory 201; the processor 200 executes the computer program to implement the training method described in Embodiment 1.
  • the memory 201 may include high-speed random access memory (RAM), and may also include non-volatile memory, such as at least one disk memory.
  • the processor 200 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the training method described in Embodiment 1 may be completed by an integrated logic circuit of hardware in the processor 200 or an instruction in the form of software.
  • the processor 200 may also be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc., or a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • the memory 201 is used to store a computer program. After the processor 200 receives the execution instruction, the computer program is executed to implement the training method described in the first embodiment.
  • This embodiment also provides a computer storage medium, where a computer program is stored in the computer storage medium, and the processor 200 is configured to read and execute the computer program stored in the computer storage medium, so as to implement the training method described in the first embodiment .
  • the above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware or any combination thereof.
  • when implemented using software, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present invention are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer storage medium or transmitted from one computer storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave).
  • the computer storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, or the like that includes an integration of one or more available media.
  • the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVDs), or semiconductor media (eg, solid state disks (SSDs)), and the like.
  • Embodiments of the present invention are described with reference to flowcharts and/or block diagrams of methods, apparatuses, and computer program products according to embodiments of the present invention. It will be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means, the instruction means implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a training method for a low-dose image denoising network, a low-dose image denoising method, a computer device and a storage medium, including: obtaining a training data set, the training data set including a plurality of input parameter groups, each input parameter group including a low-dose image of an anatomical structure, an attribute, and a standard-dose image; establishing a low-dose image denoising network, the low-dose image denoising network including an attribute fusion module, a spatial information fusion module and a generation module; and training the low-dose image denoising network using the training data set to obtain the parameters of the low-dose image denoising network. The training method provided by the present invention uses the attributes of the anatomical structure as an input to the low-dose image denoising network, thereby fusing the attributes of the anatomical structure into the image reconstruction process, so that the trained low-dose image denoising network can adapt to different anatomical structures, improving robustness while ensuring the quality of the reconstructed images.

Description

Training method for low-dose image denoising network, and denoising method for low-dose images. Technical Field
The present invention relates to the technical field of image reconstruction, and in particular to a training method for a low-dose image denoising network, a low-dose image denoising method, a computer device and a storage medium.
Background Art
Computed tomography (CT) is an important imaging method for obtaining information about the internal structure of an object non-destructively. It has many advantages, such as high resolution, high sensitivity and multiple levels of detail; it is among the most widely installed medical imaging diagnostic equipment in China and is widely used in many fields of clinical examination. However, because X-rays must be used during CT scanning, the issue of CT radiation dose has received more and more attention as the potential hazards of radiation have become better understood. The As Low As Reasonably Achievable (ALARA) principle requires that the radiation dose to the patient be reduced as much as possible while still satisfying the needs of clinical diagnosis; as the dose is reduced, however, more noise appears in the imaging process, resulting in poor imaging quality. Researching and developing new low-dose CT imaging methods that guarantee CT imaging quality while reducing the harmful radiation dose is therefore of great scientific significance and has broad application prospects in the field of medical diagnosis. Because different anatomical parts differ greatly in structure, and existing low-dose CT imaging methods ignore these differences in anatomical structure, their robustness is poor.
Summary of the Invention
To remedy the deficiencies of the prior art, the present invention provides a training method for a low-dose image denoising network, a low-dose image denoising method, a computer device and a storage medium, which fuse the attributes of anatomical structures into the image reconstruction process and improve the robustness of the denoising method as well as the quality of the reconstructed image.
The specific technical solution proposed by the present invention is a training method for a low-dose image denoising network, the training method comprising:
obtaining a training data set, the training data set including a plurality of input parameter groups, each input parameter group including a low-dose image of an anatomical structure, an attribute, and a standard-dose image;
establishing a low-dose image denoising network, the low-dose image denoising network including an attribute fusion module, a plurality of spatial information fusion modules and a generation module cascaded in sequence;
training the low-dose image denoising network using the training data set, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network.
Further, the attribute fusion module includes a weight prediction unit, a first feature extraction unit and a first fusion unit; the weight prediction unit is used to obtain a weight mask corresponding to the anatomical structure according to the attribute, the first feature extraction unit is used to extract the features of the low-dose image, and the first fusion unit is used to fuse the weight mask with the features of the low-dose image to obtain a weight feature.
Further, the weight prediction unit includes a plurality of convolution layers and a plurality of activation functions, the plurality of convolution layers and the plurality of activation functions being alternately cascaded in sequence.
Further, the weight prediction unit further includes a splicing layer, the splicing layer being used to splice the outputs of those convolution layers, among the plurality of convolution layers, that have the same number of output channels.
Further, the spatial information fusion module includes a second feature extraction unit, a third feature extraction unit and a second fusion unit; the second feature extraction unit is used to extract the spatial information of the weight feature, the third feature extraction unit is used to extract the image features of the weight feature, and the second fusion unit is used to fuse the spatial information with the image features.
Further, training the low-dose image denoising network using the training data set, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network includes:
inputting the low-dose images and attributes in the plurality of input parameter groups into the low-dose image denoising network to obtain a plurality of output images;
constructing a loss function from the plurality of output images and the standard-dose images in the plurality of input parameter groups;
optimizing the loss function, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network.
Further, the loss function is:
loss(θ) = (1/n) Σ_{i=1}^{n} |G(X_i; a_i; θ) - Y_i|
where θ represents the network parameters of the low-dose image denoising network, loss(θ) represents the loss function, n represents the number of input parameter groups in the training data set, G(X_i; a_i; θ) represents the i-th output image, and Y_i represents the standard-dose image in the i-th input parameter group.
The present invention also provides a low-dose image denoising method, the denoising method comprising: inputting a low-dose image to be denoised into a low-dose image denoising network obtained by the above training method of the low-dose image denoising network, and obtaining a reconstructed low-dose image.
The present invention also provides a computer device, including a memory, a processor, and a computer program stored on the memory, the processor executing the computer program to implement the training method described in any one of the above.
The present invention also provides a computer-readable storage medium on which computer instructions are stored, the computer instructions, when executed by a processor, implementing the training method described in any one of the above.
The training method of the low-dose image denoising network provided by the present invention uses the attributes of the anatomical structure as an input to the low-dose image denoising network, thereby fusing the attributes of the anatomical structure into the image reconstruction process, so that the trained low-dose image denoising network can adapt to different anatomical structures, improving robustness while ensuring the quality of the reconstructed images.
Brief Description of the Drawings
FIG. 1 is a flowchart of the training method of the low-dose image denoising network in Embodiment 1 of the present invention;
FIG. 2 is a schematic structural diagram of the low-dose image denoising network in Embodiment 1 of the present invention;
FIG. 3 is a schematic structural diagram of the weight prediction unit in Embodiment 1 of the present invention;
FIG. 4 is a schematic flowchart of step S3 in Embodiment 1 of the present invention;
FIGS. 5a-5c are schematic diagrams of the standard-dose image, the low-dose image and the output image in Embodiment 1 of the present invention;
FIG. 6 is a schematic structural diagram of the training system of the low-dose image denoising network in Embodiment 2 of the present invention;
FIG. 7 is a schematic structural diagram of the computer device in Embodiment 4 of the present invention.
Detailed Description of the Embodiments
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as being limited to the specific embodiments set forth herein; rather, these embodiments are provided to explain the principles of the invention and its practical application, so that others skilled in the art can understand the various embodiments of the invention and the various modifications suited to particular intended applications. In the drawings, the same reference numerals are always used to denote the same elements.
The training method of the low-dose image denoising network proposed in this application includes:
obtaining a training data set, the training data set including a plurality of input parameter groups, each input parameter group including a low-dose image of an anatomical structure, an attribute, and a standard-dose image;
establishing a low-dose image denoising network, the low-dose image denoising network including an attribute fusion module, a spatial information fusion module and a generation module;
training the low-dose image denoising network using the training data set, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network.
The training method of the low-dose image denoising network provided by this application uses the attributes of the anatomical structure as an input to the low-dose image denoising network, thereby fusing the attributes of the anatomical structure into the image reconstruction process, so that the trained low-dose image denoising network can adapt to different anatomical structures, improving robustness while ensuring the quality of the reconstructed images.
Taking CT images as an example, the training method of the low-dose image denoising network, the low-dose image denoising method, the computer device and the storage medium of this application are described in detail below through several specific embodiments with reference to the accompanying drawings. It should be noted that taking CT images as an example does not limit the field of application of this application; this application can also be applied to other medical imaging fields such as PET and SPECT.
Embodiment 1
Referring to FIG. 1, the training method of the low-dose image denoising network in this embodiment includes the steps:
S1, obtaining a training data set, the training data set including a plurality of input parameter groups, each input parameter group including a low-dose image of an anatomical structure, an attribute, and a standard-dose image;
S2, establishing a low-dose image denoising network, the low-dose image denoising network including an attribute fusion module, a spatial information fusion module and a generation module;
S3, training the low-dose image denoising network using the training data set, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network.
Specifically, in step S1, the training data set in this embodiment is:
D = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_n, y_n)},
where n represents the number of input parameter groups in the training data set, x_i represents the low-dose image in the i-th input parameter group, and y_i represents the standard-dose image in the i-th input parameter group. The n low-dose images {x_1, x_2, ..., x_i, ..., x_n} include low-dose CT images of different anatomical parts, i.e. the n low-dose images have different attributes; x_i and y_i with the same subscript in the n low-dose images and the n standard-dose images {y_1, y_2, ..., y_i, ..., y_n} denote the low-dose CT image and the standard-dose CT image of the same anatomical part. The different anatomical parts may include the skull, orbit, sinuses, neck, lung cavity, abdomen, pelvic cavity (male), pelvic cavity (female), knee and lumbar spine.
It should be noted here that the low-dose images and standard-dose images in the training data set used for training in this embodiment are selected from sample data sets commonly used in the art, and are not specifically limited here.
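To make the structure of the training set concrete, the following is a minimal PyTorch sketch of one way to organize the input parameter groups. PyTorch itself, the class name LowDoseDataset and the ordering of the anatomical parts are our assumptions for illustration, not details given by the patent:

```python
import torch
from torch.utils.data import Dataset

# Assumed ordering of the ten anatomical parts for one-hot attribute encoding.
PARTS = ["skull", "orbit", "sinus", "neck", "lung", "abdomen",
         "pelvis_male", "pelvis_female", "knee", "lumbar"]

class LowDoseDataset(Dataset):
    """One item per input parameter group: (low-dose image, attribute, standard-dose image)."""

    def __init__(self, samples):
        # samples: list of (low_dose HxW array, part name, standard_dose HxW array)
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        low, part, std = self.samples[i]
        x = torch.as_tensor(low, dtype=torch.float32).unsqueeze(0)  # 1 x H x W
        y = torch.as_tensor(std, dtype=torch.float32).unsqueeze(0)  # 1 x H x W
        a = torch.zeros(len(PARTS))
        a[PARTS.index(part)] = 1.0  # one-hot attribute of the anatomical part
        return x, a, y
```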
Referring to FIG. 2, the low-dose image denoising network constructed in this embodiment includes an attribute fusion module 1, a plurality of spatial information fusion modules 2 and a generation module 3 cascaded in sequence. The attribute fusion module 1 is used to fuse the attributes of the low-dose image with the features of the low-dose image to generate weight features. The plurality of spatial information fusion modules 2 are used to obtain the spatial information and image features of the weight features and to generate spatial-information-fused features from the spatial information and the image features. The generation module 3 is used to generate a standard-dose image from the spatial-information-fused features.
Specifically, the attribute fusion module 1 includes a weight prediction unit 11, a first feature extraction unit and a first fusion unit 13. The weight prediction unit 11 is used to generate a weight mask of the anatomical structure according to the attribute of the anatomical structure, the first feature extraction unit is used to extract the features of the low-dose image of the anatomical structure, and the first fusion unit 13 is used to fuse the weight mask with the features of the low-dose image.
Referring to FIG. 3, the weight prediction unit 11 includes a plurality of convolution layers 111 and a plurality of activation functions 112, the convolution layers 111 and the activation functions 112 being alternately cascaded in sequence. The attribute of the anatomical structure is compressed and expanded along the channel dimension by the plurality of convolution layers 111 to obtain a weight mask with a predetermined number of channels. After each convolution layer 111 performs its convolution operation, the activation function 112 applies nonlinear processing to the convolved data.
The attributes of the anatomical structures in this embodiment are encoded using one-hot encoding: for the attribute of each anatomical structure, only the attribute bit corresponding to that anatomical structure is 1, and all other attribute bits are 0. For example, if the anatomical structures include the skull, orbit, sinuses, neck and lung cavity, then {0,1,0,0,0} represents the attribute of the orbit, and so on.
In order to retain more context information, the weight prediction unit 11 in this embodiment further includes splicing layers 113. A splicing layer 113 is used to splice the output data of those convolution layers 111, among the plurality of convolution layers 111, that have the same number of channels.
FIG. 3 exemplarily shows a weight prediction unit 11 that includes 7 convolution layers 111, 7 activation functions 112 and 2 splicing layers 113; the parameters of the weight prediction unit 11 are set as shown in the following table:
Table 1: Parameters of the weight prediction unit

Unit                       Kernel   Input channels   Output channels
First convolution layer    1×1      10               64
Second convolution layer   1×1      64               32
Third convolution layer    1×1      32               16
Fourth convolution layer   1×1      16               32
Fifth convolution layer    1×1      64               64
Sixth convolution layer    1×1      128              64
Seventh convolution layer  1×1      64               64
The number of output channels of the first convolution layer 111 and of the fifth convolution layer 111 are both 64; therefore, a splicing layer 113 is cascaded between the fifth activation function 112 and the sixth convolution layer 111. The splicing layer 113 may use any of several splicing methods; in this embodiment, to reduce computational complexity, the splicing layer 113 adopts the simplest image splicing method (channel concatenation). For example, if the output of the first activation function 112 is 512×512×64 and the output of the fifth activation function 112 is also 512×512×64, the spliced result can be 512×512×128. Similarly, the numbers of output channels of the second convolution layer 111 and of the fourth convolution layer 111 are both 32, and a splicing layer 113 is cascaded between the fourth activation function 112 and the fifth convolution layer 111. The first to sixth activation functions 112 are all ReLU functions, and the seventh activation function 112 is a Sigmoid function, finally generating a weight mask with 64 channels.
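Read together with Table 1, this wiring can be written down directly. The following PyTorch sketch is our reading of it, not code from the patent; in particular, broadcasting the one-hot attribute vector to the image's spatial size before the 1×1 convolutions is an assumption made so that the 512×512×64 intermediate shapes quoted above come out:

```python
import torch
import torch.nn as nn

class WeightPredictionUnit(nn.Module):
    """Weight prediction unit 11: seven 1x1 conv layers, ReLU x6 then Sigmoid,
    with two channel-concatenation ("splicing") layers, per Table 1."""

    def __init__(self, n_attrs=10, mask_channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(n_attrs, 64, 1)
        self.conv2 = nn.Conv2d(64, 32, 1)
        self.conv3 = nn.Conv2d(32, 16, 1)
        self.conv4 = nn.Conv2d(16, 32, 1)
        self.conv5 = nn.Conv2d(64, 64, 1)    # input: cat(a2, a4) -> 32 + 32
        self.conv6 = nn.Conv2d(128, 64, 1)   # input: cat(a1, a5) -> 64 + 64
        self.conv7 = nn.Conv2d(64, mask_channels, 1)
        self.relu = nn.ReLU()

    def forward(self, attr, height, width):
        # Assumption: expand the one-hot attribute (B x n_attrs) spatially.
        x = attr[:, :, None, None].expand(-1, -1, height, width)
        a1 = self.relu(self.conv1(x))                       # B x 64 x H x W
        a2 = self.relu(self.conv2(a1))                      # B x 32 x H x W
        a3 = self.relu(self.conv3(a2))                      # B x 16 x H x W
        a4 = self.relu(self.conv4(a3))                      # B x 32 x H x W
        a5 = self.relu(self.conv5(torch.cat([a2, a4], 1)))  # splice 32+32 -> 64
        a6 = self.relu(self.conv6(torch.cat([a1, a5], 1)))  # splice 64+64 -> 128
        return torch.sigmoid(self.conv7(a6))                # 64-channel weight mask
```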
Referring again to FIG. 2, the first feature extraction unit includes a convolution layer 12; the convolution kernel of the convolution layer 12 is 3×3, the number of input channels is 1, and the number of output channels is 64. The features of the low-dose image are extracted through the convolution layer 12.
The first fusion unit 13 includes a multiplier 131, a splicing layer 132 and a convolution layer 133. The multiplier 131 is used to point-multiply the weight mask with the features of the low-dose image to obtain features carrying attribute information; the splicing layer 132 is used to splice the features carrying attribute information with the features of the low-dose image, so as to better preserve the original image information and avoid loss of image information.
The convolution kernel of the convolution layer 133 is 3×3, the number of input channels is 128, and the number of output channels is 64; the output data of the splicing layer 132 is convolved by the convolution layer 133 to obtain the weight feature in which the weight mask is fused with the features of the low-dose image.
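Continuing the sketch above, the attribute fusion module 1 can then be written as follows. This is again a hedged reading: padding=1 (so the 3×3 convolutions preserve the spatial size) is an assumption, as is the absence of activations after conv 12 and conv 133, about which the text says nothing:

```python
class AttributeFusionModule(nn.Module):
    """Attribute fusion module 1: weight prediction unit 11, feature extraction
    conv 12 (3x3, 1 -> 64), and first fusion unit 13 (multiply, splice, conv)."""

    def __init__(self, n_attrs=10):
        super().__init__()
        self.predict = WeightPredictionUnit(n_attrs)
        self.conv12 = nn.Conv2d(1, 64, 3, padding=1)     # first feature extraction
        self.conv133 = nn.Conv2d(128, 64, 3, padding=1)  # conv after splicing

    def forward(self, image, attr):
        feats = self.conv12(image)                       # low-dose image features
        mask = self.predict(attr, image.shape[2], image.shape[3])
        weighted = mask * feats                          # multiplier 131 (point-multiply)
        spliced = torch.cat([weighted, feats], dim=1)    # splicing layer 132: 64 + 64
        return self.conv133(spliced)                     # 64-channel weight feature
```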
Each spatial information fusion module 2 includes a second feature extraction unit 21, a third feature extraction unit 22 and a second fusion unit 23. The second feature extraction unit 21 is used to extract the spatial information of the weight feature, the third feature extraction unit 22 is used to extract the image features of the weight feature, and the second fusion unit 23 is used to fuse the spatial information with the image features.
Specifically, the second feature extraction unit 21 includes two convolution layers 211 and two activation functions 212, alternately cascaded in sequence; the spatial information of the weight feature is extracted through the two convolution layers 211. After each convolution layer 211 performs its convolution operation, the activation function 212 applies nonlinear processing to the convolved data. The first activation function 212 is a ReLU function and the second activation function 212 is a Sigmoid function; the Sigmoid function constrains the output of the second feature extraction unit 21 to between 0 and 1.
The third feature extraction unit 22 includes two convolution layers 221 and one activation function 222, the activation function 222 being connected to the two convolution layers 221; the image features of the weight feature are extracted through the two convolution layers 221. After the first convolution layer 221 performs its convolution operation, the activation function 222 applies nonlinear processing to the convolved data. The activation function 222 is a ReLU function.
The second fusion unit 23 includes a multiplier 231, a splicing layer 232, a convolution layer 233 and an adder 234. The multiplier 231 is used to point-multiply the spatial information with the image features output by the third feature extraction unit 22 to obtain features carrying spatial information; the splicing layer 232 is used to splice the features carrying spatial information with the image features output by the third feature extraction unit 22, so as to better preserve the original image information and avoid loss of image information. The convolution layer 233 convolves the data spliced by the splicing layer 232, and the adder 234 fuses the output of the convolution layer 233 with the data input to the multiplier 231, finally obtaining image features fused with spatial information.
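A sketch of one spatial information fusion module under stated assumptions follows. The 64-channel width and 3×3 kernels are guesses (Table 2 with the actual parameters survives only as an image), and the text leaves ambiguous which of the multiplier's two inputs the adder 234 reuses; the sketch assumes the image features f:

```python
class SpatialInfoFusionModule(nn.Module):
    """Spatial information fusion module 2 (channel widths are assumptions)."""

    def __init__(self, ch=64):
        super().__init__()
        # Second feature extraction unit 21: conv-ReLU-conv-Sigmoid, output in [0, 1].
        self.spatial = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.Sigmoid())
        # Third feature extraction unit 22: conv-ReLU-conv.
        self.features = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.conv233 = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, x):
        s = self.spatial(x)                  # spatial information
        f = self.features(x)                 # image features
        m = s * f                            # multiplier 231 (point-multiply)
        spliced = torch.cat([m, f], dim=1)   # splicing layer 232
        # Adder 234: assumed to add back the image features f.
        return self.conv233(spliced) + f
```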
In order to better capture the features of the low-dose image, this embodiment builds a deeper network structure model by providing multiple spatial information fusion modules 2; preferably, the number of spatial information fusion modules 2 in this embodiment is 15. It should be noted here that FIG. 2 only exemplarily shows the case in which the low-dose image denoising network includes three spatial information fusion modules 2, and is not intended to limit the number of spatial information fusion modules.
Table 2 gives the parameters of the spatial information fusion module 2 in this embodiment; of course, this is shown only as an example, and the specific parameters of the spatial information fusion module 2 can be set according to actual needs. Specifically, the parameters of the spatial information fusion module 2 are as follows:
Table 2: Parameters of the spatial information fusion module [reproduced only as an image in the original publication]
The spatial-information-fused features are obtained after processing by the multiple spatial information fusion modules 2, and finally the generation module 3 generates the standard-dose image. The generation module 3 includes an adder 31 and a convolution layer 32; the adder 31 fuses the data output by the last spatial information fusion module 2 with the data output by the attribute fusion module 1, so as to better preserve the original image information and avoid loss of image information. The convolution layer 32 reconstructs the fused data to obtain the standard-dose image.
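Putting the pieces together gives the overall generator G(x; a; θ). Once more a sketch under assumptions: the patent does not give the kernel size or output channel count of conv layer 32, so a 3×3 kernel mapping back to a single image channel is assumed here:

```python
class LowDoseDenoisingNetwork(nn.Module):
    """Attribute fusion module 1 -> N spatial information fusion modules 2 ->
    generation module 3 (adder 31 + reconstruction conv 32)."""

    def __init__(self, n_attrs=10, n_spatial_modules=15, ch=64):
        super().__init__()
        self.attr_fusion = AttributeFusionModule(n_attrs)
        self.spatial_modules = nn.Sequential(
            *[SpatialInfoFusionModule(ch) for _ in range(n_spatial_modules)])
        self.conv32 = nn.Conv2d(ch, 1, 3, padding=1)  # reconstruction conv

    def forward(self, image, attr):
        w = self.attr_fusion(image, attr)   # weight features
        z = self.spatial_modules(w)         # spatial-information-fused features
        return self.conv32(z + w)           # adder 31, then conv layer 32
```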
Referring to FIG. 4, in step S3, training the low-dose image denoising network using the training data set, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network specifically includes the steps:
S31, inputting the low-dose images and attributes in the plurality of input parameter groups into the low-dose image denoising network to obtain a plurality of output images;
S32, constructing a loss function from the plurality of output images and the standard-dose images in the plurality of input parameter groups;
S33, optimizing the loss function, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network.
Specifically, in step S32, the loss function constructed from the plurality of output images and the standard-dose images in the plurality of input parameter groups is:
loss(θ) = (1/n) Σ_{i=1}^{n} |G(X_i; a_i; θ) - Y_i|
where θ represents the network parameters of the low-dose image denoising network, loss(θ) represents the loss function, n represents the number of input parameter groups in the training data set, G(X_i; a_i; θ) represents the i-th output image, and Y_i represents the standard-dose image in the i-th input parameter group.
This embodiment uses the absolute value difference as the loss function, which can increase the differentiation between the various regions of the image, making the boundaries between the regions in the image clearer.
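In PyTorch, this absolute-difference loss is simply an L1 loss over the batch; a minimal sketch, assuming the LowDoseDenoisingNetwork defined above:

```python
import torch.nn.functional as F

def denoising_loss(net, x, a, y):
    """Mean absolute difference between the output images G(x; a; theta)
    and the standard-dose images y."""
    return F.l1_loss(net(x, a), y)
```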
S33, optimizing the minimum of the loss function to obtain the optimized network parameters.
In step S33, the Adam optimization algorithm is used to minimize the loss function and obtain the optimized network parameters. One iteration of the Adam optimization algorithm is:
compute the gradient: g = ∇_θ loss(θ);
biased first-moment estimate: s(k+1) = ρ1·s(k) + (1 - ρ1)·g;
biased second-moment estimate: r(k+1) = ρ2·r(k) + (1 - ρ2)·g⊙g;
corrected first moment: ŝ = s(k+1)/(1 - ρ1^(k+1));
corrected second moment: r̂ = r(k+1)/(1 - ρ2^(k+1));
parameter update value: Δθ = -ε·ŝ/(√r̂ + δ);
update the network parameters: θ = θ + Δθ.
It is then judged whether the number of iterations equals the preset number of termination iterations; if so, the updated network parameters θ are output; if not, the next iteration is performed until the number of iterations equals the preset number of termination iterations. The number of iterations can be set according to actual needs and is not limited here.
In the above optimization algorithm, the initial conditions for the first iteration are the initial network parameters θ, k = 0, s(k) = 0, r(k) = 0; ∇ denotes the gradient operator; the default value of ρ1 is 0.9; the default value of ρ2 is 0.999; k is the number of iterations; ε is the learning rate, with a default value of 0.0001; δ is a small constant, with a default value of 10^-8.
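These are the standard Adam update rules, so in practice one would use torch.optim.Adam directly with the stated defaults. A sketch of one training epoch under that assumption, with loader yielding the (x, a, y) groups from the dataset sketch above:

```python
net = LowDoseDenoisingNetwork()
opt = torch.optim.Adam(net.parameters(), lr=1e-4,     # epsilon = 0.0001
                       betas=(0.9, 0.999), eps=1e-8)  # rho1, rho2, delta

def train_epoch(loader):
    for x, a, y in loader:
        opt.zero_grad()
        loss = denoising_loss(net, x, a, y)
        loss.backward()  # computes the gradient g
        opt.step()       # moment estimates, bias correction, theta update
```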
This embodiment constructs the loss function from the mean square errors between the plurality of output images and the standard-dose images in the plurality of input parameter groups; of course, this embodiment can also construct the loss function in other ways, for example from the absolute value errors between the plurality of output images and the standard-dose images in the plurality of input parameter groups.
In step S33, a corresponding optimization method can be selected according to the actual application to optimize the loss function. For example, if the low-dose image denoising network of this embodiment is applied to supervised learning, the Adam optimization method is used to optimize the loss function; if the low-dose image denoising network of this embodiment is applied to a generative adversarial model, the SGD optimization method is used to optimize the loss function.
After the above optimization, the updated low-dose image denoising network is obtained. This embodiment uses the attributes of the anatomical structure as an input to the low-dose image denoising network, thereby fusing the attributes of the anatomical structure into the image reconstruction process, so that the trained low-dose image denoising network can be applied to different anatomical structures, improving robustness while ensuring the quality of the reconstructed images. Referring to FIGS. 5a-5c, which exemplarily show the standard-dose image, the low-dose image and the output image of this embodiment, it can be seen from the figures that the output image reconstructed with the low-dose image denoising network of this embodiment preserves the image details well, and the reconstructed image has high definition.
Embodiment 2
Referring to FIG. 6, this embodiment provides a training system for a low-dose image denoising network; the training system includes a training data set acquisition module 100, a network construction module 101 and a training module 102.
The training data set acquisition module 100 is used to obtain a training data set, the training data set including a plurality of input parameter groups, each input parameter group including a low-dose image of an anatomical structure, an attribute, and a standard-dose image. The training data set in this embodiment is:
D = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_n, y_n)},
where n represents the number of input parameter groups in the training data set, x_i represents the low-dose image in the i-th input parameter group, and y_i represents the standard-dose image in the i-th input parameter group. The n low-dose images {x_1, x_2, ..., x_i, ..., x_n} include low-dose CT images of different anatomical parts, i.e. the n low-dose images have different attributes; x_i and y_i with the same subscript in the n low-dose images and the n standard-dose images {y_1, y_2, ..., y_i, ..., y_n} denote the low-dose CT image and the standard-dose CT image of the same anatomical part. The different anatomical parts may include the skull, orbit, sinuses, neck, lung cavity, abdomen, pelvic cavity (male), pelvic cavity (female), knee and lumbar spine.
It should be noted here that the low-dose images and standard-dose images in the training data set used for training in this embodiment are selected from sample data sets commonly used in the art, and are not specifically limited here.
The network construction module 101 is used to establish a low-dose image denoising network, the low-dose image denoising network including an attribute fusion module, a spatial information fusion module and a generation module.
The training module 102 is used to train the low-dose image denoising network using the training data set, obtain the parameters of the low-dose image denoising network and update the low-dose image denoising network.
Embodiment 3
This embodiment provides a low-dose image denoising method; the denoising method includes: inputting a low-dose image to be denoised into a low-dose image denoising network obtained with the training method of the low-dose image denoising network described in Embodiment 1, and obtaining a reconstructed low-dose image.
It should be noted here that the denoising method of this embodiment includes two implementations. The first implementation uses the low-dose image denoising network already trained in Embodiment 1 as the denoising network for low-dose images: the low-dose image to be denoised is input into this low-dose image denoising network to obtain the reconstructed low-dose image. The second implementation first trains the low-dose image denoising network using the training method of the low-dose image denoising network described in Embodiment 1, and then inputs the low-dose image to be denoised into the trained low-dose image denoising network to obtain the reconstructed low-dose image.
The denoising method of this embodiment can be applied to different anatomical structures and better extracts the details of the original image, making the reconstructed image clearer.
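As a usage illustration of this denoising method, continuing the Python sketches above (the checkpoint file name is hypothetical):

```python
net = LowDoseDenoisingNetwork()
net.load_state_dict(torch.load("denoiser.pt"))  # hypothetical trained checkpoint

@torch.no_grad()
def denoise(net, low_dose, part):
    """Reconstruct a low-dose image of the given anatomical part."""
    net.eval()
    x = torch.as_tensor(low_dose, dtype=torch.float32)[None, None]  # 1 x 1 x H x W
    a = torch.zeros(1, len(PARTS))
    a[0, PARTS.index(part)] = 1.0   # one-hot attribute of the anatomical part
    return net(x, a)[0, 0]          # reconstructed image, H x W
```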
Embodiment 4
Referring to FIG. 7, this embodiment provides a computer device including a processor 200 and a memory 201, with a computer program stored on the memory 201; the processor 200 executes the computer program to implement the training method described in Embodiment 1.
The memory 201 may include high-speed random access memory (RAM), and may also include non-volatile memory, for example at least one disk memory.
The processor 200 may be an integrated circuit chip with signal processing capability. In an implementation, each step of the training method described in Embodiment 1 may be completed by an integrated logic circuit of hardware in the processor 200 or by instructions in the form of software. The processor 200 may also be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc., or a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The memory 201 is used to store the computer program; after receiving an execution instruction, the processor 200 executes the computer program to implement the training method described in Embodiment 1.
This embodiment also provides a computer storage medium in which a computer program is stored; the processor 200 is used to read and execute the computer program stored in the computer storage medium to implement the training method described in Embodiment 1.
In the above embodiments, implementation may be in whole or in part by software, hardware, firmware or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer storage medium or transmitted from one computer storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks (SSDs)), etc.
Embodiments of the present invention are described with reference to flowcharts and/or block diagrams of methods, apparatuses and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means, the instruction means implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above are only specific implementations of this application. It should be pointed out that for those of ordinary skill in the art, several improvements and refinements can be made without departing from the principles of this application, and these improvements and refinements should also be regarded as falling within the scope of protection of this application.

Claims (20)

  1. A training method for a low-dose image denoising network, wherein the training method comprises:
    obtaining a training data set, the training data set comprising a plurality of input parameter groups, each input parameter group comprising a low-dose image of an anatomical structure, an attribute, and a standard-dose image;
    establishing a low-dose image denoising network, the low-dose image denoising network comprising an attribute fusion module, a plurality of spatial information fusion modules and a generation module cascaded in sequence;
    training the low-dose image denoising network using the training data set, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network.
  2. The training method according to claim 1, wherein the attribute fusion module comprises a weight prediction unit, a first feature extraction unit and a first fusion unit; the weight prediction unit is configured to obtain a weight mask corresponding to the anatomical structure according to the attribute, the first feature extraction unit is configured to extract features of the low-dose image, and the first fusion unit is configured to fuse the weight mask with the features of the low-dose image to obtain a weight feature.
  3. The training method according to claim 2, wherein the weight prediction unit comprises a plurality of convolution layers and a plurality of activation functions, the plurality of convolution layers and the plurality of activation functions being alternately cascaded in sequence.
  4. The training method according to claim 3, wherein the weight prediction unit further comprises a splicing layer, the splicing layer being configured to splice the outputs of those convolution layers, among the plurality of convolution layers, that have the same number of output channels.
  5. The training method according to claim 3, wherein the spatial information fusion module comprises a second feature extraction unit, a third feature extraction unit and a second fusion unit; the second feature extraction unit is configured to extract spatial information of the weight feature, the third feature extraction unit is configured to extract image features of the weight feature, and the second fusion unit is configured to fuse the spatial information with the image features.
  6. The training method according to claim 1, wherein training the low-dose image denoising network using the training data set, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network comprises:
    inputting the low-dose images and attributes in the plurality of input parameter groups into the low-dose image denoising network to obtain a plurality of output images;
    constructing a loss function from the plurality of output images and, respectively, the standard-dose images in the plurality of input parameter groups;
    optimizing the loss function, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network.
  7. The training method according to claim 2, wherein training the low-dose image denoising network using the training data set, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network comprises:
    inputting the low-dose images and attributes in the plurality of input parameter groups into the low-dose image denoising network to obtain a plurality of output images;
    constructing a loss function from the plurality of output images and, respectively, the standard-dose images in the plurality of input parameter groups;
    optimizing the loss function, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network.
  8. The training method according to claim 3, wherein training the low-dose image denoising network using the training data set, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network comprises:
    inputting the low-dose images and attributes in the plurality of input parameter groups into the low-dose image denoising network to obtain a plurality of output images;
    constructing a loss function from the plurality of output images and, respectively, the standard-dose images in the plurality of input parameter groups;
    optimizing the loss function, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network.
  9. The training method according to claim 4, wherein training the low-dose image denoising network using the training data set, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network comprises:
    inputting the low-dose images and attributes in the plurality of input parameter groups into the low-dose image denoising network to obtain a plurality of output images;
    constructing a loss function from the plurality of output images and, respectively, the standard-dose images in the plurality of input parameter groups;
    optimizing the loss function, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network.
  10. The training method according to claim 5, wherein training the low-dose image denoising network using the training data set, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network comprises:
    inputting the low-dose images and attributes in the plurality of input parameter groups into the low-dose image denoising network to obtain a plurality of output images;
    constructing a loss function from the plurality of output images and, respectively, the standard-dose images in the plurality of input parameter groups;
    optimizing the loss function, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network.
  11. The training method according to claim 6, wherein the loss function is:
    loss(θ) = (1/n) Σ_{i=1}^{n} |G(X_i; a_i; θ) - Y_i|
    where θ represents the network parameters of the low-dose image denoising network, loss(θ) represents the loss function, n represents the number of input parameter groups in the training data set, G(X_i; a_i; θ) represents the i-th output image, and Y_i represents the standard-dose image in the i-th input parameter group.
  12. A low-dose image denoising method, wherein the denoising method comprises: inputting a low-dose image to be denoised into a low-dose image denoising network obtained by a training method of the low-dose image denoising network, and obtaining a reconstructed low-dose image, the training method comprising:
    obtaining a training data set, the training data set comprising a plurality of input parameter groups, each input parameter group comprising a low-dose image of an anatomical structure, an attribute, and a standard-dose image;
    establishing a low-dose image denoising network, the low-dose image denoising network comprising an attribute fusion module, a plurality of spatial information fusion modules and a generation module cascaded in sequence;
    training the low-dose image denoising network using the training data set, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network.
  13. The denoising method according to claim 12, wherein the attribute fusion module comprises a weight prediction unit, a first feature extraction unit and a first fusion unit; the weight prediction unit is configured to obtain a weight mask corresponding to the anatomical structure according to the attribute, the first feature extraction unit is configured to extract features of the low-dose image, and the first fusion unit is configured to fuse the weight mask with the features of the low-dose image to obtain a weight feature.
  14. The denoising method according to claim 13, wherein the weight prediction unit comprises a plurality of convolution layers and a plurality of activation functions, the plurality of convolution layers and the plurality of activation functions being alternately cascaded in sequence.
  15. The denoising method according to claim 14, wherein the weight prediction unit further comprises a splicing layer, the splicing layer being configured to splice the outputs of those convolution layers, among the plurality of convolution layers, that have the same number of output channels.
  16. The denoising method according to claim 14, wherein the spatial information fusion module comprises a second feature extraction unit, a third feature extraction unit and a second fusion unit; the second feature extraction unit is configured to extract spatial information of the weight feature, the third feature extraction unit is configured to extract image features of the weight feature, and the second fusion unit is configured to fuse the spatial information with the image features.
  17. The denoising method according to claim 12, wherein training the low-dose image denoising network using the training data set, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network comprises:
    inputting the low-dose images and attributes in the plurality of input parameter groups into the low-dose image denoising network to obtain a plurality of output images;
    constructing a loss function from the plurality of output images and, respectively, the standard-dose images in the plurality of input parameter groups;
    optimizing the loss function, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network.
  18. The denoising method according to claim 13, wherein training the low-dose image denoising network using the training data set, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network comprises:
    inputting the low-dose images and attributes in the plurality of input parameter groups into the low-dose image denoising network to obtain a plurality of output images;
    constructing a loss function from the plurality of output images and, respectively, the standard-dose images in the plurality of input parameter groups;
    optimizing the loss function, obtaining the parameters of the low-dose image denoising network and updating the low-dose image denoising network.
  19. The denoising method according to claim 17, wherein the loss function is:
    loss(θ) = (1/n) Σ_{i=1}^{n} |G(X_i; a_i; θ) - Y_i|
    where θ represents the network parameters of the low-dose image denoising network, loss(θ) represents the loss function, n represents the number of input parameter groups in the training data set, G(X_i; a_i; θ) represents the i-th output image, and Y_i represents the standard-dose image in the i-th input parameter group.
  20. A computer device, comprising a memory, a processor and a computer program stored on the memory, wherein the processor executes the computer program to implement a training method for a low-dose image denoising network, the training method comprising:
    obtaining a training data set, the training data set comprising a plurality of input parameter groups, each input parameter group comprising a low-dose image of an anatomical structure, an attribute, and a standard-dose image;
    establishing a low-dose image denoising network, the low-dose image denoising network comprising an attribute fusion module, a plurality of spatial information fusion modules and a generation module cascaded in sequence;
    training the low-dose image denoising network using the training data set, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network.
PCT/CN2020/136210 2020-12-07 2020-12-14 Training method for low-dose image denoising network, and denoising method for low-dose images WO2022120883A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011437368.6 2020-12-07
CN202011437368.6A CN112541871B (zh) 2020-12-07 2020-12-07 Training method for low-dose image denoising network, and denoising method for low-dose images

Publications (1)

Publication Number Publication Date
WO2022120883A1 true WO2022120883A1 (zh) 2022-06-16

Family

ID=75019870

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/136210 WO2022120883A1 (zh) 2020-12-07 2020-12-14 Training method for low-dose image denoising network, and denoising method for low-dose images

Country Status (2)

Country Link
CN (1) CN112541871B (zh)
WO (1) WO2022120883A1 (zh)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298900B (zh) * 2021-04-30 2022-10-25 北京航空航天大学 Processing method based on low-signal-to-noise-ratio PET images
CN113256752B (zh) * 2021-06-07 2022-07-26 太原理工大学 Low-dose CT reconstruction method based on a dual-domain interleaved network
CN115526857A (zh) * 2022-09-26 2022-12-27 深圳先进技术研究院 PET image denoising method, terminal device and readable storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11580410B2 (en) * 2018-01-24 2023-02-14 Rensselaer Polytechnic Institute 3-D convolutional autoencoder for low-dose CT via transfer learning from a 2-D trained network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200357153A1 (en) * 2017-07-28 2020-11-12 Shanghai United Imaging Healthcare Co., Ltd. System and method for image conversion
CN110992290A (zh) * 2019-12-09 2020-04-10 深圳先进技术研究院 Training method and system for low-dose CT image denoising network
CN111179366A (zh) * 2019-12-18 2020-05-19 深圳先进技术研究院 Low-dose image reconstruction method and system based on anatomical structure difference prior
CN111325686A (zh) * 2020-02-11 2020-06-23 之江实验室 Deep-learning-based low-dose PET three-dimensional reconstruction method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116385319A (zh) * 2023-05-29 2023-07-04 中国人民解放军国防科技大学 Scene-cognition-based radar image coherent speckle filtering method and device
CN116385319B (zh) * 2023-05-29 2023-08-15 中国人民解放军国防科技大学 Scene-cognition-based radar image coherent speckle filtering method and device
CN117541481A (zh) * 2024-01-09 2024-02-09 广东海洋大学 Low-dose CT image restoration method, system and storage medium
CN117541481B (zh) * 2024-01-09 2024-04-05 广东海洋大学 Low-dose CT image restoration method, system and storage medium

Also Published As

Publication number Publication date
CN112541871B (zh) 2024-07-23
CN112541871A (zh) 2021-03-23

Similar Documents

Publication Publication Date Title
WO2022120883A1 (zh) Training method for low-dose image denoising network, and denoising method for low-dose images
WO2021114105A1 (zh) Training method and system for low-dose CT image denoising network
JP2019194906A (ja) Systems and methods for image reconstruction and correction using data and models
Pain et al. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement
CN104182954B (zh) Real-time multi-modality medical image fusion method
CN106846430B (zh) Image reconstruction method
CN112508808A (zh) CT dual-domain joint metal artifact correction method based on a generative adversarial network
CN105361901A (zh) Depth effect correction method and system for a positron emission tomography scanner
US11514621B2 (en) Low-dose image reconstruction method and system based on prior anatomical structure difference
CN111325695B (zh) Low-dose image enhancement method and system based on multiple dose levels, and storage medium
US20220245868A1 (en) System and method for image reconstruction
CN111899315B (zh) Method for reconstructing low-dose images using a multi-scale feature-aware deep network
Chen et al. A new linearized split Bregman iterative algorithm for image reconstruction in sparse-view X-ray computed tomography
Takam et al. Spark architecture for deep learning-based dose optimization in medical imaging
Li et al. Sparse CT reconstruction based on multi-direction anisotropic total variation (MDATV)
CN109741254A (zh) Dictionary training and image super-resolution reconstruction method, system, device and storage medium
CN110874855B (zh) Collaborative imaging method, apparatus, storage medium and collaborative imaging device
Trung et al. Dilated residual convolutional neural networks for low-dose CT image denoising
Cui et al. Image2Points: A 3D Point-based Context Clusters GAN for High-Quality PET Image Reconstruction
WO2022120692A1 (zh) PET image reconstruction method, reconstruction terminal, and computer-readable storage medium
US10347014B2 (en) System and method for image reconstruction
WO2022120694A1 (zh) Training method for low-dose image denoising network, and denoising method for low-dose images
Zhao et al. A fast algorithm for high order total variation minimization based interior tomography
WO2021159234A1 (zh) Image processing method and apparatus, and computer-readable storage medium
Luo et al. AMCNet: attention-based multiscale convolutional network for DCM MRI segmentation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20964816

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20964816

Country of ref document: EP

Kind code of ref document: A1