CN112541871A - Training method of low-dose image denoising network and denoising method of low-dose image - Google Patents

Training method of low-dose image denoising network and denoising method of low-dose image Download PDF

Info

Publication number
CN112541871A
CN112541871A (application number CN202011437368.6A)
Authority
CN
China
Prior art keywords
low
dose image
dose
denoising network
image denoising
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011437368.6A
Other languages
Chinese (zh)
Other versions
CN112541871B (en)
Inventor
郑海荣
梁栋
胡战利
黄振兴
刘新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202011437368.6A priority Critical patent/CN112541871B/en
Priority to PCT/CN2020/136210 priority patent/WO2022120883A1/en
Publication of CN112541871A publication Critical patent/CN112541871A/en
Application granted granted Critical
Publication of CN112541871B publication Critical patent/CN112541871B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10081 - Computed x-ray tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10104 - Positron emission tomography [PET]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10108 - Single photon emission computed tomography [SPECT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30016 - Brain
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30061 - Lung

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a training method for a low-dose image denoising network, a denoising method for low-dose images, a computer device, and a storage medium. The training method comprises the following steps: acquiring a training data set, wherein the training data set comprises a plurality of input parameter sets, each comprising a low-dose image, attributes, and a standard-dose image of an anatomical structure; establishing a low-dose image denoising network comprising an attribute fusion module, a spatial information fusion module, and a generation module; and training the low-dose image denoising network with the training data set to obtain its parameters. Because the training method takes the attributes of the anatomical structure as an input to the network, those attributes are fused into the image reconstruction process; the trained network can therefore adapt to different anatomical structures, which improves robustness and guarantees the quality of the reconstructed image.

Description

Training method of low-dose image denoising network and denoising method of low-dose image
Technical Field
The invention relates to the technical field of image reconstruction, in particular to a training method of a low-dose image denoising network, a denoising method of a low-dose image, computer equipment and a storage medium.
Background
Computed Tomography (CT) is an important imaging technique for obtaining internal structural information of an object nondestructively. It offers high resolution, high sensitivity, and multi-level imaging, is among the most widely installed medical diagnostic imaging equipment in China, and is applied across many fields of clinical examination. However, because CT scanning requires X-rays, the radiation dose delivered by CT draws increasing attention as awareness of the potential hazards of radiation grows. The ALARA principle (As Low As Reasonably Achievable) requires that the radiation dose to a patient be reduced as much as possible while still meeting the needs of clinical diagnosis; yet as the dose decreases, more noise appears during imaging and image quality deteriorates. Developing new low-dose CT imaging methods that preserve imaging quality while reducing harmful radiation therefore has important scientific significance and application prospects in the field of medical diagnosis. Because different anatomical sites differ greatly in structure, existing low-dose CT imaging methods that ignore these anatomical differences exhibit poor robustness.
Disclosure of Invention
In order to solve the defects of the prior art, the invention provides a training method of a low-dose image denoising network, a denoising method of a low-dose image, computer equipment and a storage medium, wherein the attributes of an anatomical structure are fused in the image reconstruction process, so that the robustness of the denoising method and the quality of the reconstructed image are improved.
The specific technical scheme provided by the invention is as follows: a training method for a low-dose image denoising network is provided, comprising the following steps:
acquiring a training dataset comprising a plurality of input parameter sets, each input parameter set comprising a low dose image, attributes, a standard dose image of an anatomical structure;
establishing a low-dose image denoising network, wherein the low-dose image denoising network comprises an attribute fusion module, a plurality of spatial information fusion modules and a generation module which are sequentially cascaded;
and training the low-dose image denoising network by using the training data set to obtain parameters of the low-dose image denoising network and update the low-dose image denoising network.
Further, the attribute fusion module includes a weight prediction unit, a first feature extraction unit, and a first fusion unit, where the weight prediction unit is configured to obtain a weight mask corresponding to an anatomical structure according to an attribute, the first feature extraction unit is configured to extract a feature of the low-dose image, and the first fusion unit is configured to fuse the weight mask and the feature of the low-dose image to obtain a weight feature.
Further, the weight prediction unit comprises a plurality of convolution layers and a plurality of activation functions, and the plurality of convolution layers and the plurality of activation functions are sequentially and alternately cascaded.
Further, the weight prediction unit further includes a splicing layer, and the splicing layer is configured to splice outputs of convolutional layers having the same number of output channels in the plurality of convolutional layers.
Further, the spatial information fusion module includes a second feature extraction unit, a third feature extraction unit, and a second fusion unit, where the second feature extraction unit is configured to extract spatial information of the weighted features, the third feature extraction unit is configured to extract image features of the weighted features, and the second fusion unit is configured to fuse the spatial information and the image features.
Further, the training the low-dose image denoising network by using the training data set to obtain parameters of the low-dose image denoising network and update the low-dose image denoising network includes:
inputting the low-dose images and attributes in the plurality of input parameter sets into the low-dose image denoising network to obtain a plurality of output images;
constructing a loss function according to the plurality of output images and the standard dose images in the plurality of input parameter sets respectively;
and optimizing the loss function to obtain parameters of the low-dose image denoising network and update the low-dose image denoising network.
Further, the loss function is:

loss(θ) = (1/n) Σ_{i=1}^{n} |G(X_i; a_i; θ) − Y_i|

where θ represents the network parameters of the low-dose image denoising network, loss(θ) represents the loss function, n represents the number of input parameter sets in the training data set, G(X_i; a_i; θ) represents the ith output image, and Y_i represents the standard-dose image in the ith input parameter set.
The invention also provides a denoising method for a low-dose image, comprising: inputting the low-dose image to be denoised into the low-dose image denoising network obtained by the training method described above, to obtain the reconstructed low-dose image.
The invention also provides a computer device comprising a memory, a processor and a computer program stored on the memory, the processor executing the computer program to implement the training method as described in any one of the above.
The invention also provides a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement a training method as defined in any one of the above.
The training method provided by the invention takes the attributes of the anatomical structure as an input to the low-dose image denoising network, so that those attributes are fused into the image reconstruction process; the trained network can therefore adapt to different anatomical structures, which improves robustness and guarantees the quality of the reconstructed image.
Drawings
The technical solution and other advantages of the present invention will become apparent from the following detailed description of specific embodiments of the present invention, which is to be read in connection with the accompanying drawings.
FIG. 1 is a flowchart of the training method for a low-dose image denoising network according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of the low-dose image denoising network according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a weight prediction unit according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating step S3 according to the first embodiment of the present invention;
FIGS. 5a-5c are schematic diagrams of a standard dose image, a low dose image, and an output image according to the first embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a training system for a low-dose image denoising network according to the second embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a computer device according to the fourth embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the specific embodiments set forth herein. Rather, these embodiments are provided to explain the principles of the invention and its practical application to thereby enable others skilled in the art to understand the invention for various embodiments and with various modifications as are suited to the particular use contemplated. In the drawings, like reference numerals will be used to refer to like elements throughout.
The training method of the low-dose image denoising network provided by the invention comprises the following steps:
acquiring a training dataset comprising a plurality of input parameter sets, each input parameter set comprising a low dose image, attributes, a standard dose image of an anatomical structure;
establishing a low-dose image denoising network, wherein the low-dose image denoising network comprises an attribute fusion module, a spatial information fusion module and a generation module;
and training the low-dose image denoising network by using the training data set to obtain parameters of the low-dose image denoising network and update the low-dose image denoising network.
The training method provided by the invention takes the attributes of the anatomical structure as an input to the low-dose image denoising network, so that those attributes are fused into the image reconstruction process; the trained network can therefore adapt to different anatomical structures, which improves robustness and guarantees the quality of the reconstructed image.
In the following, a CT image is taken as an example to describe in detail the training method of the low-dose image denoising network, the denoising method of the low-dose image, the computer device, and the storage medium of the present application, through several specific embodiments and with reference to the accompanying drawings. It should be noted that the CT image serves only as an example and does not limit the application field of the present application; the present application may also be applied to other medical imaging fields such as PET and SPECT.
Example one
Referring to fig. 1, the training method of the low-dose image denoising network in the embodiment includes the steps of:
s1, acquiring a training data set, wherein the training data set comprises a plurality of input parameter sets, and each input parameter set comprises a low dose image, an attribute and a standard dose image of the anatomical structure;
s2, establishing a low-dose image denoising network, wherein the low-dose image denoising network comprises an attribute fusion module, a spatial information fusion module and a generation module;
s3, training the low-dose image denoising network by using the training data set, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network.
Specifically, in step S1, the training data set in the present embodiment is:
D = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_n, y_n)},

where n denotes the number of input parameter sets in the training data set, x_i represents the low-dose image in the ith input parameter set, and y_i represents the corresponding standard-dose image. The n low-dose images {x_1, x_2, ..., x_n} comprise low-dose CT images of different anatomical sites, i.e. the attributes of the n low-dose images differ. Among the n low-dose images {x_1, ..., x_n} and the n standard-dose images {y_1, ..., y_n}, the pair x_i and y_i sharing the subscript i represents a low-dose CT image and a standard-dose CT image of the same anatomical site. The different anatomical sites may include the skull, orbit, nasal sinuses, neck, lung cavity, abdomen, pelvic cavity (male), pelvic cavity (female), knee, lumbar spine, and other sites.
It should be noted that the low dose images and the standard dose images in the training data set for training in the present embodiment are selected from sample data sets commonly used in the art, and are not limited herein.
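As an illustration only, a hypothetical PyTorch Dataset holding such paired input parameter sets might look as follows (class and variable names, and the tensor shapes in the comments, are assumptions, not part of the patent):

```python
from torch.utils.data import Dataset

class LowDoseTrainingSet(Dataset):
    """Hypothetical container for D = {(x_1, y_1), ..., (x_n, y_n)}: paired
    low-dose / standard-dose images of the same anatomical site, plus a
    one-hot attribute vector identifying that site."""

    def __init__(self, low_dose, attributes, standard_dose):
        assert len(low_dose) == len(attributes) == len(standard_dose)
        self.low_dose = low_dose            # list of (1, H, W) tensors x_i
        self.attributes = attributes        # list of one-hot vectors a_i
        self.standard_dose = standard_dose  # list of (1, H, W) tensors y_i

    def __len__(self):
        return len(self.low_dose)

    def __getitem__(self, i):
        # one input parameter set: (low-dose image, attribute, standard-dose image)
        return self.low_dose[i], self.attributes[i], self.standard_dose[i]
```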
Referring to fig. 2, the low-dose image denoising network constructed in this embodiment includes an attribute fusion module 1, a plurality of spatial information fusion modules 2, and a generation module 3, which are sequentially cascaded. The attribute fusion module 1 is used for fusing the attributes of the low-dose image and the features of the low-dose image to generate weight features. The plurality of spatial information fusion modules 2 are used for acquiring the spatial information and the image characteristics of the weight characteristics and generating spatial information fusion characteristics according to the spatial information and the image characteristics. The generating module 3 is used for generating a standard dose image according to the spatial information fusion characteristics.
Specifically, the attribute fusion module 1 includes a weight prediction unit 11, a first feature extraction unit, and a first fusion unit 13. The weight prediction unit 11 is configured to generate a weight mask of the anatomical structure according to the attribute of the anatomical structure, the first feature extraction unit is configured to extract a feature of a low-dose image of the anatomical structure, and the first fusion unit 13 is configured to fuse the weight mask with the feature of the low-dose image.
Referring to fig. 3, the weight prediction unit 11 includes a plurality of convolutional layers 111 and a plurality of activation functions 112, alternately cascaded in sequence. The attributes of the anatomical structure are compressed and expanded over the channels by the convolutional layers 111, yielding a weight mask with a predetermined number of channels; after each convolutional layer 111 performs its convolution, the result is nonlinearly processed by the following activation function 112.
In this embodiment, the attributes of the anatomical structures are encoded by one-hot encoding: for each anatomical structure, only the attribute bit corresponding to that structure is 1 and all other bits are 0. For example, if the anatomical structures are the skull, orbit, sinus, neck, and lung cavity, the attribute of the orbit is represented by {0, 1, 0, 0, 0}, and so on. A sketch of this encoding is given below.
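A minimal PyTorch sketch of this encoding, using the five-site example above (the site list and its ordering are illustrative):

```python
import torch
import torch.nn.functional as F

sites = ["skull", "orbit", "sinus", "neck", "lung"]  # the five-site example above
attr = F.one_hot(torch.tensor(sites.index("orbit")),
                 num_classes=len(sites)).float()
print(attr)  # tensor([0., 1., 0., 0., 0.])
```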
In order to retain more context information, the weight prediction unit 11 in the present embodiment further includes a concatenation layer 113. The splicing layer 113 is used to splice data output from the convolutional layers 111 having the same number of channels among the plurality of convolutional layers 111.
Fig. 3 exemplarily shows that the weight prediction unit 11 includes 7 convolutional layers 111, 7 activation functions 112, and 2 concatenation layers 113, and the parameter settings of the weight prediction unit 11 are shown in the following table:
table-weight prediction unit parameters
Unit cell Convolution kernel Number of input channels Number of output channels
The first convolutional layer 1x1 10 64
The second convolution layer 1x1 64 32
The third convolutional layer 1x1 32 16
The fourth convolution layer 1x1 16 32
The fifth convolutional layer 1x1 64 64
The sixth convolutional layer 1x1 128 64
The seventh convolutional layer 1x1 64 64
The number of output channels of the first convolutional layer 111 and of the fifth convolutional layer 111 are both 64, so a concatenation layer 113 is cascaded between the fifth activation function 112 and the sixth convolutional layer 111. The concatenation layer 113 could use any of several splicing methods; to reduce computational complexity, this embodiment uses the simplest image concatenation. For example, if the output of the first activation function 112 is 512 × 512 × 64 and the output of the fifth activation function 112 is also 512 × 512 × 64, the concatenated result is 512 × 512 × 128. Similarly, the numbers of output channels of the second and fourth convolutional layers 111 are both 32, and a concatenation layer 113 is cascaded between the fourth activation function 112 and the fifth convolutional layer 111. The first through sixth activation functions 112 are ReLU functions and the seventh activation function 112 is a Sigmoid function; the unit finally generates a weight mask with 64 channels. A PyTorch sketch of this unit follows.
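The following sketch reproduces the layer settings of Table 1. The broadcasting of the one-hot attribute over the spatial dimensions before the first 1x1 convolution is an assumption (the example output sizes above suggest it), and all names are illustrative:

```python
import torch
import torch.nn as nn

class WeightPredictionUnit(nn.Module):
    """Sketch of the weight prediction unit of Table 1: seven 1x1 convolutions,
    ReLU activations (Sigmoid on the last), and two concatenation skips."""

    def __init__(self, num_attributes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(num_attributes, 64, kernel_size=1)
        self.conv2 = nn.Conv2d(64, 32, kernel_size=1)
        self.conv3 = nn.Conv2d(32, 16, kernel_size=1)
        self.conv4 = nn.Conv2d(16, 32, kernel_size=1)
        self.conv5 = nn.Conv2d(64, 64, kernel_size=1)   # fed by concat(conv2, conv4)
        self.conv6 = nn.Conv2d(128, 64, kernel_size=1)  # fed by concat(conv1, conv5)
        self.conv7 = nn.Conv2d(64, 64, kernel_size=1)
        self.relu = nn.ReLU()

    def forward(self, a):
        f1 = self.relu(self.conv1(a))
        f2 = self.relu(self.conv2(f1))
        f3 = self.relu(self.conv3(f2))
        f4 = self.relu(self.conv4(f3))
        f5 = self.relu(self.conv5(torch.cat([f2, f4], dim=1)))  # 32 + 32 -> 64 in
        f6 = self.relu(self.conv6(torch.cat([f1, f5], dim=1)))  # 64 + 64 -> 128 in
        return torch.sigmoid(self.conv7(f6))                    # 64-channel weight mask

# The one-hot attribute is assumed to be broadcast to the image's spatial size:
a = torch.zeros(1, 10, 64, 64)    # 64 x 64 here only to keep the example light
a[:, 1] = 1.0                     # e.g. "orbit"
mask = WeightPredictionUnit()(a)  # -> shape (1, 64, 64, 64)
```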
Referring again to fig. 2, the first feature extraction unit includes the convolutional layer 12, the convolutional kernel of the convolutional layer 12 has a size of 3 × 3, the number of input channels is 1, the number of output channels is 64, and features of the low-dose image are extracted by the convolutional layer 12.
The first fusion unit 13 includes a multiplier 131, a concatenation layer 132, and a convolutional layer 133. The multiplier 131 performs dot multiplication of the weight mask with the features of the low-dose image to obtain features carrying attribute information, and the concatenation layer 132 concatenates the features carrying attribute information with the features of the low-dose image, better retaining the original image information and avoiding its loss.
The convolution kernel of the convolutional layer 133 is 3 × 3 in size, with 128 input channels and 64 output channels; the convolutional layer 133 convolves the output of the concatenation layer 132 to obtain the weight features in which the weight mask is fused with the features of the low-dose image, as in the sketch below.
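A sketch of the first feature extraction unit and first fusion unit under these settings (a minimal PyTorch interpretation; names are illustrative):

```python
import torch
import torch.nn as nn

class AttributeFusion(nn.Module):
    """Sketch of the first feature extraction unit (conv layer 12) and the
    first fusion unit 13: multiplier 131 -> concatenation 132 -> conv 133."""

    def __init__(self):
        super().__init__()
        self.extract = nn.Conv2d(1, 64, kernel_size=3, padding=1)  # conv layer 12
        self.fuse = nn.Conv2d(128, 64, kernel_size=3, padding=1)   # conv layer 133

    def forward(self, x, mask):
        feat = self.extract(x)   # features of the low-dose image
        weighted = feat * mask   # dot multiplication with the weight mask
        # concatenate and reduce 128 -> 64 channels to obtain the weight features
        return self.fuse(torch.cat([weighted, feat], dim=1))
```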
Each spatial information fusion module 2 includes a second feature extraction unit 21, a third feature extraction unit 22, and a second fusion unit 23. The second feature extraction unit 21 is used for extracting spatial information of the weight features, the third feature extraction unit 22 is used for extracting image features of the weight features, and the second fusion unit 23 is used for fusing the spatial information with the image features.
Specifically, the second feature extraction unit 21 includes two convolutional layers 211 and two activation functions 212, alternately cascaded in sequence, and extracts the spatial information of the weight features through the two convolutional layers 211. After each convolutional layer 211 performs its convolution, the result is nonlinearly processed by the following activation function 212. The first activation function 212 is a ReLU function and the second is a Sigmoid function; the Sigmoid constrains the output of the second feature extraction unit 21 to lie between 0 and 1.
The third feature extraction unit 22 includes two convolutional layers 221 with an activation function 222 connected between them, and extracts the image features of the weight features through the two convolutional layers 221. After the first convolutional layer 221 performs its convolution, the result is nonlinearly processed by the activation function 222, which is a ReLU function.
The second fusion unit 23 includes a multiplier 231, a concatenation layer 232, a convolutional layer 233, and an adder 234. The multiplier 231 performs dot multiplication of the spatial information with the image features output by the third feature extraction unit 22 to obtain features carrying the spatial information, and the concatenation layer 232 concatenates those features with the image features output by the third feature extraction unit 22, better retaining the original image information and avoiding its loss. The convolutional layer 233 convolves the concatenated data, and the adder 234 fuses the output of the convolutional layer 233 with the data input to the multiplier 231, finally yielding image features fused with the spatial information. A sketch of one such module follows.
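A PyTorch sketch of one spatial information fusion module. The channel counts come from Table 2, which did not survive extraction, so 64 channels and 3x3 kernels are assumptions:

```python
import torch
import torch.nn as nn

class SpatialInfoFusion(nn.Module):
    """Sketch of one spatial information fusion module 2 (channel counts
    assumed: 64 throughout, 3x3 kernels)."""

    def __init__(self, ch: int = 64):
        super().__init__()
        # second feature extraction unit 21: spatial information in [0, 1]
        self.spatial = nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1), nn.Sigmoid())
        # third feature extraction unit 22: image features (conv-ReLU-conv)
        self.feature = nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1))
        # second fusion unit 23: multiplier, concatenation, convolution, adder
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1)

    def forward(self, x):
        s = self.spatial(x)  # spatial information
        f = self.feature(x)  # image features
        # multiply, concatenate, convolve, then add the module input (adder 234)
        return self.fuse(torch.cat([s * f, f], dim=1)) + x
```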
In order to better capture the characteristics of the low-dose image, this embodiment constructs a deeper network model by cascading a plurality of spatial information fusion modules 2; preferably, the number of spatial information fusion modules 2 in this embodiment is 15. It should be noted that fig. 2 shows only an example in which the low-dose image denoising network includes 3 spatial information fusion modules 2, and is not intended to limit their number.
The parameters of the spatial information fusion module 2 in this embodiment are given in Table 2, which is likewise only an example; the specific parameters of the spatial information fusion module 2 may be set according to actual needs.

Table 2: Spatial information fusion module parameters (table content not recoverable from the extracted text)
The spatial information fusion features are obtained after processing by the plurality of spatial information fusion modules 2, and finally a standard-dose image is generated by the generation module 3. The generation module 3 includes an adder 31 and a convolutional layer 32: the adder 31 fuses the output of the last spatial information fusion module 2 with the output of the attribute fusion module 1, better retaining the original image information and avoiding its loss, and the convolutional layer 32 reconstructs the fused data to obtain the standard-dose image. A sketch of the full cascade follows.
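Putting the pieces together, a sketch of the full network, reusing the module sketches above; the wiring is an interpretation of fig. 2, not a verbatim implementation:

```python
import torch.nn as nn

class LowDoseDenoisingNet(nn.Module):
    """Sketch of the cascade: attribute fusion -> N spatial information
    fusion modules -> generation module (adder 31 + reconstruction conv 32).
    N = 15 in the preferred embodiment. Requires the WeightPredictionUnit,
    AttributeFusion, and SpatialInfoFusion sketches defined earlier."""

    def __init__(self, num_modules: int = 15):
        super().__init__()
        self.weight_pred = WeightPredictionUnit()
        self.attr_fusion = AttributeFusion()
        self.body = nn.Sequential(*[SpatialInfoFusion() for _ in range(num_modules)])
        self.reconstruct = nn.Conv2d(64, 1, kernel_size=3, padding=1)  # conv layer 32

    def forward(self, x, a):
        w = self.attr_fusion(x, self.weight_pred(a))  # weight features
        return self.reconstruct(self.body(w) + w)     # adder 31, then reconstruction
```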
Referring to fig. 4, in step S3, training the low-dose image denoising network with the training data set to obtain parameters of the low-dose image denoising network and update the low-dose image denoising network specifically includes the steps of:
s31, inputting the low-dose images and the attributes in the multiple input parameter groups into a low-dose image denoising network to obtain multiple output images;
s32, constructing a loss function according to the plurality of output images and the standard dose images in the plurality of input parameter sets respectively;
s33, optimizing the loss function, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network.
Specifically, in step S32, the formula for constructing the loss function from the plurality of output images and the standard dose images in the plurality of input parameter sets respectively is as follows:
loss(θ) = (1/n) Σ_{i=1}^{n} |G(X_i; a_i; θ) − Y_i|

where θ represents the network parameters of the low-dose image denoising network, loss(θ) represents the loss function, n represents the number of input parameter sets in the training data set, G(X_i; a_i; θ) represents the ith output image, and Y_i represents the standard-dose image in the ith input parameter set.
This embodiment takes the absolute-value difference as the loss function, which accentuates the differences between image regions and makes the boundaries between regions in the image clearer. A minimal sketch of this loss is given below.
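Assuming PyTorch, the loss above amounts to a mean absolute error over the batch; the tensors here are stand-ins for the network outputs and targets:

```python
import torch

outputs = torch.rand(4, 1, 512, 512)  # stand-ins for G(X_i; a_i; θ)
targets = torch.rand(4, 1, 512, 512)  # stand-ins for Y_i
loss = torch.mean(torch.abs(outputs - targets))  # same as F.l1_loss(outputs, targets)
```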
In step S33, the minimum of the loss function is optimized to obtain the optimized network parameters. An Adam optimization algorithm is used, whose iteration proceeds as follows:

Compute the gradient: g = ∇_θ loss(θ);
Biased first-moment estimate: s(k+1) = ρ_1 · s(k) + (1 − ρ_1) · g;
Biased second-moment estimate: r(k+1) = ρ_2 · r(k) + (1 − ρ_2) · g ⊙ g;
Corrected first moment: ŝ = s(k+1) / (1 − ρ_1^(k+1));
Corrected second moment: r̂ = r(k+1) / (1 − ρ_2^(k+1));
Parameter update value: Δθ = −ε · ŝ / (√r̂ + δ);
Update the network parameters: θ ← θ + Δθ.

After each update, whether the number of iterations equals the preset termination count is judged: if so, the updated network parameter θ is output; if not, the next iteration proceeds until the count reaches the preset termination number. The number of iterations may be set according to actual needs and is not limited here.

In the above optimization algorithm, the initial conditions of the first iteration are the initial network parameters θ, k = 0, s(0) = 0, and r(0) = 0; ∇ denotes the gradient operator; ρ_1 is 0.9 and ρ_2 has a default value of 0.999; k is the iteration count; ε denotes the learning rate, with default value 0.0001; δ is a small constant with default value 10^(−8).
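For illustration, these defaults map directly onto PyTorch's torch.optim.Adam; the single-layer network and random tensors below are placeholders, not the patent's architecture:

```python
import torch
import torch.nn as nn

net = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # placeholder for the denoising network
optimizer = torch.optim.Adam(net.parameters(),
                             lr=1e-4,             # ε, learning rate
                             betas=(0.9, 0.999),  # ρ_1, ρ_2
                             eps=1e-8)            # δ

x, y = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
for k in range(100):  # preset termination iteration count
    optimizer.zero_grad()
    loss = torch.mean(torch.abs(net(x) - y))  # the L1 loss from above
    loss.backward()
    optimizer.step()
```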
The loss function above is constructed from the absolute-value errors between the plurality of output images and the standard-dose images in the plurality of input parameter sets, but it may also be constructed in other ways; for example, it may be constructed from the mean square errors between the plurality of output images and those standard-dose images.
In step S33, an optimization method matching the actual application may be selected to optimize the loss function: for example, when the low-dose image denoising network in this embodiment is applied to supervised learning, the Adam optimization method is adopted; when it is applied to a generative adversarial model, an SGD optimization method is adopted.
The updated low-dose image denoising network is obtained after the above optimization. Because the attributes of the anatomical structure serve as an input to the network, those attributes are fused into the image reconstruction process, so the trained network is applicable to different anatomical structures, improving robustness while guaranteeing the quality of the reconstructed image. Referring to figs. 5a to 5c, which show a standard-dose image, a low-dose image, and an output image of this embodiment, the output image reconstructed with the low-dose image denoising network of this embodiment retains image details well and has high definition.
Example two
Referring to fig. 6, the present embodiment provides a training system for a low-dose image denoising network, where the training system includes a training data set acquisition module 100, a network construction module 101, and a training module 102.
The training data set acquisition module 100 is configured to acquire a training data set, wherein the training data set includes a plurality of input parameter sets, each input parameter set including a low dose image, an attribute, and a standard dose image of an anatomical structure. The training data set in this embodiment is:
D = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_n, y_n)},

where n denotes the number of input parameter sets in the training data set, x_i represents the low-dose image in the ith input parameter set, and y_i represents the corresponding standard-dose image. The n low-dose images {x_1, x_2, ..., x_n} comprise low-dose CT images of different anatomical sites, i.e. the attributes of the n low-dose images differ. Among the n low-dose images {x_1, ..., x_n} and the n standard-dose images {y_1, ..., y_n}, the pair x_i and y_i sharing the subscript i represents a low-dose CT image and a standard-dose CT image of the same anatomical site. The different anatomical sites may include the skull, orbit, nasal sinuses, neck, lung cavity, abdomen, pelvic cavity (male), pelvic cavity (female), knee, lumbar spine, and other sites.
It should be noted that the low dose images and the standard dose images in the training data set for training in the present embodiment are selected from sample data sets commonly used in the art, and are not limited herein.
The network construction module 101 is used for establishing a low-dose image denoising network, and the low-dose image denoising network includes an attribute fusion module, a spatial information fusion module, and a generation module.
The training module 102 is configured to train the low-dose image denoising network by using a training data set, obtain parameters of the low-dose image denoising network, and update the low-dose image denoising network.
EXAMPLE III
The embodiment provides a denoising method of a low-dose image, which comprises the following steps: and inputting the low-dose image to be denoised into a low-dose image denoising network obtained by using the training method of the low-dose image denoising network described in the first embodiment to obtain a reconstructed low-dose image.
It should be noted here that the denoising method in this embodiment includes two implementation manners. In the first, the low-dose image denoising network already trained in the first embodiment serves as the denoising network: the low-dose image to be denoised is input into it to obtain the reconstructed low-dose image. In the second, the low-dose image denoising network is first trained using the training method described in the first embodiment, and the low-dose image to be denoised is then input into the trained network to obtain the reconstructed low-dose image. A minimal inference sketch is given below.
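A minimal inference sketch under the assumptions of the earlier module sketches; the checkpoint path and the random input are hypothetical:

```python
import torch

net = LowDoseDenoisingNet()                      # sketch class from Example one
net.load_state_dict(torch.load("denoiser.pth"))  # assumed checkpoint path
net.eval()

x = torch.rand(1, 1, 512, 512)  # low-dose image to be denoised
a = torch.zeros(1, 10, 512, 512)  # one-hot attribute, broadcast spatially
a[:, 1] = 1.0                     # e.g. "orbit"
with torch.no_grad():
    reconstructed = net(x, a)     # reconstructed low-dose image
```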
The denoising method of the embodiment can be suitable for different anatomical structures, and can better extract the details of the original image, so that the reconstructed image is clearer.
Example four
Referring to fig. 7, the present embodiment provides a computer device, which includes a processor 200 and a memory 201, and a computer program stored on the memory 201, wherein the processor 200 executes the computer program to implement the training method according to the first embodiment.
The Memory 201 may include a Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory.
The processor 200 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the training method according to the first embodiment may be carried out by hardware integrated logic circuits or by software instructions in the processor 200. The processor 200 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc., and may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The memory 201 is used for storing a computer program, and the processor 200 executes the computer program to implement the training method according to the first embodiment after receiving the execution instruction.
The embodiment also provides a computer storage medium, a computer program is stored in the computer storage medium, and the processor 200 is configured to read and execute the computer program stored in the computer storage medium to implement the training method according to the first embodiment.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer storage medium or transmitted from one computer storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer storage media may be any available media that can be accessed by a computer or a data storage device, such as a server, data center, etc., that incorporates one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is directed to embodiments of the present application and it is noted that numerous modifications and adaptations may be made by those skilled in the art without departing from the principles of the present application and are intended to be within the scope of the present application.

Claims (10)

1. A training method of a low-dose image denoising network is characterized by comprising the following steps:
acquiring a training dataset comprising a plurality of input parameter sets, each input parameter set comprising a low dose image, attributes, a standard dose image of an anatomical structure;
establishing a low-dose image denoising network, wherein the low-dose image denoising network comprises an attribute fusion module, a plurality of spatial information fusion modules and a generation module which are sequentially cascaded;
and training the low-dose image denoising network by using the training data set to obtain parameters of the low-dose image denoising network and update the low-dose image denoising network.
2. The training method according to claim 1, wherein the attribute fusion module includes a weight prediction unit, a first feature extraction unit and a first fusion unit, the weight prediction unit is configured to obtain a weight mask corresponding to the anatomical structure according to the attributes, the first feature extraction unit is configured to extract features of the low-dose image, and the first fusion unit is configured to fuse the weight mask and the features of the low-dose image to obtain weighted features.
3. The training method according to claim 2, wherein the weight prediction unit includes a plurality of convolutional layers and a plurality of activation functions, and the plurality of convolutional layers and the plurality of activation functions are alternately cascaded in sequence.
4. The training method of claim 3, wherein the weight prediction unit further comprises a stitching layer for stitching outputs of convolutional layers having the same number of output channels in the plurality of convolutional layers.
5. The training method according to claim 3, wherein the spatial information fusion module includes a second feature extraction unit, a third feature extraction unit and a second fusion unit, the second feature extraction unit is configured to extract spatial information of the weighted features, the third feature extraction unit is configured to extract image features of the weighted features, and the second fusion unit is configured to fuse the spatial information with the image features.
6. The training method according to any one of claims 1 to 5, wherein the training the low-dose image denoising network by using the training data set, obtaining parameters of the low-dose image denoising network, and updating the low-dose image denoising network comprises:
inputting the low-dose images and attributes in the plurality of input parameter sets into the low-dose image denoising network to obtain a plurality of output images;
constructing a loss function according to the plurality of output images and the standard dose images in the plurality of input parameter sets respectively;
and optimizing the loss function to obtain parameters of the low-dose image denoising network and update the low-dose image denoising network.
7. Training method according to claim 6, characterized in that the loss function is:
loss(θ) = (1/n) Σ_{i=1}^{n} |G(X_i; a_i; θ) − Y_i|

wherein θ represents a network parameter of the low-dose image denoising network, loss(θ) represents the loss function, n represents the number of input parameter sets in the training data set, G(X_i; a_i; θ) represents the ith output image, and Y_i represents the standard dose image in the ith input parameter set.
8. A denoising method of a low-dose image, the denoising method comprising: inputting the low-dose image to be denoised into a low-dose image denoising network obtained by using the training method of the low-dose image denoising network according to any one of claims 1-7, and obtaining a reconstructed low-dose image.
9. A computer device comprising a memory, a processor and a computer program stored on the memory, wherein the processor executes the computer program to implement the training method of any one of claims 1 to 7.
10. A computer readable storage medium having computer instructions stored thereon, wherein the computer instructions, when executed by a processor, implement the training method of any one of claims 1 to 7.
CN202011437368.6A 2020-12-07 2020-12-07 Training method of low-dose image denoising network and denoising method of low-dose image Active CN112541871B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011437368.6A CN112541871B (en) 2020-12-07 2020-12-07 Training method of low-dose image denoising network and denoising method of low-dose image
PCT/CN2020/136210 WO2022120883A1 (en) 2020-12-07 2020-12-14 Training method for low-dose image denoising network and denoising method for low-dose image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011437368.6A CN112541871B (en) 2020-12-07 2020-12-07 Training method of low-dose image denoising network and denoising method of low-dose image

Publications (2)

Publication Number Publication Date
CN112541871A (en) 2021-03-23
CN112541871B CN112541871B (en) 2024-07-23

Family

ID=75019870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011437368.6A Active CN112541871B (en) 2020-12-07 2020-12-07 Training method of low-dose image denoising network and denoising method of low-dose image

Country Status (2)

Country Link
CN (1) CN112541871B (en)
WO (1) WO2022120883A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256752A (en) * 2021-06-07 2021-08-13 太原理工大学 Low-dose CT reconstruction method based on dual-domain interleaved network
CN113298900A (en) * 2021-04-30 2021-08-24 北京航空航天大学 Processing method based on low signal-to-noise ratio PET image
WO2024066049A1 (en) * 2022-09-26 2024-04-04 深圳先进技术研究院 Pet image denoising method, terminal device, and readable storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116385319B (en) * 2023-05-29 2023-08-15 中国人民解放军国防科技大学 Radar image speckle filtering method and device based on scene cognition
CN117541481B (en) * 2024-01-09 2024-04-05 广东海洋大学 Low-dose CT image restoration method, system and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110992290A (en) * 2019-12-09 2020-04-10 深圳先进技术研究院 Training method and system for low-dose CT image denoising network
CN111179366A (en) * 2019-12-18 2020-05-19 深圳先进技术研究院 Low-dose image reconstruction method and system based on anatomical difference prior
US20200349449A1 (en) * 2018-01-24 2020-11-05 Rensselaer Polytechnic Institute 3-d convolutional autoencoder for low-dose ct via transfer learning from a 2-d trained network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019019199A1 (en) * 2017-07-28 2019-01-31 Shenzhen United Imaging Healthcare Co., Ltd. System and method for image conversion
CN111325686B (en) * 2020-02-11 2021-03-30 之江实验室 Low-dose PET three-dimensional reconstruction method based on deep learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200349449A1 (en) * 2018-01-24 2020-11-05 Rensselaer Polytechnic Institute 3-d convolutional autoencoder for low-dose ct via transfer learning from a 2-d trained network
CN110992290A (en) * 2019-12-09 2020-04-10 深圳先进技术研究院 Training method and system for low-dose CT image denoising network
CN111179366A (en) * 2019-12-18 2020-05-19 深圳先进技术研究院 Low-dose image reconstruction method and system based on anatomical difference prior

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298900A (en) * 2021-04-30 2021-08-24 北京航空航天大学 Processing method based on low signal-to-noise ratio PET image
CN113298900B (en) * 2021-04-30 2022-10-25 北京航空航天大学 Processing method based on low signal-to-noise ratio PET image
CN113256752A (en) * 2021-06-07 2021-08-13 太原理工大学 Low-dose CT reconstruction method based on dual-domain interleaved network
WO2024066049A1 (en) * 2022-09-26 2024-04-04 深圳先进技术研究院 Pet image denoising method, terminal device, and readable storage medium

Also Published As

Publication number Publication date
CN112541871B (en) 2024-07-23
WO2022120883A1 (en) 2022-06-16

Similar Documents

Publication Publication Date Title
CN112541871B (en) Training method of low-dose image denoising network and denoising method of low-dose image
CN107527359B (en) PET image reconstruction method and PET imaging equipment
US12039637B2 (en) Low dose Sinogram denoising and PET image reconstruction method based on teacher-student generator
CN110992290B (en) Training method and system for low-dose CT image denoising network
Liu et al. Deep learning with noise‐to‐noise training for denoising in SPECT myocardial perfusion imaging
CN111179366B (en) Anatomical structure difference priori based low-dose image reconstruction method and system
WO2021253722A1 (en) Medical image reconstruction technology method and apparatus, storage medium and electronic device
CN111340903B (en) Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image
US11514621B2 (en) Low-dose image reconstruction method and system based on prior anatomical structure difference
CN111899315B (en) Method for reconstructing low-dose image by using multi-scale feature perception depth network
Huang et al. U‐net‐based deformation vector field estimation for motion‐compensated 4D‐CBCT reconstruction
Takam et al. Spark architecture for deep learning-based dose optimization in medical imaging
CN111325695A (en) Low-dose image enhancement method and system based on multi-dose grade and storage medium
CN109741254A (en) Dictionary training and Image Super-resolution Reconstruction method, system, equipment and storage medium
CN110874855B (en) Collaborative imaging method and device, storage medium and collaborative imaging equipment
CN111489406A (en) Training and generating method, device and storage medium for generating high-energy CT image model
CN113989110A (en) Lung image registration method and device, computer equipment and storage medium
Ye et al. Momentum-net for low-dose CT image reconstruction
CN111626964B (en) Optimization method and optimization device for scanned image and medical scanning system
CN112488951B (en) Training method of low-dose image denoising network and denoising method of low-dose image
CN117541481B (en) Low-dose CT image restoration method, system and storage medium
CN112509089B (en) CT local reconstruction method based on truncated data extrapolation network
US20240029324A1 (en) Method for image reconstruction, computer device and storage medium
CN112634147B (en) PET image noise reduction method, system, device and medium for self-supervision learning
US20230026961A1 (en) Low-dimensional manifold constrained disentanglement network for metal artifact reduction

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant