CN114219820A - Neural network generation method, denoising method and device - Google Patents

Neural network generation method, denoising method and device

Info

Publication number
CN114219820A
CN114219820A (application CN202111491312.3A)
Authority
CN
China
Prior art keywords
ray
neural network
noise
images
net neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111491312.3A
Other languages
Chinese (zh)
Inventor
翁梓乔
程志威
姚青松
周少华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Industrial Park Zhizai Tianxia Technology Co ltd
Original Assignee
Suzhou Industrial Park Zhizai Tianxia Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Industrial Park Zhizai Tianxia Technology Co ltd filed Critical Suzhou Industrial Park Zhizai Tianxia Technology Co ltd
Priority to CN202111491312.3A priority Critical patent/CN114219820A/en
Publication of CN114219820A publication Critical patent/CN114219820A/en
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/20 - Image enhancement or restoration by the use of local operators
    • G06T5/70
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10081 - Computed x-ray tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10116 - X-ray image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20024 - Filtering details
    • G06T2207/20028 - Bilateral filtering

Abstract

The invention discloses a neural network generation method, a denoising method, and a device. The generation method comprises the following steps: simulating medical imaging based on Geant4 to obtain N noisy CT images and N sharp CT images in one-to-one correspondence with them, wherein N is a natural number and N ≥ 2; and training a U-Net neural network using the N noisy CT images and the N sharp CT images, thereby generating a neural network capable of denoising CT images.

Description

Neural network generation method, denoising method and device
Technical Field
The invention relates to the technical field of X-ray imaging, and in particular to a neural network generation method, a denoising method, and a device.
Background
Medical images, including X-ray, magnetic resonance, computed tomography, and ultrasound images, are susceptible to noise. The causes vary, ranging from the image acquisition technique used to attempts to reduce the patient's exposure to radiation. As the radiation dose and the acquisition time are reduced, however, the noise in X-ray imaging increases. Excessive noise seriously degrades the visual quality of the image, making it difficult for the physician to observe useful detail and affecting the final diagnosis.
Disclosure of Invention
In view of the above, the main objective of the present invention is to provide a neural network generation method, a denoising method and a device thereof.
In order to achieve this objective, the technical solution of the invention is realized as follows. A method of generating a neural network comprises the following steps: simulating medical imaging based on Geant4 to obtain N noisy x-ray images I_α and N sharp x-ray images I_gt in one-to-one correspondence with the N noisy x-ray images I_α, wherein N is a natural number and N ≥ 2; creating a U-Net neural network, wherein the loss function of the U-Net neural network is:

Loss(U(Input), I_gt) = EdgeLoss(U(Input), I_gt) + NormLoss(U(Input), I_gt)

EdgeLoss(U(Input), I_gt) = L1(Laplacian(U(Input)), Laplacian(I_gt))

[The NormLoss formula appears in the source only as an equation image and is not recoverable.]

wherein L1 is the L1 loss function, U is the U-Net neural network, and Laplacian is the Laplacian operator used to extract edge information; and training the U-Net neural network using the N noisy x-ray images I_α and the N sharp x-ray images I_gt.
As an improvement of the embodiment of the invention, "simulating medical imaging based on Geant4 to obtain N noisy x-ray images I_α" specifically comprises: simulating medical imaging based on Geant4 to obtain N noisy x-ray images I_α, wherein a CT phantom is generated from MDCT and preset CT data when performing the simulated medical imaging.
As an improvement of the embodiment of the invention, during the simulated medical imaging, Num1 noise images to be processed are obtained for the same three-dimensional CT phantom, and Num2 of them are superimposed to obtain a sharp x-ray image I_gt; one noise image to be processed is selected as the noisy x-ray image I_α, thereby obtaining a noisy x-ray image I_α and a sharp x-ray image I_gt in one-to-one correspondence, wherein Num1 and Num2 are natural numbers and Num2 ≤ Num1.
As an improvement of an embodiment of the present invention, the doses corresponding to the N sharp x-ray images I_gt are not all the same.
As an improvement of an embodiment of the invention, "training the U-Net neural network using the N noisy x-ray images I_α and the N sharp x-ray images I_gt" specifically comprises: for each noisy x-ray image I_α, performing the following processing:

Input = concat( I_α , F_G ∗ ( I_α − F_B(I_α) ) )

[Reconstructed from the prose description; the original formulas appear in the source only as equation images.] Here F_G is a Gaussian filter with a convolution kernel size of 41, F_B is bilateral filtering, and ∗ is a convolution operation. After obtaining the input image Input for each I_α, the N input images Input and the N sharp x-ray images I_gt are used to train the U-Net neural network.
The embodiment of the invention also provides a device for generating a neural network, comprising the following modules: a first data acquisition module for simulating medical imaging based on Geant4 to obtain N noisy x-ray images I_α and N sharp x-ray images I_gt in one-to-one correspondence with them, wherein N is a natural number and N ≥ 2; a first neural network creation module configured to create a U-Net neural network, wherein the loss function of the U-Net neural network is:

Loss(U(Input), I_gt) = EdgeLoss(U(Input), I_gt) + NormLoss(U(Input), I_gt)

EdgeLoss(U(Input), I_gt) = L1(Laplacian(U(Input)), Laplacian(I_gt))

[The NormLoss formula appears in the source only as an equation image and is not recoverable.]

wherein L1 is the L1 loss function, U is the U-Net neural network, and Laplacian is the Laplacian operator used to extract edge information; and a first training module for training the U-Net neural network using the N noisy x-ray images I_α and the N sharp x-ray images I_gt.
The embodiment of the invention also provides a CT image denoising method, comprising the following steps: executing the above generation method to generate the U-Net neural network; and inputting the CT image to be processed into the U-Net neural network to obtain a denoised CT image.
The embodiment of the invention also provides a method for generating a neural network, comprising the following steps: simulating medical imaging based on Geant4 to obtain M noisy x-ray videos V_α and M sharp x-ray videos V_α^gt in one-to-one correspondence with them, wherein each noisy x-ray video V_α contains L consecutive noisy x-ray video frames, each sharp x-ray video V_α^gt likewise contains L consecutive video frames, M and L are natural numbers, M ≥ 2, and L ≥ 2; creating a U-Net neural network, wherein the loss function of the U-Net neural network is Loss(U(Input), V_α^gt), with

Input = concat( V_α , F_G ∗ ( V_α − F_B(V_α) ) )

[Reconstructed from the prose description; the original formulas appear in the source only as equation images.] Here F_G is a Gaussian filter with a convolution kernel size of 41, F_B is bilateral filtering, ∗ is a convolution operation, and U is the U-Net neural network; and training the U-Net neural network using the M Input videos and the M sharp x-ray videos V_α^gt.
The embodiment of the invention also provides a device for generating a neural network, comprising the following modules: a second data acquisition module for simulating medical imaging based on Geant4 to obtain M noisy x-ray videos V_α and M sharp x-ray videos V_α^gt in one-to-one correspondence with them, wherein each noisy x-ray video V_α contains L consecutive noisy x-ray video frames, each sharp x-ray video V_α^gt likewise contains L consecutive video frames, M and L are natural numbers, M ≥ 2, and L ≥ 2; a second neural network creation module for creating a U-Net neural network, wherein the loss function of the U-Net neural network is Loss(U(Input), V_α^gt), with

Input = concat( V_α , F_G ∗ ( V_α − F_B(V_α) ) )

[Reconstructed from the prose description; the original formulas appear in the source only as equation images.] Here F_G is a Gaussian filter with a convolution kernel size of 41, F_B is bilateral filtering, ∗ is a convolution operation, and U is the U-Net neural network; and a second training module for training the U-Net neural network using the M Input videos and the M sharp x-ray videos V_α^gt.
The embodiment of the invention also provides a CT video denoising method, comprising the following steps: executing the above generation method to generate the U-Net neural network; and inputting the CT video to be processed into the U-Net neural network to obtain a denoised CT video.
The technical solution provided by the embodiments of the invention has the following advantages. The embodiments disclose a neural network generation method, a denoising method, and a device, wherein the generation method comprises: simulating medical imaging based on Geant4 to obtain N noisy CT images and N sharp CT images in one-to-one correspondence with them, wherein N is a natural number and N ≥ 2; and training a U-Net neural network using the N noisy CT images and the N sharp CT images, thereby generating a neural network capable of denoising CT images.
Drawings
Fig. 1 is a schematic flow chart of a method for generating a neural network according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the embodiments shown in the drawings. The invention is not limited to these embodiments; structural, methodological, or functional changes made by those of ordinary skill in the art based on these embodiments are included within the scope of the invention.
The following description and the drawings sufficiently illustrate specific embodiments herein to enable those skilled in the art to practice them. Portions and features of some embodiments may be included in or substituted for those of others. The scope of the embodiments herein includes the full ambit of the claims, as well as all available equivalents of the claims. The terms "first," "second," and the like, herein are used solely to distinguish one element from another without requiring or implying any actual such relationship or order between such elements. In practice, a first element can also be referred to as a second element, and vice versa. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a structure, apparatus, or device that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such structure, apparatus, or device. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a structure, device or apparatus that comprises the element. The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The terms "longitudinal," "lateral," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like herein, as used herein, are defined as orientations or positional relationships based on the orientation or positional relationship shown in the drawings, and are used for convenience in describing and simplifying the description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the present invention. In the description herein, unless otherwise specified and limited, the terms "mounted," "connected," and "connected" are to be construed broadly, and may include, for example, mechanical or electrical connections, communications between two elements, direct connections, and indirect connections via intermediary media, where the specific meaning of the terms is understood by those skilled in the art as appropriate.
An embodiment of the present invention provides a method for generating a neural network, as shown in Fig. 1, comprising the following steps:
Step 101: simulating medical imaging based on Geant4 to obtain N noisy CT (Computed Tomography) images I_α and N sharp x-ray images I_gt in one-to-one correspondence with the N noisy x-ray images I_α, wherein N is a natural number and N ≥ 2. Here, Geant4 (Geometry And Tracking 4) is a Monte Carlo software package developed at CERN (the European Organization for Nuclear Research) using C++ object-oriented technology to simulate the physical processes of particle transport through matter.
When simulating medical imaging based on Geant4, a light source and a detector are set up in software and a three-dimensional CT phantom is placed between them; medical imaging can then be simulated, yielding N noisy images I_α and N sharp x-ray images I_gt in one-to-one correspondence with them. During the simulation, the dose (i.e., the number of simulated particles) can be set to change the sharpness of the CT image: the larger the dose, the sharper the resulting image. Thus, for the same three-dimensional CT phantom, a noisy x-ray image I_α can be obtained by reducing the dose, and a sharp x-ray image I_gt by increasing it; it will be appreciated that the noisy x-ray images I_α and the sharp x-ray images I_gt are in one-to-one correspondence. Optionally, in the simulated medical imaging, the dose for each three-dimensional CT phantom is 500 million.
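Geant4 itself is a C++ Monte Carlo toolkit, so the following Python sketch only illustrates the dose-to-noise relationship this step relies on: photon counting is Poisson, so raising the simulated dose lowers the relative noise. The phantom, the `simulate_projection` helper, and the dose values are toy assumptions, not the patent's actual simulation.

```python
import numpy as np

def simulate_projection(phantom_mu, dose, seed=0):
    """Toy stand-in for a Geant4 projection: Beer-Lambert attenuation
    plus Poisson photon-counting noise, where `dose` is the number of
    simulated photons per detector pixel (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    expected = dose * np.exp(-phantom_mu)          # mean photon count
    counts = np.maximum(rng.poisson(expected), 1)  # quantum noise, no zeros
    return -np.log(counts / dose)                  # back to attenuation units

# A simple 64x64 "phantom" of attenuation coefficients.
phantom = np.zeros((64, 64))
phantom[16:48, 16:48] = 1.0

low_dose = simulate_projection(phantom, dose=100)       # noisy image I_alpha
high_dose = simulate_projection(phantom, dose=100_000)  # sharp image I_gt

err_low = np.abs(low_dose - phantom).mean()
err_high = np.abs(high_dose - phantom).mean()
print(err_low > err_high)  # larger dose -> lower noise
```

The same pairing logic (one phantom, two doses) yields the one-to-one noisy/sharp training pairs described above.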
Optionally, the detector is a flat panel detector, and the three-dimensional CT phantom is a voxelized three-dimensional CT phantom.
Step 102: creating a U-Net neural network, wherein the loss function of the U-Net neural network is:

Loss(U(Input), I_gt) = EdgeLoss(U(Input), I_gt) + NormLoss(U(Input), I_gt)

EdgeLoss(U(Input), I_gt) = L1(Laplacian(U(Input)), Laplacian(I_gt))

[The NormLoss formula appears in the source only as an equation image and is not recoverable.]

wherein L1 is the L1 loss function, U is the U-Net neural network, and Laplacian is the Laplacian operator used to extract edge information.
The loss function mainly comprises two parts. The first part is the edge loss (EdgeLoss), which makes the network pay more attention to edge information, so that more edge information is retained, blurring at edges is reduced, and the final denoised image has a better visual effect. Considering that the dynamic range of pixel values in a real X-ray image is small, the second part, the normalized loss (NormLoss), is designed so that regions with small pixel values, i.e., low-brightness regions, are also denoised well; it improves the response of regions with low pixel values.
Step 103: training the U-Net neural network using the N noisy x-ray images I_α and the N sharp x-ray images I_gt.
Here, noisy images of different degrees and the corresponding high-definition images are generated with Monte Carlo simulations based on real physical processes. The resulting noise distribution is closer to that of real images, so the neural network can be trained better.
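The two-part loss of Step 102 can be sketched as follows. The EdgeLoss term follows the stated form (an L1 distance between Laplacian edge maps); the NormLoss formula survives only as an equation image, so the normalization used here, dividing prediction and target by a locally blurred target, is an assumption chosen to match the stated goal of boosting low-pixel-value regions.

```python
import numpy as np

def l1(a, b):
    return np.mean(np.abs(a - b))

def laplacian(img):
    # 4-neighbour Laplacian (edge padding) for extracting edge maps.
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * img)

def box_blur(img, k=5):
    # Cheap local average, used only for the assumed normalization.
    h, w = img.shape
    p = np.pad(img, k // 2, mode="edge")
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def edge_loss(pred, target):
    # EdgeLoss: L1 between Laplacian edge maps, as stated in the text.
    return l1(laplacian(pred), laplacian(target))

def norm_loss(pred, target, eps=1e-3):
    # ASSUMED NormLoss: normalize by a blurred target so dark regions
    # contribute as much as bright ones (exact formula not recoverable).
    scale = box_blur(target) + eps
    return l1(pred / scale, target / scale)

def total_loss(pred, target):
    return edge_loss(pred, target) + norm_loss(pred, target)

rng = np.random.default_rng(1)
target = rng.random((32, 32))
print(total_loss(target, target))  # 0.0 for a perfect prediction
```

Because the Laplacian of a constant offset is zero, a uniform brightness error is penalized only by the NormLoss term, while the EdgeLoss term reacts to structural (edge) errors.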
In this embodiment, "simulating medical imaging based on Geant4 to obtain N noisy x-ray images I_α" specifically comprises: simulating medical imaging based on Geant4 to obtain N noisy x-ray images I_α, wherein, when performing the simulated medical imaging, a CT phantom is generated using MDCT (Multi-Detector Computed Tomography) and preset CT data.
In this embodiment, during the simulated medical imaging, Num1 noise images to be processed are obtained from the same three-dimensional CT phantom, and Num2 of them are superimposed to obtain a sharp x-ray image I_gt; one noise image to be processed is selected as the noisy x-ray image I_α, thereby obtaining a noisy x-ray image I_α and a sharp x-ray image I_gt in one-to-one correspondence, wherein Num1 and Num2 are natural numbers and Num2 ≤ Num1.
Here, in practice, when simulating medical imaging, the same three-dimensional CT phantom can be imaged multiple times to obtain multiple noise images to be processed, and some of these are then selected and superimposed. It will be appreciated that the sharpness of the superimposed image is greater than that of any individual noise image, so the superimposed image can be used as the sharp x-ray image I_gt; moreover, this sharp x-ray image I_gt corresponds one-to-one with any of the noise images to be processed. Training a neural network requires a large amount of paired data, and enlarging the data set would normally require additional simulation runs, consuming enormous computation and time. With this scheme, the training data can be greatly increased, and the noise of each sample is different.
In this embodiment, the doses corresponding to the N sharp x-ray images I_gt are not all the same. In practice, the noise level of medical images is complex and varied, so training with pictures of different doses (i.e., different noise levels) is critical.
Optionally, Num1 noise images to be processed are obtained from the same three-dimensional CT phantom; Num_2 of them are randomly selected and superimposed, Num_3 of them are superimposed, ..., and Num_Q of them are superimposed, thereby obtaining Q−1 sharp x-ray images I_gt; Q−1 noisy x-ray images I_α are then selected, thereby obtaining Q−1 pairs of mutually corresponding sharp x-ray images I_gt and noisy x-ray images I_α, wherein Num_i ≠ Num_j for 2 ≤ i, j ≤ Q and i ≠ j.
For example, Num_2 = 5, Num_3 = 10, Num_4 = 15, Num_5 = 20.
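The superposition scheme above can be checked numerically: averaging Num_2 independent low-dose realizations shrinks the noise roughly by a factor of √Num_2, so one batch of Num1 simulations yields clean targets at several effective doses without re-simulating. The image sizes and counts below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)
truth = np.full((64, 64), 50.0)  # ideal noise-free projection (toy values)
num1 = 40                        # Num1 noisy realizations of one phantom

# Num1 independent Poisson-noisy images of the same phantom.
stack = rng.poisson(truth, size=(num1, *truth.shape)).astype(float)

def noise_level(img):
    return (img - truth).std()

single = stack[0]                     # one realization, kept as I_alpha
for num in (5, 10, 20):               # candidate Num_2, Num_3, Num_4
    sharp = stack[:num].mean(axis=0)  # superimposed image, used as I_gt
    print(num, noise_level(sharp) < noise_level(single))
```

Each choice of Num_i produces a target at a different effective dose, matching the requirement that the doses of the N sharp images are not all the same.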
In this embodiment, "training the U-Net neural network using the N noisy x-ray images I_α and the N sharp x-ray images I_gt" specifically comprises: for each noisy x-ray image I_α, performing the following processing:

Input = concat( I_α , F_G ∗ ( I_α − F_B(I_α) ) )

[Reconstructed from the prose description below; the original formulas appear in the source only as equation images.] Here F_G is a Gaussian filter with a convolution kernel size of 41, F_B is bilateral filtering, and ∗ is a convolution operation. After obtaining the input image Input for each I_α, the N input images Input and the N sharp x-ray images I_gt are used to train the U-Net neural network.
Here, since the inputs to the U-Net network include different noise levels, in order to enable the U-Net network to cope with real data of different noise magnitudes, the generation method in this embodiment automatically estimates the noise level of the input X-ray image and then trains the neural network.
Here, the noisy x-ray image I_α is first denoised using bilateral filtering and Gaussian filtering; the result is then subtracted from the original noisy x-ray image I_α to obtain the estimated noise; and a Gaussian filter kernel with a large window is then used to blur the noise in the difference image, so that the noise estimates of adjacent pixels tend to agree, reducing the deviation of the overall noise estimate.
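This noise-estimation step can be sketched end to end. The bilateral filter below is a minimal direct implementation, the filter widths are small illustrative values rather than the kernel-size-41 filters of the embodiment, and stacking the image with its blurred noise map as a two-channel input is an assumption about how the Input is assembled.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur via two 1-D convolutions.
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

def bilateral(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    # Direct (slow) bilateral filter: spatial weight x range weight.
    h, w = img.shape
    p = np.pad(img, radius, mode="edge")
    out = np.zeros((h, w))
    norm = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = p[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            weight = (np.exp(-(dy * dy + dx * dx) / (2 * sigma_s**2))
                      * np.exp(-(shifted - img)**2 / (2 * sigma_r**2)))
            out += weight * shifted
            norm += weight
    return out / norm

def build_input(i_alpha):
    # Denoise, subtract to estimate the noise, blur the noise map with a
    # wide Gaussian, then stack image + noise map as a 2-channel input.
    denoised = bilateral(gaussian_blur(i_alpha, 1.0))
    noise_map = gaussian_blur(i_alpha - denoised, 5.0)
    return np.stack([i_alpha, noise_map])

rng = np.random.default_rng(0)
img = np.clip(rng.normal(0.5, 0.05, (32, 32)), 0.0, 1.0)
inp = build_input(img)
print(inp.shape)  # (2, 32, 32)
```

Blurring the difference image with the wide Gaussian is what makes neighbouring pixels agree on the noise estimate, as described above.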
The embodiment of the invention provides a device for generating a neural network, comprising the following modules:
a first data acquisition module for simulating medical imaging based on Geant4 to obtain N noisy x-ray images I_α and N sharp x-ray images I_gt in one-to-one correspondence with them, wherein N is a natural number and N ≥ 2;
a first neural network creation module configured to create a U-Net neural network, wherein the loss function of the U-Net neural network is:

Loss(U(Input), I_gt) = EdgeLoss(U(Input), I_gt) + NormLoss(U(Input), I_gt)

EdgeLoss(U(Input), I_gt) = L1(Laplacian(U(Input)), Laplacian(I_gt))

[The NormLoss formula appears in the source only as an equation image and is not recoverable.]

wherein L1 is the L1 loss function, U is the U-Net neural network, and Laplacian is the Laplacian operator used to extract edge information; and
a first training module for training the U-Net neural network using the N noisy x-ray images I_α and the N sharp x-ray images I_gt.
The third embodiment of the invention provides a CT image denoising method, which comprises the following steps: executing the generation method for creating the neural network in the first embodiment to generate the U-Net neural network; and inputting the CT image to be processed into the U-Net neural network to obtain the denoised CT image.
The fourth embodiment of the invention provides a method for generating a neural network, comprising the following steps:
Step 201: simulating medical imaging based on Geant4 to obtain M noisy x-ray videos V_α and M sharp x-ray videos V_α^gt in one-to-one correspondence with them, wherein each noisy x-ray video V_α contains L consecutive noisy x-ray video frames, each sharp x-ray video V_α^gt likewise contains L consecutive video frames, M and L are natural numbers, M ≥ 2, and L ≥ 2.
Step 202: creating a U-Net neural network, wherein the loss function of the U-Net neural network is Loss(U(Input), V_α^gt), with

Input = concat( V_α , F_G ∗ ( V_α − F_B(V_α) ) )

[Reconstructed from the corresponding image-domain formulas; the original equations appear in the source only as images.] Here F_G is a Gaussian filter with a convolution kernel size of 41, F_B is bilateral filtering, ∗ is a convolution operation, and U is the U-Net neural network.
Step 203: training the U-Net neural network using the M Input videos and the M sharp x-ray videos V_α^gt.
The fifth embodiment of the present invention provides a device for generating a neural network, including the following modules:
a second data acquisition module for simulating medical imaging based on Geant4 to obtain M noisy x-ray videos V_α and M sharp x-ray videos V_α^gt in one-to-one correspondence with them, wherein each noisy x-ray video V_α contains L consecutive noisy x-ray video frames, each sharp x-ray video V_α^gt likewise contains L consecutive video frames, M and L are natural numbers, M ≥ 2, and L ≥ 2;
a second neural network creation module for creating a U-Net neural network, wherein the loss function of the U-Net neural network is Loss(U(Input), V_α^gt), with

Input = concat( V_α , F_G ∗ ( V_α − F_B(V_α) ) )

[Reconstructed from the corresponding image-domain formulas; the original equations appear in the source only as images.] Here F_G is a Gaussian filter with a convolution kernel size of 41, F_B is bilateral filtering, ∗ is a convolution operation, and U is the U-Net neural network; and
a second training module for training the U-Net neural network using the M Input videos and the M sharp x-ray videos V_α^gt.
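In the video embodiments the loss is presumably accumulated over the L consecutive frames of each clip; the reduction below (a mean of per-frame L1 losses) is an assumption, since the patent's video-loss formulas survive only as images.

```python
import numpy as np

def frame_l1(pred, target):
    return np.mean(np.abs(pred - target))

def video_loss(pred_clip, target_clip):
    # ASSUMED reduction: average the per-frame loss over the L frames.
    return float(np.mean([frame_l1(p, t)
                          for p, t in zip(pred_clip, target_clip)]))

L, H, W = 5, 16, 16                  # L consecutive frames of HxW pixels
rng = np.random.default_rng(3)
clip = rng.random((L, H, W))         # stands in for V_alpha^gt
print(video_loss(clip, clip))        # 0.0 for a perfect prediction
```

Treating the L frames as one clip lets the same U-Net machinery and noise-map input used for single images apply to video.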
The sixth embodiment of the invention provides a CT video denoising method, comprising the following steps: executing the generation method of the fourth embodiment to generate the U-Net neural network; and inputting the CT video to be processed into the U-Net neural network to obtain a denoised CT video.
It should be understood that although the description proceeds by embodiments, not every embodiment contains only a single technical solution; this manner of description is adopted for clarity only. Those skilled in the art should take the description as a whole, and the technical solutions of the embodiments may also be combined appropriately to form further embodiments understandable to those skilled in the art.
The detailed description above is only a specific description of possible embodiments of the invention and is not intended to limit its scope; equivalent embodiments or modifications that do not depart from the technical spirit of the invention shall all fall within its scope.

Claims (10)

1. A method for generating a neural network, comprising the steps of:
simulating medical imaging based on Geant4 to obtain N noisy x-ray images I_α and N sharp x-ray images I_gt in one-to-one correspondence with them, wherein N is a natural number and N ≥ 2;
creating a U-Net neural network, wherein the loss function of the U-Net neural network is:

Loss(U(Input), I_gt) = EdgeLoss(U(Input), I_gt) + NormLoss(U(Input), I_gt)

EdgeLoss(U(Input), I_gt) = L1(Laplacian(U(Input)), Laplacian(I_gt))

[The NormLoss formula appears in the source only as an equation image and is not recoverable.]

wherein L1 is the L1 loss function, U is the U-Net neural network, and Laplacian is the Laplacian operator used to extract edge information; and
training the U-Net neural network using the N noisy x-ray images I_α and the N sharp x-ray images I_gt.
2. The generation method according to claim 1, wherein "simulating medical imaging based on Geant4 to obtain N noisy x-ray images I_α" specifically comprises:
simulating medical imaging based on Geant4 to obtain N noisy x-ray images I_α, wherein a CT phantom is generated from MDCT and preset CT data when performing the simulated medical imaging.
3. The generation method according to claim 1, wherein,
during the simulated medical imaging, Num1 noise images to be processed are obtained from the same three-dimensional CT phantom, and Num2 of them are superimposed to obtain a sharp x-ray image I_gt; one noise image to be processed is selected as the noisy x-ray image I_α, thereby obtaining a noisy x-ray image I_α and a sharp x-ray image I_gt in one-to-one correspondence, wherein Num1 and Num2 are natural numbers and Num2 ≤ Num1.
4. The generation method according to claim 3, wherein
the doses corresponding to the N sharp x-ray images I_gt are not all the same.
5. The generation method according to claim 4, wherein "training the U-Net neural network using the N noisy x-ray images I_α and the N sharp x-ray images I_gt" specifically comprises:
for each noisy x-ray image I_α, performing the following processing:

Input = concat( I_α , F_G ∗ ( I_α − F_B(I_α) ) )

[Reconstructed from the description; the original formulas appear in the source only as equation images.] wherein F_G is a Gaussian filter with a convolution kernel size of 41, F_B is bilateral filtering, and ∗ is a convolution operation; and, after obtaining the input image Input for each I_α, training the U-Net neural network using the N input images Input and the N sharp x-ray images I_gt.
6. An apparatus for generating a neural network, comprising:
a first data acquisition module for simulating medical imaging based on Geant4 to obtain N noisy x-ray images I_α and N sharp x-ray images I_gt in one-to-one correspondence with them, wherein N is a natural number and N ≥ 2;
a first neural network creation module configured to create a U-Net neural network, wherein the loss function of the U-Net neural network is:

Loss(U(Input), I_gt) = EdgeLoss(U(Input), I_gt) + NormLoss(U(Input), I_gt)

EdgeLoss(U(Input), I_gt) = L1(Laplacian(U(Input)), Laplacian(I_gt))

[The NormLoss formula appears in the source only as an equation image and is not recoverable.]

wherein L1 is the L1 loss function, U is the U-Net neural network, and Laplacian is the Laplacian operator used to extract edge information; and
a first training module for training the U-Net neural network using the N noisy x-ray images I_α and the N sharp x-ray images I_gt.
7. A denoising method of a CT image is characterized by comprising the following steps:
executing the method of generating a neural network according to any one of claims 1 to 5, generating a U-Net neural network;
and inputting the CT image to be processed into the U-Net neural network to obtain the denoised CT image.
8. A method for generating a neural network, comprising the steps of:
simulating medical imaging based on Geant4 to obtain M noisy x-ray videos V_α and M sharp x-ray videos V_α^gt in one-to-one correspondence with them, wherein each noisy x-ray video V_α contains L consecutive noisy x-ray video frames, each sharp x-ray video V_α^gt likewise contains L consecutive video frames, M and L are natural numbers, M ≥ 2, and L ≥ 2;
creating a U-Net neural network, wherein the Loss function of the U-Net neural network is Loss (U (input), V)α gt),
Figure FDA0003399474310000024
Figure FDA0003399474310000025
Wherein the content of the first and second substances,
Figure FDA0003399474310000026
is a gaussian filter with a convolution kernel size of 41,
Figure FDA0003399474310000027
is a Gaussian filter with a convolution kernel size of 41, FBIs a two-sided filtering, the two-sided filtering,
Figure FDA0003399474310000028
is a convolution operation, U is a U-Net neural network;
using M inputs and M x-ray sharp videos Vα gtAnd training the U-net neural network.
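One way to form the L-frame inputs described above is a sliding window over each noisy video; this NumPy sketch shows the idea (the clip layout and unit stride are assumptions, since the claims fix only that each input contains L consecutive frames):

```python
import numpy as np

def sliding_windows(video, L):
    """Split a video of shape (T, H, W) into overlapping clips of L
    consecutive frames, each of shape (L, H, W) -- one network input
    per clip."""
    T = video.shape[0]
    if L > T:
        raise ValueError("clip length L exceeds video length")
    return [video[t:t + L] for t in range(T - L + 1)]
```

A video of T frames then yields T - L + 1 training clips, each paired with the corresponding frames of the clear video Vαgt.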
9. An apparatus for generating a neural network, comprising:
a second data acquisition module, configured to simulate medical imaging based on Geant4 to obtain M x-ray noise videos Vα and M x-ray clear videos Vαgt in one-to-one correspondence with the M x-ray noise videos Vα, where each x-ray noise video Vα comprises L consecutive x-ray noise video frames, each x-ray clear video Vαgt likewise comprises L consecutive video frames, M and L are natural numbers, M ≥ 2, and L ≥ 2;
a second neural network creation module, configured to create a U-Net neural network whose loss function is Loss(U(Input), Vαgt), where FG is a Gaussian filter with a convolution kernel size of 41, FB is a bilateral filter, * denotes the convolution operation, and U is the U-Net neural network;
a second training module, configured to train the U-Net neural network using the M inputs Input and the M x-ray clear videos Vαgt.
10. A method for denoising a CT video, comprising the following steps:
executing the method for generating a neural network according to claim 8 to generate a U-Net neural network; and
inputting a CT video to be processed into the U-Net neural network to obtain a denoised CT video.
CN202111491312.3A 2021-12-08 2021-12-08 Neural network generation method, denoising method and device Pending CN114219820A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111491312.3A CN114219820A (en) 2021-12-08 2021-12-08 Neural network generation method, denoising method and device


Publications (1)

Publication Number Publication Date
CN114219820A (en) 2022-03-22

Family

ID=80700235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111491312.3A Pending CN114219820A (en) 2021-12-08 2021-12-08 Neural network generation method, denoising method and device

Country Status (1)

Country Link
CN (1) CN114219820A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408522A (en) * 2016-06-27 2017-02-15 深圳市未来媒体技术研究院 Image de-noising method based on convolution pair neural network
CN106683145A (en) * 2014-12-22 2017-05-17 上海联影医疗科技有限公司 Method for acquiring CT images
CN109949215A (en) * 2019-03-29 2019-06-28 浙江明峰智能医疗科技有限公司 A kind of low-dose CT image simulation method
CN109978778A (en) * 2019-03-06 2019-07-05 浙江工业大学 Convolutional neural networks medicine CT image denoising method based on residual error study
CN110246105A (en) * 2019-06-15 2019-09-17 南京大学 A kind of video denoising method based on actual camera noise modeling
CN110858391A (en) * 2018-08-23 2020-03-03 通用电气公司 Patient-specific deep learning image denoising method and system
CN111047524A (en) * 2019-11-13 2020-04-21 浙江工业大学 Low-dose CT lung image denoising method based on deep convolutional neural network
CN111882503A (en) * 2020-08-04 2020-11-03 深圳高性能医疗器械国家研究院有限公司 Image noise reduction method and application thereof
CN112019704A (en) * 2020-10-15 2020-12-01 电子科技大学 Video denoising method based on prior information and convolutional neural network
CN113570586A (en) * 2021-08-02 2021-10-29 苏州工业园区智在天下科技有限公司 Method and device for creating and processing CT image of neural network system


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116188619A (en) * 2023-04-26 2023-05-30 北京唯迈医疗设备有限公司 Method, apparatus and medium for generating X-ray image pair for training
CN116188619B (en) * 2023-04-26 2023-09-01 北京唯迈医疗设备有限公司 Method, apparatus and medium for generating X-ray image pair for training


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination