CN110930318A - Low-dose CT image repairing and denoising method - Google Patents


Info

Publication number
CN110930318A
Authority
CN
China
Prior art keywords
dose
network
image
low
net
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911049867.5A
Other languages
Chinese (zh)
Other versions
CN110930318B (en)
Inventor
王国利
方媛
郭雪梅
戴宪华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Sun Yat Sen University
Original Assignee
National Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Sun Yat Sen University filed Critical National Sun Yat Sen University
Priority to CN201911049867.5A priority Critical patent/CN110930318B/en
Publication of CN110930318A publication Critical patent/CN110930318A/en
Application granted granted Critical
Publication of CN110930318B publication Critical patent/CN110930318B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T5/77
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A30/00 Adapting or protecting infrastructure or their operation
    • Y02A30/60 Planning or developing urban green infrastructure

Abstract

The invention discloses a low-dose CT image repair and denoising method comprising the following steps. First, the low-dose CT images required for training are generated by simulation: a high-dose CT image undergoes fan-beam projection transformation to obtain projection-domain data; the resulting projection matrix is exponentiated, Poisson noise is added, and the logarithm is taken; the simulated projection data are then converted back to the image domain with MATLAB's built-in back-projection function, yielding a simulated low-dose CT image. The method efficiently converts low-dose CT images into high-dose CT images and effectively recovers image detail. It reduces network complexity, speeds up network training, and improves reconstruction efficiency, and it can markedly reduce the harm CT imaging poses to patients without compromising physician diagnosis.

Description

Low-dose CT image repairing and denoising method
Technical Field
The invention relates to the technical field of image processing, in particular to a low-dose CT image repairing and denoising method.
Background
In recent years, with the continuous development of medical imaging technology and its growing practical advantages in the medical field, new imaging techniques such as CT can, when applied appropriately, provide an important basis for clinical diagnosis, and they have become indispensable auxiliary diagnostic tools. A CT image is the product of CT imaging used to assist physician diagnosis. The basic principle of CT imaging is as follows: an X-ray beam scans a slice of the body of a given thickness; a detector receives the X-rays transmitted through the slice and converts them to visible light; a photoelectric converter turns the light into an electrical signal; an analog-to-digital converter turns that signal into digital form; and the digital signal is processed by a computer to form the CT image.
However, the radiation generated during a CT scan is potentially harmful to the human body and can even cause cancer. The higher the radiation dose, the better the image quality and the more helpful the scan is for diagnosis, but also the greater the potential harm to the patient. There is therefore a need to reduce the radiation dose while keeping image quality sufficient for clinical diagnosis. In 1990, Naidich et al. proposed the concept of low-dose CT: reducing the radiation dose by lowering the tube current while leaving other scan parameters unchanged. When the tube current is reduced, the number of photons received by the detector also falls, producing a 'photon starvation' effect; the projection data are contaminated by noise, and the CT image reconstructed from them exhibits both obvious noise and streak artifacts, which adversely affect clinical diagnosis. Many algorithms have been proposed to improve low-dose CT image quality; they can be divided into projection-domain denoising algorithms, image-reconstruction algorithms, image-domain denoising algorithms, and so on. These remain insufficient: projection-domain algorithms generally require raw projection data, which are difficult to obtain in practice, and they are computationally complex and time-consuming. Moreover, most existing methods can denoise and remove artifacts from low-dose CT images but struggle to recover fine detail.
Disclosure of Invention
The invention provides a low-dose CT image restoration and denoising method which can improve the image quality and help doctors to make more accurate medical judgment.
In order to achieve the technical effects, the technical scheme of the invention is as follows:
A low-dose CT image restoration and denoising method comprises the following steps:
S1: simulating the generation of the low-dose CT images required for training;
S2: extracting and segmenting the region of interest of the low-dose CT image obtained in S1;
S3: inputting the image obtained in S2 into the trained model to obtain a restored CT image.
Further, the specific process of step S1 is:
S11: performing fan-beam projection transformation on the high-dose CT image to obtain projection-domain data;
S12: performing an exponential operation on the resulting projection matrix, adding Poisson noise, and then taking the logarithm;
S13: converting the simulated projection data back to the image domain with MATLAB's built-in back-projection function to obtain the simulated low-dose CT image.
Further, the specific process of step S2 is:
reading the simulated low-dose CT image, removing the peripheral black background area by rule, extracting the contour of the tissue region, and dividing the target region into several image blocks.
Further, the training process of the model trained in step S3 is:
the U-net generative adversarial network is trained to reconstruct high-dose CT images; the trained adversarial network model is saved; the images extracted and segmented in S2 are input into the model; and the corresponding repaired high-dose CT images are output.
Further, training the U-net generative adversarial network model comprises: training the U-net generator network; training the discriminator network; and back-propagating the loss function.
Further, the step of training the U-net generation network includes:
inputting the extracted and segmented low-dose CT image blocks into the U-net generator of the generative adversarial network, which then produces high-dose CT image blocks; the low-dose and high-dose CT image blocks must have the same size.
Further, the step of training the discriminator network comprises:
inputting the low-dose CT image block together with the generated high-dose CT image block into the discriminator of the U-net generative adversarial network; likewise inputting the low-dose CT image block together with the high-dose CT label image block. The discriminator judges the probability that the high-dose CT label image block is a real sample, and the probability that the high-dose CT image block output by the generator is a real sample; feeding the low-dose CT image block to the discriminator improves its fault tolerance.
Further, the step of back-propagating the loss function comprises:
calculating the loss function of the high-dose CT image block generated by the U-net generator relative to the high-dose CT label image block; minimizing that loss function; and optimizing the generator's network parameters accordingly, producing an updated set of generator parameters.
This optimization of the generator's parameters is repeated many times; after each update, a new high-dose CT image block is generated using the newly optimized parameters. By back-propagating the loss of the generative adversarial network and continuously optimizing, the error of the reconstructed high-dose CT image block relative to the high-dose CT label image block is minimized. The U-net encoder-decoder structure in the network gives the reconstructed high-dose CT image a finer texture structure.
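The repeated parameter-optimization loop described here can be sketched generically. This is an illustrative Python sketch, not the patent's implementation: `grad_fn` stands in for the gradient of the GAN loss with respect to the generator parameters, which in practice is obtained by back-propagation through the network.

```python
def optimize_generator(theta, grad_fn, lr=0.1, steps=200):
    """Repeatedly update parameters theta by descending the loss gradient,
    mirroring the loss back-propagation step: each new image block is
    generated with the newly optimized parameters."""
    for _ in range(steps):
        theta = theta - lr * grad_fn(theta)
    return theta
```

For instance, with a toy quadratic loss whose gradient is `2 * (theta - target)`, repeated updates drive `theta` toward `target`, just as repeated updates drive the generator's output toward the label image blocks.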
Further, the training targets of the U-net generative adversarial network are as follows:
minimize the error of the high-dose CT image blocks produced by the generator relative to the high-dose CT label image blocks, and maximize the accuracy with which the discriminator judges whether a high-dose CT image block is a real high-dose CT label image block. The min-max optimization objective of the generator and the discriminator is:
min_G max_D V(D,G) = E[log D(x)] + E[log(1 − D(G(z)))]
where E denotes the expected value, G(z) is the sample output for the low-dose CT image block z (i.e., a high-dose CT image block), D(x) is the probability that the high-dose CT label image block x input to the discriminator comes from a real high-resolution label image block, and V(D,G) measures the generator's error rate and the discriminator's accuracy;
the min-max objective comprises a maximization of the discriminator's accuracy and a minimization of the generator's error rate;
the maximization of the discriminator's accuracy is:
max_D V(D,G) = E[log D(x)]
the minimization of the generator's error rate is:
min_G V(D,G) = E[log(1 − D(G(z)))]
the loss function consists of a content loss, an adversarial loss, and a loss regularization term, and is calculated by the following formulas:
f_loss = l_Content + k · l_Gen + l_TV
l_Content = l_MSE + l_VGG/i,j
l_MSE = (1 / (W·H)) · Σ_{x=1..W} Σ_{y=1..H} ( I^H_{x,y} − G_θG(I^L)_{x,y} )²
l_VGG/i,j = (1 / (W_{i,j}·H_{i,j})) · Σ_{x=1..W_{i,j}} Σ_{y=1..H_{i,j}} ( φ_{i,j}(I^H)_{x,y} − φ_{i,j}(G_θG(I^L))_{x,y} )²
l_Gen = Σ_{n=1..N} −log D_θD( G_θG(I^L) )
l_TV = (1 / (W·H)) · Σ_{x=1..W} Σ_{y=1..H} ‖ ∇ G_θG(I^L)_{x,y} ‖
where f_loss is the total loss function, l_Content the content loss term, l_Gen the adversarial perceptual loss term, k the weight of the adversarial perceptual loss, l_MSE the pixel-space minimum mean square error, l_VGG/i,j the feature-space minimum mean square error, I^L the low-dose CT image block, I^H the high-dose CT label image block, G_θG(I^L) the high-dose CT image block produced by the generator, W and H the width and height of the image block, D_θD(I) the probability, judged by the discriminator of the generative adversarial network, that image I is a real high-dose CT label image block, φ_{i,j} the feature map of the VGG19 network (the j-th convolution, after activation, preceding the i-th max-pooling layer), and N the total number of training samples.
Further, during training of the U-net generative adversarial network, the steps following input of the extracted and segmented CT image blocks into the adversarial network model are performed sequentially and cyclically.
The initial parameters of the generative adversarial network may be set as follows:
image block size 50 × 50;
optimizer: Adam;
initial learning rate 0.0001, set to 0.0005 after 50 iterations and 0.00001 after 100 iterations; 200 loop iterations; batch size 16; step length less than 1; 4 channels.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
1. The method improves the quality of low-dose CT images through post-processing, bringing it close to or even beyond that of normal-dose CT images. Image quality improves while the physical harm of CT scanning to the patient is reduced, helping physicians make more accurate medical judgments;
2. The method is realized with a generative adversarial network, which minimizes reconstruction error; in particular, the U-net encoder-decoder structure applied in the network effectively reconstructs fine textures in the image, and extracting and segmenting the region of interest before training markedly improves the training efficiency of the whole network;
3. The training data are simulated from existing high-dose CT images according to the imaging principles of CT, so large amounts of real low-dose CT data need not be acquired and patients are spared additional scans; diverse training data can still be guaranteed, so the algorithm transfers well to other medical images. Compared with earlier methods, this one is simpler and more convenient to use, stable, and more robust.
Drawings
FIG. 1 is a flowchart of the steps of the low-dose CT image repair and denoising method of the present invention;
FIG. 2 is a flowchart of the specific steps of the low-dose CT image repair and denoising method of the present invention;
FIG. 3 is a flowchart of the steps of training the generation of a countermeasure network;
FIG. 4 is a schematic diagram illustrating conversion of a high-dose CT image to a simulated low-dose CT image, in accordance with one embodiment;
FIG. 5 is a schematic diagram illustrating region of interest extraction and segmentation, according to an embodiment;
FIG. 6 is a schematic diagram illustrating a comparison of a low-dose CT image, a high-dose CT image output by a U-net generation network, and a high-dose CT label image according to an embodiment.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
Fig. 1 is a flowchart illustrating steps of a method for restoring and denoising a low-dose CT image according to the present invention. The method for realizing restoration and denoising of the low-dose CT image at least comprises the following steps:
step S1: the simulation generates low dose CT images required for training.
Step S2: and extracting and segmenting the interest region.
Step S3: the U-net is trained to generate a confrontation network reconstructed high-dose CT image.
Step S4: the trained model is applied to the true low-dose CT image.
Referring to FIG. 2 and FIG. 3 and the detailed description below, the detailed flow of the low-dose CT image repair and denoising method of the present invention is as follows:
step S1
The specific operation is as follows: the high-dose CT image is subjected to fan-beam projection transformation to obtain projection-domain data; the resulting projection matrix is exponentiated, Poisson noise is added, and the logarithm is taken; the simulated projection data are then converted back to the image domain with MATLAB's built-in back-projection function to obtain the simulated low-dose CT image. FIG. 4 is a conversion schematic for one embodiment.
This step is in fact an image preprocessing operation performed before algorithm training. For example, most readily available CT images are two-dimensional medical images in DICOM (dcm) format; to facilitate processing, they are first converted to a common image format such as jpg or bmp. To ensure sample diversity, higher-dose CT images of multiple organs and body parts should be selected where possible, preferably about 1500 images.
The high-dose CT images are processed in batches. The projection data obtained by projecting them simulate the data acquired by a CT scanner. When Poisson noise is added, the larger the noise parameter (between 0 and 1), the stronger the noise and the lower the simulated dose. Mixing several parameter values is preferable here, to obtain a more realistic training set of low-dose CT images.
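The noise-injection step (S12) can be sketched as follows. The patent performs the fan-beam projection and back-projection with MATLAB's built-in functions; this NumPy sketch covers only the exponentiate / add Poisson noise / take logarithm step on an assumed projection matrix, and parameterizes the dose by a hypothetical incident photon count `I0` instead of the 0–1 noise parameter mentioned above (lower `I0` means stronger noise, i.e., lower simulated dose).

```python
import numpy as np

def add_low_dose_noise(proj, I0=1e4, rng=None):
    """Simulate low-dose projections from a line-integral projection matrix.

    proj : projection data in the log (line-integral) domain
    I0   : assumed incident photon count; lower I0 -> noisier projections
    """
    rng = np.random.default_rng(rng)
    counts = I0 * np.exp(-proj)      # expected detected photon counts
    noisy = rng.poisson(counts)      # Poisson (photon-starvation) noise
    noisy = np.maximum(noisy, 1)     # guard against log(0)
    return -np.log(noisy / I0)       # convert back to line integrals
```

With a very high `I0` the output stays close to the input projections; decreasing `I0` reproduces the heavier noise of a lower simulated dose.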
Step S2
The specific operation is as follows: read the simulated low-dose CT image, remove the peripheral black background area by rule, extract the contour of the tissue region, and divide the target region into several image blocks. FIG. 5 is an operation schematic for one embodiment.
Specifically, the low-dose CT image is 512 × 512, and roughly 20% of its periphery is black background. In one embodiment, the image is first cropped to a central 440 × 440 block, i.e., pixel range 37–476 in both width and height. With the final patch size set to 55 × 55, this large block is further cut into (440/55) × (440/55) = 8 × 8 = 64 small image blocks. The invention is not limited to these values; the size of the background portion and the final patch size should be set according to the actual image.
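The cropping-and-tiling portion of this step might be sketched as below. The contour extraction of the tissue region is omitted, and `margin`/`patch` are illustrative names chosen to match the 440 × 440 crop and 55 × 55 patch size of this embodiment.

```python
import numpy as np

def extract_patches(img, margin=36, patch=55):
    """Crop away the peripheral black background, then tile into patches.

    For a 512x512 slice with a black border, margin=36 keeps the central
    440x440 region, which tiles exactly into 8 x 8 = 64 patches of 55x55.
    """
    core = img[margin:-margin, margin:-margin]
    h, w = core.shape
    return [core[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            for i in range(h // patch) for j in range(w // patch)]
```

Applied to a 512 × 512 slice this yields the 64 blocks per image used to build the 1500 × 64 training set described later.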
Step S3
The specific operation is as follows: train the U-net generative adversarial network, save the trained adversarial network model, and check the difference between the reconstructed image and the actual high-dose CT image.
After steps S1 and S2, a training set of low-dose CT image blocks is obtained (1500 × 64 = 96,000 blocks), and training of the U-net generative adversarial network model then begins.
Specifically, the idea of the generative adversarial network comes from the two-player zero-sum game of game theory: the generator network G and the discriminator network D correspond to the two players, each with its own role. The generator can be likened to a sample generator: it takes noise (or a sample) as input and aims to output a realistic sample resembling the real data. The discriminator can be likened to a binary (0-1) classifier that judges whether an input sample is real or fake: for a sample drawn from the real training data it outputs a high probability (greater than 0.5), otherwise a low probability (less than 0.5). The end state of the game is a Nash equilibrium. Concretely, the training target of the U-net generative adversarial network in this invention is to train the generator G, continuously optimizing its parameters until its output images are convincing enough that the discriminator D cannot tell whether a sample is real, i.e., D outputs 0.5 no matter what sample it receives.
In addition, the U-net encoder-decoder structure mimics the working memory of the human brain. The way a person understands and abstracts received external information can be pictured as an encoding process that memorizes and processes the information, abstracting it into a low-rank vector, with prior knowledge added along the way to assist the abstraction; each encoding unit corresponds to a link in human memory. The decoding process corresponds to recall: it uses prior knowledge to decode the stored low-rank information and recover the required form. In short, training the encoder-decoder model parameters corresponds to the human brain acquiring the ability to process information comprehensively.
The specific process of generating the confrontation network training is shown in fig. 3.
First, the initial parameters of the generative adversarial network must be set. In one embodiment they may be: 200 loop iterations; initial learning rate 0.0001, set to 0.0005 after 50 iterations and 0.00001 after 100 iterations; batch size 16; step length 0.1; 4 channels. The invention is not limited to these values: those skilled in the art can set the initial parameters according to practical requirements and the convergence behavior of the target loss function.
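As a minimal sketch of the piecewise learning-rate schedule stated in this embodiment (the helper name is ours, and the schedule is reproduced exactly as described, including the rise at iteration 50):

```python
def learning_rate(iteration):
    """Learning-rate schedule from the embodiment: 0.0001 initially,
    0.0005 after 50 iterations, 0.00001 after 100 iterations."""
    if iteration >= 100:
        return 0.00001
    if iteration >= 50:
        return 0.0005
    return 0.0001
```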
The training-set samples of the generative adversarial network are image pairs, each consisting of a high-dose CT label image block obtained from a high-dose CT image via step S2 and the corresponding low-dose CT image block obtained via steps S1 and S2.
In one embodiment, the low-dose CT image blocks may be input sequentially into the U-net generator G of the generative adversarial network to produce high-dose CT image blocks; the set may, for example, be divided into 300 batches. The invention is not limited to this: the number of batches and the image block size should be set by those skilled in the art according to actual needs.
As soon as the high-dose CT patch output by the U-net generator is obtained, it can be stitched together with the corresponding low-dose CT patch and fed into the discriminator D. The corresponding high-dose CT label patch is likewise stitched with the low-dose CT patch and input into D. The discriminator compares the two, computes the probability that the generated high-dose CT patch is a high-dose CT label patch, and outputs it.
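The stitching of patch pairs for the discriminator can be sketched as a channel-wise stack. The function name and the channel-first layout are our assumptions, not the patent's code:

```python
import numpy as np

def discriminator_input(low_dose, candidate):
    """Stack a low-dose patch with a candidate high-dose patch (generated
    or label) channel-wise, forming the conditional input to D."""
    return np.stack([low_dose, candidate], axis=0)  # shape (2, H, W)
```

The same helper serves both pairings: (low-dose, generated) and (low-dose, label).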
Next, the generative adversarial network is trained by back-propagation that minimizes the loss function. The training targets of generator G and discriminator D may be: minimize the error rate of the images produced by G, so that the high-dose CT blocks it outputs come ever closer to the high-dose CT label blocks; and maximize the judgment accuracy of D, so that it distinguishes real high-dose CT label blocks ever more reliably. Overall, the min-max optimization objective corresponding to this training target is:
min_G max_D V(D,G) = E[log D(x)] + E[log(1 − D(G(z)))]
where E is the expected value, G(z) is the sample (high-dose CT image block) output for image z (a low-dose CT image block), D(x) is the probability that sample x (a high-dose CT label image block input to the discriminator) comes from a real high-resolution label image block, and V(D,G) measures the generator's error rate and the discriminator's accuracy.
More specifically, this objective comprises a maximization of the discriminator's accuracy and a minimization of the generator's error rate. The maximization of the discriminator's accuracy is:
max_D V(D,G) = E[log D(x)]
The minimization of the generator's error rate is:
min_G V(D,G) = E[log(1 − D(G(z)))]
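The value V(D,G) in the objective above can be estimated numerically from discriminator outputs, with the expectations replaced by sample means; this sketch only illustrates the formula. At the Nash equilibrium described earlier, where D outputs 0.5 for every sample, V(D,G) = 2·log(0.5) ≈ −1.386.

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Monte-Carlo estimate of V(D,G) = E[log D(x)] + E[log(1 - D(G(z)))].

    d_real: discriminator outputs on real high-dose label patches
    d_fake: discriminator outputs on generated high-dose patches
    """
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
```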
specifically, the training sequence in one iteration may be to train the generated network to be minimized first and then train the identified network to be maximized, or may be to train the generated network to be minimized first and then train the identified network to be maximized in the opposite sequence, which is not limited in the present invention.
Specifically, the training target is embodied in the algorithm as minimizing the overall loss function of the generative adversarial network; the loss is formulated for the high-dose CT image block output by the network relative to the high-dose CT label image block. The loss function in the proposed algorithm consists of three terms, content loss, adversarial loss, and loss regularization, and is calculated as:
f_loss = l_Content + k · l_Gen + l_TV
where f_loss is the total loss function, l_Content the content loss term, l_Gen the adversarial loss term, k the weight parameter of the adversarial perceptual loss, and l_TV the loss regularization term. More specifically, the weight parameter k adjusts the relative influence of the content loss and the adversarial loss on the final result: a larger k increases the weight of the adversarial loss and makes fuller use of the discriminator in the U-net generative adversarial network, while a smaller k emphasizes learning the pixel-level differences of the image blocks themselves. In one embodiment k may be 10⁻³, but the invention is not limited to this; those skilled in the art should set and adjust it according to the actual network error, the pixel differences of the image blocks, and the actual training effect of the model.
In addition, the content loss term l_Content comprises the pixel-space minimum mean square error l_MSE and the feature-space minimum mean square error l_VGG/i,j. Here I^L is the low-dose CT image block, I^H the high-dose CT label image block, G_θG(I^L) the high-dose CT image block produced by the generator, W and H the width and height of the image block, D_θD(I) the probability, judged by the discriminator of the generative adversarial network, that image I is a real high-dose CT label image block, and φ_{i,j} the feature map of the VGG19 network (the j-th convolution, after activation, preceding the i-th max-pooling layer). The specific formulas are:
l_Content = l_MSE + l_VGG/i,j
l_MSE = (1 / (W·H)) · Σ_{x=1..W} Σ_{y=1..H} ( I^H_{x,y} − G_θG(I^L)_{x,y} )²
l_VGG/i,j = (1 / (W_{i,j}·H_{i,j})) · Σ_{x=1..W_{i,j}} Σ_{y=1..H_{i,j}} ( φ_{i,j}(I^H)_{x,y} − φ_{i,j}(G_θG(I^L))_{x,y} )²
The adversarial loss term mainly feeds back information from the discriminator; in this invention it is calculated as:
l_Gen = Σ_{n=1..N} −log D_θD( G_θG(I^L) )
where N is the total number of training samples. The loss regularization term involves only the high-dose CT image block produced by the generator; its formula is:
l_TV = (1 / (W·H)) · Σ_{x=1..W} Σ_{y=1..H} ‖ ∇ G_θG(I^L)_{x,y} ‖
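The loss terms above can be sketched in NumPy as follows. This is an illustrative reading of the formulas, not the patent's code: the VGG feature-space term is omitted (it would require a pretrained VGG19), the adversarial term is averaged over the batch rather than summed, and the TV term uses forward differences.

```python
import numpy as np

def mse_loss(hd_label, hd_gen):
    """Pixel-space content loss l_MSE."""
    return np.mean((hd_label - hd_gen) ** 2)

def adversarial_loss(d_on_gen):
    """Adversarial term: -log D(G(I_L)), averaged over the batch."""
    return np.mean(-np.log(d_on_gen))

def tv_loss(img):
    """Total-variation regularizer l_TV on the generated patch."""
    dy = np.abs(np.diff(img, axis=0)).sum()
    dx = np.abs(np.diff(img, axis=1)).sum()
    return (dx + dy) / img.size

def total_loss(hd_label, hd_gen, d_on_gen, k=1e-3):
    """f_loss = l_Content + k * l_Gen + l_TV (VGG term omitted here)."""
    return (mse_loss(hd_label, hd_gen)
            + k * adversarial_loss(d_on_gen)
            + tv_loss(hd_gen))
```

For a perfect reconstruction (generated patch identical to the label, zero gradients), only the weighted adversarial term remains, showing how k scales the discriminator's contribution.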
in summary, after each pass of generating the network minimization and identifying the network maximization and calculating the loss function under the current iteration, the parameters of the optimized generation function can be updated by minimizing the loss function and performing back propagation, and finally the training is terminated after the specified number of iterations is reached. Wherein the adam optimizer is used in the present invention to minimize the loss function, which is not a limitation of the present invention.
After all iterations are completed, the trained generative adversarial network model and its parameters may be saved.
Step S4
The specific operation is as follows: the target low-dose CT image is input into the model, and the corresponding high-dose CT image is output.

This step is the final, practical step: the U-net generative adversarial network model and its parameters saved in the previous step are loaded into the U-net generation network, the low-dose CT image to be restored and denoised (after conversion into a common image format) is input into the generation network, and the output high-dose CT image is the target high-dose CT image. FIG. 6 is a schematic illustration of the process in one embodiment. It should be noted that when the algorithm is applied in practice, the image need not be divided into image blocks.
The same or similar reference numerals correspond to the same or similar parts;
the positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A low-dose CT image restoration and denoising method is characterized by comprising the following steps:
S1: simulating and generating the low-dose CT images required for training;

S2: extracting and segmenting the region of interest of the low-dose CT image obtained in S1;

S3: inputting the image obtained in S2 into the trained model to obtain a restored CT image.
2. The method for restoring and denoising a low-dose CT image according to claim 1, wherein the specific process of step S1 is:
S11: performing fan-beam projection transformation on a high-dose CT image to obtain projection data in the projection domain;

S12: performing an exponential operation on the obtained projection matrix, adding Poisson noise, and then taking the logarithm;

S13: converting the simulated projection data back to the image domain through MATLAB's built-in back-projection function to obtain a simulated low-dose CT image.
3. The method for restoring and denoising a low-dose CT image according to claim 2, wherein the specific process of step S2 is:
reading the simulated low-dose CT image, removing the surrounding black background region according to a rule, extracting the contour of the tissue region, and dividing the target region into a number of image blocks.
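The background-removal and block-segmentation step above can be sketched as follows. This is illustrative only: the intensity threshold and block size are assumed values, and the invention's contour-extraction step is simplified here to a bounding-box crop:

```python
import numpy as np

def crop_background(img, threshold=0.05):
    # Remove the surrounding black background: keep the bounding box of
    # pixels above an intensity threshold (threshold value is illustrative)
    mask = img > threshold
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return img[r0:r1 + 1, c0:c1 + 1]

def split_into_blocks(img, size=50):
    # Divide the target region into non-overlapping size x size image blocks,
    # discarding partial blocks at the right/bottom edges
    h, w = img.shape
    return [img[y:y + size, x:x + size]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]
```

A block size of 50 x 50 matches the initial parameters given in claim 10.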
4. The method for restoring and denoising low-dose CT images as claimed in claim 3, wherein the training process of the model trained in step S3 is:
training a U-net generative adversarial network to reconstruct high-dose CT images, saving the trained generative adversarial network model, inputting the image extracted and segmented in S2 into the model, and outputting the corresponding restored high-dose CT image.
5. The method for restoring and denoising a low-dose CT image according to claim 4, wherein training the U-net generative adversarial network model comprises: training the U-net generation network; training the discrimination network; and back-propagating the loss function.
6. The method of claim 5, wherein the step of training the U-net generation network comprises:
inputting the extracted and segmented low-dose CT image blocks into the U-net generation network of the generative adversarial network, which then generates high-dose CT image blocks, wherein the size specifications of the low-dose CT image blocks and the generated high-dose CT image blocks must remain consistent.
7. The method of claim 6, wherein the step of training the discriminating network comprises:
simultaneously inputting the low-dose CT image block together with the generated high-dose CT image block into the discrimination network of the U-net generative adversarial network, and likewise inputting the low-dose CT image block together with the high-dose CT label image block; the discrimination network judges the probability that the high-dose CT label image block is a real sample, and the probability that the high-dose CT image block output by the U-net generation network is a real sample, wherein inputting the low-dose CT image block to the discrimination network improves its fault tolerance.
8. The method of claim 7, wherein the step of back-propagating the loss function comprises:
calculating the loss function of the high-dose CT image block generated by the U-net generation network relative to the high-dose CT label image block; minimizing the loss function; and optimizing the parameters of the U-net generation network within the generative adversarial network according to the loss function to obtain optimized generation-network parameters;

the optimization of the generation-network parameters is carried out multiple times: after each optimization, a new high-dose CT image block is generated using the newly optimized parameters of the U-net generation network; through back-propagation of the loss function of the U-net generative adversarial network and continuous optimization, the error of the reconstructed high-dose CT image block relative to the high-dose CT label image block is minimized; the U-net encoding-decoding network structure in the algorithm enables the reconstructed high-dose CT image to have a finer texture structure.
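The loop described above — generate, compute the loss, back-propagate, and regenerate with the newly optimized parameters — can be illustrated with a toy gradient-descent example. This is a hypothetical one-parameter "generator" standing in for the U-net, trained on the pixel-space MSE only, with no adversarial term:

```python
import numpy as np

# Toy stand-in for the U-net generator: G(x) = theta * x, with theta learned
low = np.linspace(0.0, 1.0, 25).reshape(5, 5)    # "low-dose" block
label = 2.0 * low                                 # "high-dose label" block

theta, lr = 0.5, 0.1
losses = []
for _ in range(100):
    generated = theta * low                       # regenerate with the newest parameter
    err = generated - label
    losses.append(np.mean(err ** 2))              # pixel-space MSE loss
    grad = np.mean(2.0 * err * low)               # d(MSE)/d(theta)
    theta -= lr * grad                            # back-propagation step
```

Each pass regenerates the block with the newest parameter before computing the loss, mirroring the claim's requirement that new high-dose blocks are always produced by the most recently optimized generation network.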
9. The method for restoring and denoising low-dose CT images according to claim 8, wherein the training targets of the U-net generative adversarial network are:

minimizing the error rate between the high-dose CT image blocks generated by the U-net generation network and the high-dose CT label image blocks; and maximizing the accuracy with which the discrimination network judges the probability that a high-dose CT image block is a real high-dose CT label image block; the min-max optimization formula of the generation network and the discrimination network of the generative adversarial network is:
min_G max_D V(D, G) = E[log(D(x))] + E[log(1 − D(G(z)))]
wherein E is the expected value; G(z) is the sample output for the input low-dose CT image block z, i.e. a generated high-dose CT image block; D(x) is the probability, assigned by the discrimination network to its input high-dose CT label image block x, that x comes from the real high-dose label image blocks; and V(D, G) expresses the error rate of the generation network and the accuracy of the discrimination network;
the min-max optimization formula of the generation network and the discrimination network comprises a maximization formula for the accuracy of the discrimination network and a minimization formula for the error rate of the generation network;

the maximization formula for the accuracy of the discrimination network is:

max_D V(D, G) = E[log(D(x))]

the minimization formula for the error rate of the generation network is:

min_G V(D, G) = E[log(1 − D(G(z)))]
the loss function is composed of content loss, adversarial loss, and a loss regularization term, and is calculated by the following formulas:
floss=lContent+k*lGen+lTV
lContent=lMSE+lVGG/i,j
lMSE = (1/(W·H)) · Σ_{x=1..W} Σ_{y=1..H} (I^H_{x,y} − G_θG(I^L)_{x,y})²

lVGG/i,j = (1/(W_{i,j}·H_{i,j})) · Σ_{x=1..W_{i,j}} Σ_{y=1..H_{i,j}} (φ_{i,j}(I^H)_{x,y} − φ_{i,j}(G_θG(I^L))_{x,y})²

lGen = Σ_{n=1..N} −log D_θD(G_θG(I^L))

lTV = (1/(W·H)) · Σ_{x=1..W} Σ_{y=1..H} ||∇G_θG(I^L)_{x,y}||
wherein f_loss is the total loss function, l_Content is the content loss term, l_Gen is the adversarial loss term, k is the weighting parameter for the adversarial loss, l_MSE is the minimum mean square error in pixel space, l_VGG/i,j is the minimum mean square error based on feature space, I^L is a low-dose CT image block, I^H is a high-dose CT label image block, G_θG(I^L) is the high-dose CT image block produced by the generation network, W is the width of the image block, H is the height of the image block, and D_θD(I) is the probability, judged by the discrimination network of the generative adversarial network, that image I belongs to the real high-dose CT label image blocks,
φ_{i,j} denotes the feature map in the VGG19 network given by the j-th convolution (after activation) before the i-th max-pooling layer, and N is the total number of training samples.
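The min-max value function of claim 9 can be evaluated numerically with a toy sketch, where `d_real` and `d_fake` are hypothetical stand-ins for the discrimination network's outputs on label blocks and generated blocks respectively:

```python
import numpy as np

def value_v(d_real, d_fake):
    # V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]
    # d_real: discriminator outputs on real high-dose label blocks
    # d_fake: discriminator outputs on generated high-dose blocks
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
```

The discrimination network maximizes V while the generation network minimizes it: a confident, correct discriminator (d_real near 1, d_fake near 0) drives V toward its maximum of 0, while a generator that fools the discriminator (d_fake near 1) drives V toward negative infinity.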
10. The method for restoring and denoising a low-dose CT image according to claim 9, wherein in the training process of the U-net generative adversarial network, the steps following the input of the extracted and segmented CT image blocks into the network model are performed sequentially in a loop:
the initial parameters of the generative adversarial network may be set as:
image block size 50 x 50;
the optimizer is selected as an adam optimizer;
the initial learning rate was 0.0001, set to 0.0005 after 50 iterations, and set to 0.00001 after 100 iterations; the number of loop iterations is 200; a batch process value of 16; the step length is less than 1; the number of channels is 4.
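The iteration-dependent learning rate of claim 10 can be written as a small helper function (a sketch only; the mid-training increase from 0.0001 to 0.0005 is reproduced verbatim from the claim):

```python
def learning_rate(iteration):
    # Piecewise-constant schedule from claim 10: 0.0001 initially,
    # 0.0005 after 50 iterations, 0.00001 after 100 iterations,
    # over 200 loop iterations in total
    if iteration < 50:
        return 0.0001
    if iteration < 100:
        return 0.0005
    return 0.00001
```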
CN201911049867.5A 2019-10-31 2019-10-31 Low-dose CT image repairing and denoising method Active CN110930318B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911049867.5A CN110930318B (en) 2019-10-31 2019-10-31 Low-dose CT image repairing and denoising method


Publications (2)

Publication Number Publication Date
CN110930318A true CN110930318A (en) 2020-03-27
CN110930318B CN110930318B (en) 2023-04-18

Family

ID=69849937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911049867.5A Active CN110930318B (en) 2019-10-31 2019-10-31 Low-dose CT image repairing and denoising method

Country Status (1)

Country Link
CN (1) CN110930318B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815692A (en) * 2020-07-15 2020-10-23 大连东软教育科技集团有限公司 Method, system and storage medium for generating artifact-free data and artifact-containing data
CN112330565A (en) * 2020-11-12 2021-02-05 中国人民解放军战略支援部队信息工程大学 Image denoising method in low-dose CT projection domain based on improved U-net
CN112767273A (en) * 2021-01-21 2021-05-07 中山大学 Low-dose CT image restoration method and system applying feature decoupling
CN113506353A (en) * 2021-07-22 2021-10-15 深圳高性能医疗器械国家研究院有限公司 Image processing method, system and application thereof
CN113570129A (en) * 2021-07-20 2021-10-29 武汉钢铁有限公司 Method for predicting strip steel pickling concentration and computer readable storage medium
WO2022000183A1 (en) * 2020-06-29 2022-01-06 深圳高性能医疗器械国家研究院有限公司 Ct image denoising system and method
WO2022120694A1 (en) * 2020-12-07 2022-06-16 深圳先进技术研究院 Low-dose image denoising network training method and low-dose image denoising method
CN114757847A (en) * 2022-04-24 2022-07-15 汕头市超声仪器研究所股份有限公司 Multi-information extraction extended U-Net and application method thereof in low-dose X-ray imaging
KR20220135683A (en) * 2021-03-31 2022-10-07 연세대학교 산학협력단 Apparatus for Denoising Low-Dose CT Images and Learning Apparatus and Method Therefor
CN115689923A (en) * 2022-10-27 2023-02-03 佛山读图科技有限公司 Low-dose CT image noise reduction system and method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180018757A1 (en) * 2016-07-13 2018-01-18 Kenji Suzuki Transforming projection data in tomography by means of machine learning
US20180225823A1 (en) * 2017-02-09 2018-08-09 Siemens Healthcare Gmbh Adversarial and Dual Inverse Deep Learning Networks for Medical Image Analysis
CN109410273A (en) * 2017-08-15 2019-03-01 西门子保健有限责任公司 According to the locating plate prediction of surface data in medical imaging
CN110060774A (en) * 2019-04-29 2019-07-26 赵蕾 A kind of thyroid nodule recognition methods based on production confrontation network



Also Published As

Publication number Publication date
CN110930318B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN110930318B (en) Low-dose CT image repairing and denoising method
Armanious et al. MedGAN: Medical image translation using GANs
Lei et al. Learning‐based CBCT correction using alternating random forest based on auto‐context model
CN110827216A (en) Multi-generator generation countermeasure network learning method for image denoising
Ko et al. Rigid and non-rigid motion artifact reduction in X-ray CT using attention module
CN109215014B (en) Training method, device and equipment of CT image prediction model and storage medium
Jiang et al. Low-dose CT lung images denoising based on multiscale parallel convolution neural network
Li et al. DECT-MULTRA: Dual-energy CT image decomposition with learned mixed material models and efficient clustering
Li et al. Low-dose CT image denoising with improving WGAN and hybrid loss function
Wang et al. Medical image inpainting with edge and structure priors
Wang et al. Adaptive convolutional dictionary network for CT metal artifact reduction
Huang et al. U‐net‐based deformation vector field estimation for motion‐compensated 4D‐CBCT reconstruction
Song et al. Denoising of MR and CT images using cascaded multi-supervision convolutional neural networks with progressive training
Lin et al. Batformer: Towards boundary-aware lightweight transformer for efficient medical image segmentation
Niu et al. Low-dimensional manifold-constrained disentanglement network for metal artifact reduction
Kim et al. Weakly-supervised progressive denoising with unpaired CT images
CN111340903B (en) Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image
CN108038840B (en) Image processing method and device, image processing equipment and storage medium
Li et al. Learning non-local perfusion textures for high-quality computed tomography perfusion imaging
Shi et al. A Virtual Monochromatic Imaging Method for Spectral CT Based on Wasserstein Generative Adversarial Network With a Hybrid Loss.
Shi et al. A semi‐supervised learning method of latent features based on convolutional neural networks for CT metal artifact reduction
Li et al. Low-dose CT image synthesis for domain adaptation imaging using a generative adversarial network with noise encoding transfer learning
Guan et al. Federated learning for medical image analysis: A survey
Chan et al. An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction
Mangalagiri et al. Toward generating synthetic CT volumes using a 3D-conditional generative adversarial network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant