CN110559009A - Method, system and medium for converting multi-modal low-dose CT into high-dose CT based on GAN

Info

Publication number: CN110559009A
Application number: CN201910832520.1A
Authority: CN (China)
Prior art keywords: dose, matrix, low, task, image
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN110559009B (en)
Inventors: 苏琬棋, 瞿毅力, 邓楚富, 王莹, 陈志广, 卢宇彤
Current assignee: Sun Yat Sen University / National Sun Yat Sen University (the listed assignees may be inaccurate)
Original assignee: National Sun Yat Sen University
Application filed by National Sun Yat Sen University
Events: application filed (priority to CN201910832520.1A); publication of CN110559009A; application granted; publication of CN110559009B

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/02: Devices for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B 6/03: Computerised tomographs
    • A61B 6/52: Devices using data or image processing specially adapted for radiation diagnosis

Abstract

The invention discloses a method, a system and a medium for converting multi-modal low-dose CT into high-dose CT based on GAN. The method comprises the steps of: inputting a low-dose CT of any modality; carrying out a two-dimensional discrete wavelet transform on the low-dose CT to obtain a plurality of decomposition results; and inputting the low-dose CT and its decomposition results into a trained encoder in the GAN network for encoding, then decoding the encoding result through a decoder in the GAN network to obtain the corresponding high-dose modality image. Building on the wide use of GANs for multi-domain conversion and on the decomposition capability of the traditional wavelet transform, the invention feeds the low-dose CT together with its wavelet transform results into the encoder of the trained GAN network for encoding, and then decodes the encoding result through the decoder of the GAN network to obtain the corresponding high-dose modality image, so that a low-dose CT image of any modality can conveniently be converted to generate a high-dose CT image.

Description

Method, system and medium for converting multi-modal low-dose CT into high-dose CT based on GAN
Technical Field
The invention relates to the field of medical image processing, and in particular to a method, a system and a medium for converting multi-modal low-dose CT (computed tomography) into high-dose CT based on a GAN (generative adversarial network), which are used to convert a low-dose CT of any modality into a high-dose CT by means of a generative adversarial network.
Background
As one of the mainstream modern medical imaging modalities, computed tomography (CT) has been widely used for clinical diagnosis in many fields. With the popularization and development of CT scanning, more and more people are concerned about the radiation hazard that CT scanning may pose to the human body. CT scans are generally accompanied by a relatively high x-ray radiation dose, and medical studies have shown that exposure to excessive x-ray radiation may induce metabolic abnormalities, cancer, leukemia or other genetic disorders. Researchers therefore wish to reduce the x-ray dose to reduce patient risk. The most common way to reduce the radiation dose is to reduce the x-ray flux by lowering the operating current and shortening the exposure time of the x-ray tube. However, the weaker the x-ray flux, the noisier the reconstructed CT, which degrades the signal-to-noise ratio and affects diagnostic performance. Obtaining high-quality images that can be used for clinical diagnosis while reducing the dose has therefore become an important research direction in the CT field in recent years. To address this inherent physical problem, many methods have been designed to improve the image quality of low-dose CT (LDCT). Conventional methods include model-based iterative reconstruction, pre-reconstruction filtering, and post-reconstruction image processing. Pre-reconstruction processing techniques are scanner-specific, and the raw data of commercial scanners is not easily made available to researchers; post-reconstruction processing techniques cannot accurately determine the noise distribution in the image domain, so such algorithms cannot achieve the best trade-off between structure preservation and noise reduction.
In recent years, with the development of deep learning in the field of image processing, research on solving the LDCT problem with deep learning techniques has begun, and generative adversarial networks have been widely applied to image conversion and image generation. A generative adversarial network (GAN) is a flexible deep neural network that can be trained either unsupervised or supervised. A GAN generally includes a generator, which can produce realistic images from random input, and a discriminator, which learns to distinguish real images from generated images and thereby guides the generator to produce more realistic images. In addition, some studies combine the wavelet transform, a common image denoising tool in traditional digital image processing, with deep learning for LDCT processing. In such studies, a multi-scale decomposition of the CT is performed with the wavelet transform to obtain the scale and direction information of the image, the decomposition result is then denoised with a convolutional neural network (CNN), and finally the denoised decomposition result is reconstructed with the inverse wavelet transform to obtain the denoised CT. In summary, current research on the LDCT problem basically involves two-domain conversion from a fixed low dose level to a high dose level; how to implement multi-domain conversion from multiple low dose levels to a high dose level is still a key technical problem to be solved.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: building on the wide use of GANs for multi-domain conversion and on the decomposition capability of the traditional wavelet transform, the invention feeds the low-dose CT together with its wavelet transform results into the encoder of the trained GAN network for encoding, and then decodes the encoding result through the decoder of the GAN network to obtain the corresponding high-dose modality image, so that a low-dose CT image of any modality can conveniently be converted to generate a high-dose CT image.
In order to solve the technical problems, the invention adopts the technical scheme that:
A method for converting multi-modal low-dose CT into high-dose CT based on GAN comprises the following implementation steps:
1) Inputting low-dose CT of any modality;
2) Carrying out two-dimensional discrete wavelet transform on the low-dose CT to obtain a plurality of decomposition results;
3) Inputting the low-dose CT and its plurality of decomposition results into a trained encoder in the GAN network for encoding, and decoding the encoding result through a decoder in the GAN network to obtain the corresponding high-dose modality image.
Optionally, performing the two-dimensional discrete wavelet transform on the low-dose CT in step 2) to obtain a plurality of decomposition results specifically means obtaining four results: the approximation matrix W_{i,1}, the horizontal matrix W_{i,1H}, the vertical matrix W_{i,1V} and the diagonal matrix W_{i,1D}.
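As an illustration of this decomposition, the following is a minimal Python sketch using the PyWavelets library; the Haar basis, the 512 × 512 slice size and the variable names are assumptions made for the example and are not prescribed by the patent.

```python
# Minimal sketch: one-level 2-D discrete wavelet transform of a low-dose CT slice.
import numpy as np
import pywt

low_dose_ct = np.random.rand(512, 512).astype(np.float32)  # placeholder for a real slice l_i

# dwt2 returns the approximation matrix and the (horizontal, vertical, diagonal) detail matrices.
W_i1, (W_i1H, W_i1V, W_i1D) = pywt.dwt2(low_dose_ct, "haar")

print(W_i1.shape)   # (256, 256): approximation matrix W_{i,1}
print(W_i1H.shape)  # (256, 256): horizontal matrix W_{i,1H}; W_{i,1V} and W_{i,1D} have the same shape
```

The low-dose CT slice and these four matrices are the inputs that are then fed jointly to the encoder.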
Optionally, the GAN network includes an Encoder for encoding images with different dose levels into the same feature space, a Decoder for decoding the encoding result to obtain a converted high-dose map, a Discriminator for discriminating whether an image is a real high-dose map, a label Discriminator (Discriminator_{label}) for discriminating between a reconstructed label and the original label, and a Task processor Task for processing the input image to obtain a task label.
Optionally, step 3) is preceded by a step of training the GAN network by using a supervised method, and the detailed steps include:
A1) Inputting a task-labeled high-dose map h and its label label;
A2) Training the Task processor Task to obtain a Task processor Task that has completed training; adding i different levels of Poisson noise to the high-dose modality image h to obtain registered low-dose modality images l_i with i dose levels;
A3) Randomly selecting one of the i low-dose modalities and performing conversion training between the low-dose modality image l_i and the high-dose modality image h, repeating the training under the guidance of the trained Task processor Task to obtain a trained Encoder Encoder and Decoder Decoder;
A4) Inputting test low-dose CT data l_test, converting the test data with the trained modules to obtain converted high-dose CT data, and processing the converted data with the task processor to obtain the task processing result of the converted data; meanwhile, processing the test low-dose CT data l_test with the task processor to obtain the task processing result of the test data;
A5) Comparing the task processing result of the conversion data with the task processing result of the test data, evaluating whether the conversion result is good, and if the conversion result is good, finishing the training; otherwise, skipping to execute the step A1) and continuing training.
Optionally, the step of training the Task processor Task in step A2) includes: performing a first-level decomposition of the high-dose map h with the two-dimensional discrete wavelet transform to obtain the approximation matrix W_{h,1}, horizontal matrix W_{h,1H}, vertical matrix W_{h,1V} and diagonal matrix W_{h,1D}; inputting the high-dose map h together with W_{h,1}, W_{h,1H}, W_{h,1V} and W_{h,1D} into the Task processor Task, which processes them to obtain a reconstructed task label label_r; and using the label Discriminator_{label} to perform discrimination learning on the label label of the high-dose map h and the reconstructed label label_r, where the former is learned as true and the latter as false;
The detailed steps of the conversion training between the low-dose modality image l_i and the high-dose modality image h in step A3) include (a simplified code sketch follows this list):
A3.1) Inputting the low-dose modality image l_i;
A3.2) Performing the two-dimensional discrete wavelet transform on the low-dose modality image l_i to obtain four results: the approximation matrix W_{i,1}, horizontal matrix W_{i,1H}, vertical matrix W_{i,1V} and diagonal matrix W_{i,1D};
A3.3) Inputting the low-dose modality image l_i and the four results, its approximation matrix W_{i,1}, horizontal matrix W_{i,1H}, vertical matrix W_{i,1V} and diagonal matrix W_{i,1D}, into the Encoder Encoder in the GAN network for encoding to obtain the encoding result Code_i, then decoding the encoding result Code_i through the Decoder Decoder in the GAN network to obtain the corresponding converted high-dose modality image h_{i,t};
A3.4) The Discriminator performs true/false discrimination learning with the high-dose modality image h as the positive sample and the converted high-dose modality image h_{i,t} as the negative sample;
A3.5) Performing the two-dimensional discrete wavelet transform on the converted high-dose modality image h_{i,t} to obtain the approximation matrix W_{it,1}, horizontal matrix W_{it,1H}, vertical matrix W_{it,1V} and diagonal matrix W_{it,1D}; performing the two-dimensional discrete wavelet transform on the high-dose modality image h to obtain the approximation matrix W_{h,1}, horizontal matrix W_{h,1H}, vertical matrix W_{h,1V} and diagonal matrix W_{h,1D};
A3.6) For the approximation matrix W_{it,1}, horizontal matrix W_{it,1H}, vertical matrix W_{it,1V} and diagonal matrix W_{it,1D} of the converted high-dose image h_{i,t} and the approximation matrix W_{h,1}, horizontal matrix W_{h,1H}, vertical matrix W_{h,1V} and diagonal matrix W_{h,1D} of the high-dose modality image h, computing, layer by layer, the wavelet self-supervision losses of the approximation, horizontal, vertical and diagonal matrices;
A3.7) Performing the two-dimensional discrete wavelet transform on the approximation matrix W_{i,1} obtained in step A3.2) to obtain the approximation matrix W_{i,11}, performing the two-dimensional discrete wavelet transform on the approximation matrix W_{it,1} obtained in step A3.5) to obtain the approximation matrix W_{it,11}, and computing the self-supervision loss between the second-order approximation matrices W_{i,11} and W_{it,11} of the low-dose map and the conversion map;
A3.8) Inputting the converted high-dose modality image h_{i,t} obtained in step A3.3) and its approximation matrix W_{it,1}, horizontal matrix W_{it,1H}, vertical matrix W_{it,1V} and diagonal matrix W_{it,1D} into the Task processor Task to obtain the task label label_{it} of the conversion map, and computing the task-label self-supervision loss between the label of the high-dose map and the conversion map's label label_{it};
A3.9) Computing the semantic consistency loss between the encoding results Code_i and Code_j (j ≠ i) of multiple low-dose modalities, and computing, layer by layer, the wavelet consistency loss of the two-dimensional discrete wavelet transform results of the multiple low-dose modality conversion maps.
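The following PyTorch-style sketch illustrates steps A3.1) to A3.6) under simplifying assumptions: the Encoder, Decoder and Discriminator are toy single-layer stand-ins, the wavelet transform is a fixed-kernel Haar DWT, and the layered wavelet self-supervision loss is taken as an L2 loss over the first-level sub-bands only. None of these choices is prescribed by the patent; the second-order approximation, task-label and consistency losses of steps A3.7) to A3.9) are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def haar_dwt2(x):
    """One-level 2-D Haar DWT of a single-channel batch; returns (A, H, V, D) sub-bands."""
    ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    k = torch.stack([ll, lh, hl, hh]).unsqueeze(1).to(x)    # (4, 1, 2, 2)
    bands = F.conv2d(x, k, stride=2)                        # (N, 4, H/2, W/2)
    return bands[:, 0:1], bands[:, 1:2], bands[:, 2:3], bands[:, 3:4]

# Toy stand-ins for the Encoder EC, Decoder DC and Discriminator D.
EC = nn.Conv2d(5, 8, 3, padding=1)   # low-dose image + 4 upsampled sub-bands -> code
DC = nn.Conv2d(8, 1, 3, padding=1)   # code -> converted high-dose image h_{i,t}
D = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                  nn.Flatten(), nn.Linear(4, 1))

opt_g = torch.optim.Adam(list(EC.parameters()) + list(DC.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

def train_step(l_i, h):
    """One conversion-training iteration for a registered pair (l_i, h)."""
    A, Hb, V, Dg = haar_dwt2(l_i)
    up = lambda t: F.interpolate(t, size=l_i.shape[-2:])     # bring sub-bands back to image size
    code_i = EC(torch.cat([l_i, up(A), up(Hb), up(V), up(Dg)], dim=1))   # A3.3) encode
    h_it = DC(code_i)                                                    # A3.3) decode

    # A3.4) Discriminator: real high-dose map h is positive, converted map h_it is negative.
    ones, zeros = torch.ones(h.size(0), 1), torch.zeros(h.size(0), 1)
    d_loss = F.binary_cross_entropy_with_logits(D(h), ones) + \
             F.binary_cross_entropy_with_logits(D(h_it.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # A3.5)-A3.6) Generator: adversarial guidance + wavelet self-supervision over sub-bands.
    adv = F.binary_cross_entropy_with_logits(D(h_it), ones)
    wav = sum(F.mse_loss(a, b) for a, b in zip(haar_dwt2(h_it), haar_dwt2(h)))
    g_loss = adv + wav
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example call with random tensors standing in for a registered pair (l_i, h).
print(train_step(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)))
```

In a full implementation, opt_g would cover all generator-side modules and the remaining loss terms of steps A3.7) to A3.9) would be added to g_loss.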
Optionally, step 3) is preceded by a step of training the GAN network based on an unlabeled registered low-dose/high-dose CT dataset A and a task-labeled low-dose CT dataset B, and the detailed steps include:
B1) Inputting task-labeled low-dose modality images l_B and their labels label and executing the supervised Task processor training procedure to obtain the trained Task processor Task; inputting the registered low-dose modality image l_A and high-dose modality image h_A and executing the supervised low-dose CT to high-dose CT training procedure to obtain the trained Encoder Encoder, Decoder Decoder and Discriminator, where the low-dose modality image l_A and the high-dose modality image h_A come from dataset A, and the low-dose modality image l_B and label come from dataset B;
B2) Recombining the trained Encoder Encoder, Decoder Decoder, Discriminator and Task processor Task to perform unsupervised combination training to obtain the trained Encoder Encoder, Decoder Decoder and Task processor Task;
B3) Inputting test low-dose CT data l_test, converting them with the trained Encoder Encoder, Decoder Decoder and Task processor Task to obtain converted high-dose CT data, and processing the converted data with the Task processor Task to obtain the task processing result of the converted data; meanwhile, processing the test low-dose CT data l_test with the Task processor Task to obtain the task processing result of the test data;
B4) Comparing the task processing result of the converted data with the task processing result of the test data to evaluate whether the conversion is good; if it is good, ending the training and exiting, otherwise jumping back to step B1).
Optionally, the detailed step of executing the supervised Task processor training procedure in step B1) to obtain the trained Task processor Task includes: performing a first-level decomposition of the low-dose modality image l_B of dataset B with the two-dimensional discrete wavelet transform to obtain the approximation matrix W_{B,1}, horizontal matrix W_{B,1H}, vertical matrix W_{B,1V} and diagonal matrix W_{B,1D}; inputting the low-dose modality image of dataset B together with W_{B,1}, W_{B,1H}, W_{B,1V} and W_{B,1D} into the Task processor Task, which processes them to obtain a reconstructed task label label_r; and using the label Discriminator_{label} to perform discrimination learning on the label of the low-dose modality image l_B and the reconstructed label label_r, where the former is learned as true and the latter as false;
The detailed steps of performing the supervised low-dose CT to high-dose CT training process in step B1) to obtain the trained Encoder, Decoder and Discriminator include:
B1.1) Inputting the low-dose modality image l_A of dataset A and performing a first-level decomposition of the low-dose modality image l_A with the two-dimensional discrete wavelet transform to obtain four results: the approximation matrix W_{A,1}, horizontal matrix W_{A,1H}, vertical matrix W_{A,1V} and diagonal matrix W_{A,1D};
B1.2) Inputting the low-dose modality image l_A together with the approximation matrix W_{A,1}, horizontal matrix W_{A,1H} and vertical matrix W_{A,1V} into the Encoder Encoder for encoding to obtain the encoding result Code_A; decoding the encoding result Code_A with the Decoder Decoder to obtain the conversion map h_{A,t} of the converted high-dose modality;
B1.3) The Discriminator performs true/false discrimination learning with the high-dose modality image h_A as the positive sample and the conversion map h_{A,t} as the negative sample;
B1.4) Performing a first-level decomposition of the conversion map h_{A,t} with the two-dimensional discrete wavelet transform to obtain the approximation matrix W_{At,1}, horizontal matrix W_{At,1H}, vertical matrix W_{At,1V} and diagonal matrix W_{At,1D}; performing a first-level decomposition of the high-dose modality image h_A with the two-dimensional discrete wavelet transform to obtain the approximation matrix W_{h,1}, horizontal matrix W_{h,1H}, vertical matrix W_{h,1V} and diagonal matrix W_{h,1D};
B1.5) For the first-level decomposition result of the conversion map h_{A,t} and the first-level decomposition result of the high-dose modality image h_A, computing, layer by layer, the wavelet self-supervision losses of the approximation, horizontal, vertical and diagonal matrices;
B1.6) Performing the two-dimensional discrete wavelet transform on the approximation matrix W_{A,1} to obtain the approximation matrix W_{A,11}, performing the two-dimensional discrete wavelet transform on the approximation matrix W_{At,1} to obtain the approximation matrix W_{At,11}, and computing the self-supervision loss between the second-order approximation matrices W_{A,11} and W_{At,11} of the low-dose map and the conversion map;
The detailed steps of recombining the trained Encoder Encoder, Decoder Decoder, Discriminator and Task processor Task in the step B2) to obtain the trained Encoder Encoder, Decoder Decoder and Task processor Task through unsupervised combination training include:
B2.1) Inputting the low-dose modality image l_B of dataset B and performing a first-level decomposition of l_B with the two-dimensional discrete wavelet transform to obtain four results: the approximation matrix W_{B,1}, horizontal matrix W_{B,1H}, vertical matrix W_{B,1V} and diagonal matrix W_{B,1D};
B2.2) Inputting the low-dose modality image l_B together with the approximation matrix W_{B,1}, horizontal matrix W_{B,1H}, vertical matrix W_{B,1V} and diagonal matrix W_{B,1D} into the Encoder Encoder for encoding to obtain the encoding result Code_B; decoding the encoding result Code_B with the Decoder Decoder to obtain the converted high-dose modality image h_{B,t};
B2.3) The Discriminator performs true/false discrimination learning with the high-dose modality image h_A of dataset A as the positive sample and the low-dose map l_B of dataset B and the conversion map h_{B,t} as the negative samples;
B2.4) Performing a first-level decomposition of the conversion map h_{B,t} with the two-dimensional discrete wavelet transform to obtain the approximation matrix W_{Bt,1}, horizontal matrix W_{Bt,1H}, vertical matrix W_{Bt,1V} and diagonal matrix W_{Bt,1D};
B2.5) Performing the two-dimensional discrete wavelet transform on the approximation matrix W_{B,1} to obtain the approximation matrix W_{B,11}, performing the two-dimensional discrete wavelet transform on the approximation matrix W_{Bt,1} to obtain the approximation matrix W_{Bt,11}, and computing the self-supervision loss between the second-order approximation matrices W_{B,11} and W_{Bt,11} of the low-dose modality image l_B and the conversion map h_{B,t} (see the sketch after this list);
B2.6) Inputting the conversion map h_{B,t} and the approximation matrix W_{Bt,1}, horizontal matrix W_{Bt,1H}, vertical matrix W_{Bt,1V} and diagonal matrix W_{Bt,1D} obtained from its first-level decomposition into the Task processor Task to obtain the task label label_{Bt} of the conversion map, and computing the task-label consistency loss between the original label of the low-dose modality image l_B and the conversion map's label label_{Bt};
B2.7) Inputting the low-dose modality image l_B and the approximation matrix W_{B,1}, horizontal matrix W_{B,1H}, vertical matrix W_{B,1V} and diagonal matrix W_{B,1D} obtained from its first-level decomposition into the Task processor Task to obtain the reconstructed task label label_{Br}, and computing the self-supervision loss between the original label of the low-dose modality image l_B and the reconstructed label label_{Br}.
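Below is a minimal Python sketch of the second-order approximation matrix self-supervision loss of step B2.5), assuming PyWavelets, a Haar basis and a mean-squared (L2) form of the loss; these choices and the random placeholder images are assumptions made for the example only.

```python
# Sketch of step B2.5): compare the second-order approximation matrices of the
# low-dose image l_B and its conversion map h_Bt (random placeholders here).
import numpy as np
import pywt

l_B = np.random.rand(512, 512).astype(np.float32)    # low-dose modality image (placeholder)
h_Bt = np.random.rand(512, 512).astype(np.float32)   # converted high-dose map (placeholder)

def second_order_approximation(img):
    """Apply the 2-D DWT twice, keeping only the approximation matrix each time."""
    W1, _ = pywt.dwt2(img, "haar")     # first-level approximation, e.g. W_{B,1}
    W11, _ = pywt.dwt2(W1, "haar")     # second-level approximation, e.g. W_{B,11}
    return W11

W_B11 = second_order_approximation(l_B)
W_Bt11 = second_order_approximation(h_Bt)
loss_supervision_2 = float(np.mean((W_B11 - W_Bt11) ** 2))   # assumed L2 (mean-squared) form
print(loss_supervision_2)
```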
In addition, the invention also provides a system for converting multi-modal low-dose CT into high-dose CT based on GAN, which comprises:
The input program unit is used for inputting low-dose CT of any modality;
The wavelet transformation program unit is used for carrying out two-dimensional discrete wavelet transformation on the low-dose CT to obtain a plurality of decomposition results;
And the conversion program unit is used for inputting the low-dose CT and a plurality of decomposition results thereof into a trained encoder in the GAN network for encoding, and then decoding the encoding results through a decoder in the GAN network to obtain a corresponding high-dose modal image.
The invention further provides a system for converting multi-modal low-dose CT into high-dose CT based on GAN, comprising a computer device that is programmed or configured to execute the steps of the method for converting multi-modal low-dose CT into high-dose CT based on GAN, or whose storage medium stores a computer program programmed or configured to execute the method for converting multi-modal low-dose CT into high-dose CT based on GAN.
The present invention further provides a computer-readable storage medium having stored thereon a computer program programmed or configured to perform the method for converting multi-modal low-dose CT into high-dose CT based on GAN.
Compared with the prior art, the invention has the following advantages: based on the wide development of GAN in multi-domain conversion and the decomposition capability of the traditional wavelet transformation, the invention inputs the low-dose CT and the wavelet transformation result thereof into the encoder in the trained GAN network together for encoding, and then decodes the encoding result through the decoder in the GAN network to obtain the corresponding high-dose modal image, thereby conveniently realizing the conversion of the low-dose CT image of any modality to generate the high-dose CT image.
Drawings
FIG. 1 is a schematic diagram of a basic process of an embodiment of the present invention.
Fig. 2 is a schematic diagram of a training flow of Task processor Task according to an embodiment of the present invention.
Fig. 3 is a diagram of a GAN network training architecture according to a first embodiment of the present invention.
Fig. 4 is a main flow chart of GAN network training according to a first embodiment of the present invention.
FIG. 5 is a diagram illustrating a training process of Task processor Task according to a second embodiment of the present invention.
Fig. 6 is a diagram of a training architecture for registering low-dose CT to high-dose CT according to a second embodiment of the present invention.
Fig. 7 is a diagram of a combined training architecture according to a second embodiment of the present invention.
Fig. 8 is a block diagram of the module reuse architecture according to the second embodiment of the present invention.
Fig. 9 is a main flow chart of GAN network training according to the second embodiment of the present invention.
Detailed Description
The first embodiment:
As shown in fig. 1, the implementation steps of the method for converting GAN-based multi-modal low-dose CT into high-dose CT in the present embodiment include:
1) Inputting a low-dose CT of any modality (denoted as l_i in the figure);
2) Performing two-dimensional discrete Wavelet transform (represented as Wavelet) on the low-dose CT to obtain a plurality of decomposition results;
3) Inputting the low-dose CT and its plurality of decomposition results into the trained encoder (denoted as EC in the figure) in the GAN network for encoding (the result is denoted as code_i in the figure), then decoding the encoding result code_i through the decoder (denoted as DC in the figure) in the GAN network to obtain the corresponding high-dose modality image (denoted as h_{i,t} in the figure).
As shown in fig. 1, performing the two-dimensional discrete wavelet transform on the low-dose CT in step 2) to obtain a plurality of decomposition results specifically means obtaining four results: the approximation matrix W_{i,1}, the horizontal matrix W_{i,1H}, the vertical matrix W_{i,1V} and the diagonal matrix W_{i,1D}.
In this embodiment, the GAN network includes an Encoder (abbreviated as EC in the figure) for encoding images with different dose levels into the same feature space, a Decoder (abbreviated as DC in the figure) for decoding the encoding result to obtain a converted high-dose map, a Discriminator (abbreviated as D in the figure) for discriminating whether an image is a real high-dose map, a label Discriminator_{label} (abbreviated as D_{label} in the figure) for discriminating whether a reconstructed label or an original label is true or false, and a Task processor Task for processing the input image to obtain a task label.
For the encoder and the decoder, this embodiment uses a smoothing filter for initialization. Smoothing filters are commonly used in conventional digital image processing for preprocessing tasks, where trivial details in an image are removed before object extraction: the pixels are averaged, and a filter template replaces the value of each pixel in the image with the average of the pixels in its neighborhood. This embodiment applies the idea to the initialization of the encoder and the decoder, so that the image can be blurred and denoised while the image information is retained. For example, for a 3 × 3 smoothing filter whose coefficients are all 1, with z_i being the gray value at the position i of the image covered by the filter, R is the average gray value of the pixels in the 3 × 3 neighborhood, and the formula is R = (z_1 + z_2 + ... + z_9) / 9.
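As an illustration of this initialization idea, the following sketch sets a convolution layer's weights to a 3 × 3 averaging (box) filter; the use of PyTorch, the single-channel layer and applying it only as an initial state are assumptions made for the example, not details fixed by the patent.

```python
# Sketch: initialize a convolution layer as a 3x3 box (averaging) filter, so that before
# training it replaces each pixel with the mean of its 3x3 neighbourhood, i.e. R = (z_1 + ... + z_9) / 9.
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    conv.weight.fill_(1.0 / 9.0)   # all nine coefficients equal

x = torch.rand(1, 1, 64, 64)       # placeholder image
smoothed = conv(x)                 # blurred / noise-reduced version of x
print(smoothed.shape)
```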
in the embodiment, a supervised learning method is adopted for the GAN network, the training data is high-dose CT data with task labels, a multi-mode low-dose and high-dose registration data set is constructed by adding noise, and low-dose CT images of any mode can be received and further converted to generate high-quality high-dose CT images. Meanwhile, the task processing can be further carried out on the converted high-dose CT image according to the task to which the data set belongs, and the effectiveness of the converted high-dose CT image is verified.
In this embodiment, a training step of the Task processor Task and a training step of the multi-modal conversion that generates high-dose CT from multi-modal low-dose CT are further included before step 3), where:
In order to verify that the original semantic information of the image is not changed after the low-dose map is converted into the high-dose map, this embodiment performs task verification on the converted high-dose map, to ensure that the conversion training achieves denoising without losing the task information of the image. The training step of the Task processor Task is as follows: as shown in fig. 2, a first-level decomposition of the high-dose map h is performed with the two-dimensional discrete wavelet transform to obtain the approximation matrix W_{h,1}, horizontal matrix W_{h,1H}, vertical matrix W_{h,1V} and diagonal matrix W_{h,1D}; the high-dose map h and W_{h,1}, W_{h,1H}, W_{h,1V}, W_{h,1D} are input together into the Task processor Task, which processes them to obtain a reconstructed task label label_r; the label Discriminator_{label} performs discrimination learning on the label label of the high-dose map h and the reconstructed label label_r, where the former is learned as true and the latter as false. The trained Task processor Task is used to verify the validity of the conversion map during the training of converting low-dose modality images into high-dose modality images. The task processor of the invention can directly process the modality map and its wavelet transform results, rather than the encoding result obtained after the modality map is encoded, so its range of application is wider.
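The following PyTorch-style sketch illustrates one training step of this Task processor / label-Discriminator scheme, assuming that the task label is a segmentation-like map, that the self-supervision term is a mean-squared error, and that all modules are toy single-layer stand-ins; these are assumptions made for the example, not the patent's actual networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

Task = nn.Conv2d(5, 1, 3, padding=1)       # (h + 4 upsampled sub-bands) -> reconstructed label
D_label = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                        nn.Flatten(), nn.Linear(4, 1))
opt_task = torch.optim.Adam(Task.parameters(), lr=1e-4)
opt_dlabel = torch.optim.Adam(D_label.parameters(), lr=1e-4)

def task_step(h, subbands, label):
    """One training step: reconstruct the task label and train the label discriminator."""
    up = [F.interpolate(b, size=h.shape[-2:]) for b in subbands]
    label_r = Task(torch.cat([h] + up, dim=1))               # reconstructed task label

    # Label discriminator: original label is learned as true, reconstruction as false.
    ones, zeros = torch.ones(h.size(0), 1), torch.zeros(h.size(0), 1)
    d_loss = F.binary_cross_entropy_with_logits(D_label(label), ones) + \
             F.binary_cross_entropy_with_logits(D_label(label_r.detach()), zeros)
    opt_dlabel.zero_grad(); d_loss.backward(); opt_dlabel.step()

    # Task processor: adversarial term + assumed MSE self-supervision against the original label.
    g_loss = F.binary_cross_entropy_with_logits(D_label(label_r), ones) + F.mse_loss(label_r, label)
    opt_task.zero_grad(); g_loss.backward(); opt_task.step()
    return d_loss.item(), g_loss.item()

# Example call with random stand-ins for h, its four sub-bands W_{h,1..1D}, and its label.
h, label = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
subbands = [torch.rand(1, 1, 32, 32) for _ in range(4)]
print(task_step(h, subbands, label))
```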
As shown in fig. 4, before step 3) this embodiment further includes a step of training the GAN network with a supervised method, and the detailed steps include:
A1) Inputting a task-labeled high-dose map h and its label label;
A2) Training the Task processor Task to obtain a Task processor Task that has completed training; adding i different levels of Poisson noise to the high-dose modality image h to obtain registered low-dose modality images l_i with i dose levels (a minimal simulation sketch is given after this list);
A3) Randomly selecting one of the i low-dose modalities and performing conversion training between the low-dose modality image l_i and the high-dose modality image h, using the trained Task processor Task to guide the training and obtain a trained encoder and decoder;
A4) Inputting test low-dose CT data l_test, converting the test data with the trained modules to obtain converted high-dose CT data, and processing the converted data with the task processor to obtain the task processing result of the converted data; meanwhile, processing the test low-dose CT data l_test with the task processor to obtain the task processing result of the test data;
A5) Comparing the task processing result of the converted data with the task processing result of the test data to evaluate whether the conversion result is good; if it is good, the training is finished; otherwise, jumping back to step A1) and continuing the training.
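A minimal Python sketch of the low-dose simulation idea in step A2), adding Poisson noise at several strengths to a high-dose image; the photon-count scaling, the three dose levels and the normalization used here are assumptions for the example, since the patent does not specify the noise model's parameters.

```python
# Sketch: derive i registered "low-dose" images from a high-dose image h by adding
# Poisson noise of different levels (smaller photon counts -> noisier images).
import numpy as np

h = np.random.rand(512, 512).astype(np.float32)         # placeholder high-dose image in [0, 1]
photon_counts = [1e5, 1e4, 1e3]                          # i = 3 assumed dose levels

rng = np.random.default_rng(0)
low_dose_images = []
for n0 in photon_counts:
    counts = rng.poisson(h * n0)                         # photon statistics at this dose level
    low_dose_images.append((counts / n0).astype(np.float32))  # registered low-dose image l_i
print([img.shape for img in low_dose_images])
```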
As shown in fig. 3, the step of training the Task processor Task in step A2) includes: performing a first-level decomposition of the high-dose map h with the two-dimensional discrete wavelet transform to obtain the approximation matrix W_{h,1}, horizontal matrix W_{h,1H}, vertical matrix W_{h,1V} and diagonal matrix W_{h,1D}; inputting the high-dose map h together with W_{h,1}, W_{h,1H}, W_{h,1V} and W_{h,1D} into the Task processor Task, which processes them to obtain a reconstructed task label label_r; and using the label Discriminator_{label} to perform discrimination learning on the label label of the high-dose map h and the reconstructed label label_r, where the former is learned as true and the latter as false.
As shown in fig. 3, the detailed steps of the conversion training between the low-dose modality image l_i and the high-dose modality image h in step A3) include:
A3.1) Inputting the low-dose modality image l_i;
A3.2) Performing the two-dimensional discrete wavelet transform on the low-dose modality image l_i to obtain four results: the approximation matrix W_{i,1}, horizontal matrix W_{i,1H}, vertical matrix W_{i,1V} and diagonal matrix W_{i,1D};
A3.3) Inputting the low-dose modality image l_i and the four results, its approximation matrix W_{i,1}, horizontal matrix W_{i,1H}, vertical matrix W_{i,1V} and diagonal matrix W_{i,1D}, into the Encoder Encoder in the GAN network for encoding to obtain the encoding result Code_i, then decoding the encoding result Code_i through the Decoder Decoder in the GAN network to obtain the corresponding converted high-dose modality image h_{i,t};
A3.4) The Discriminator performs true/false discrimination learning with the high-dose modality image h as the positive sample and the converted high-dose modality image h_{i,t} as the negative sample;
A3.5) Performing the two-dimensional discrete wavelet transform on the converted high-dose modality image h_{i,t} to obtain the approximation matrix W_{it,1}, horizontal matrix W_{it,1H}, vertical matrix W_{it,1V} and diagonal matrix W_{it,1D}; performing the two-dimensional discrete wavelet transform on the high-dose modality image h to obtain the approximation matrix W_{h,1}, horizontal matrix W_{h,1H}, vertical matrix W_{h,1V} and diagonal matrix W_{h,1D};
A3.6) For the approximation matrix W_{it,1}, horizontal matrix W_{it,1H}, vertical matrix W_{it,1V} and diagonal matrix W_{it,1D} of the converted high-dose image h_{i,t} and the approximation matrix W_{h,1}, horizontal matrix W_{h,1H}, vertical matrix W_{h,1V} and diagonal matrix W_{h,1D} of the high-dose modality image h, computing, layer by layer, the wavelet self-supervision losses of the approximation, horizontal, vertical and diagonal matrices (multi-level wavelet loss);
A3.7) Performing the two-dimensional discrete wavelet transform on the approximation matrix W_{i,1} obtained in step A3.2) to obtain the approximation matrix W_{i,11}, performing the two-dimensional discrete wavelet transform on the approximation matrix W_{it,1} obtained in step A3.5) to obtain the approximation matrix W_{it,11}, and computing the self-supervision loss (L2 loss) between the second-order approximation matrices W_{i,11} and W_{it,11} of the low-dose map and the conversion map;
A3.8) Inputting the converted high-dose modality image h_{i,t} obtained in step A3.3) and its approximation matrix W_{it,1}, horizontal matrix W_{it,1H}, vertical matrix W_{it,1V} and diagonal matrix W_{it,1D} into the Task processor Task to obtain the task label label_{it} of the conversion map, and computing the task-label self-supervision loss (L2 loss) between the label of the high-dose map and the conversion map's label label_{it};
A3.9) Computing the semantic consistency loss between the encoding results Code_i and Code_j (j ≠ i) of multiple low-dose modalities, and computing, layer by layer, the wavelet consistency loss (multi-level wavelet loss) of the two-dimensional discrete wavelet transform results of the multiple low-dose modality conversion maps.
In this embodiment, the loss functions in the GAN network are designed as follows:
1. The loss of the task processor training part is divided into a label discriminator loss part and a generator loss part.
The label discriminator loss is:
In the above formula, loss_{Discriminator,label} is the label discriminator loss, Discriminator_{label}(label) is the result of the label Discriminator discriminating the label label, Discriminator_{label}(label_r) is the result of the label Discriminator discriminating the label label_r, label is the label of the high-dose map h, and label_r is the reconstructed label of the high-dose map h.
the generator loss consists of a resistance loss and an unsupervised loss, which can be expressed as:
lossGenerator,label=lossAdverserail,label+losssupervision,label
In the above formula, lossGenerator,labelLoss of generatorAdverserail,labelLoss of antagonism, losssupervision,labelFor self-supervision of losses, Discriminotorlabel(labelr) Identifying a label for a label identifierrLabel is a label of the high dose chart h, labelrIs the reconstruction tag of the high dose map h.
2. The discriminator loss of the training architecture part is updated independently; the specific loss is:
In the above formula, loss_{Discriminator} is the discriminator loss of the training architecture part, Discriminator(h_{i,t}) is the result of the Discriminator discriminating the converted high-dose image h_{i,t}, i denotes modality i, and Discriminator(h) is the result of the Discriminator discriminating the high-dose map h.
3. The other modules of this embodiment are updated and trained through an optimizer, and the loss terms comprise the discriminator-guided loss provided by the discriminator, the layered wavelet self-supervision loss, the layered wavelet consistency loss, the wavelet second-order approximation matrix self-supervision loss, the semantic consistency loss, the task-label supervision loss and the task-label consistency loss.
Discriminator guided loss:
In the above formula, Discriminator(h_{i,t}) is the result of the Discriminator discriminating the converted high-dose image h_{i,t}, and i denotes modality i.
Layered self-supervision loss of wavelets:
In the above formula, the approximation matrix W_{it,1}, horizontal matrix W_{it,1H}, vertical matrix W_{it,1V} and diagonal matrix W_{it,1D} are the two-dimensional discrete wavelet transform results of the converted high-dose image h_{i,t}, the approximation matrix W_{h,1}, horizontal matrix W_{h,1H}, vertical matrix W_{h,1V} and diagonal matrix W_{h,1D} are the two-dimensional discrete wavelet transform results of the high-dose modality image h, and i denotes modality i.
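The printed formula itself is not reproduced in this text. A plausible form consistent with the description, assuming a squared L2 distance over the four first-level sub-bands (the exact distance and any per-level weighting are assumptions, not taken from the patent), is:

loss_{supervision,1} = Σ_{X ∈ {1, 1H, 1V, 1D}} ||W_{it,X} - W_{h,X}||_2^2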
Layered consistency loss of wavelets:
In the above formula, the approximation matrix W_{it,1}, horizontal matrix W_{it,1H}, vertical matrix W_{it,1V} and diagonal matrix W_{it,1D} are the two-dimensional discrete wavelet transform results of the converted high-dose image h_{i,t}, the approximation matrix W_{jt,1}, horizontal matrix W_{jt,1H}, vertical matrix W_{jt,1V} and diagonal matrix W_{jt,1D} are the two-dimensional discrete wavelet transform results of the converted high-dose image h_{j,t}, i denotes modality i, j denotes modality j, and i ≠ j.
Self-supervision loss of wavelet second-order approximation matrix:
In the above formula, W_{i,11} is the approximation matrix obtained by performing the two-dimensional discrete wavelet transform on the approximation matrix W_{i,1} of the low-dose modality image l_i, and W_{it,11} is the approximation matrix obtained by performing the two-dimensional discrete wavelet transform on the approximation matrix W_{it,1}.
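Since step A3.7) names this an L2 loss, an assumed concrete form (its normalization is not specified in this text) is:

loss_{supervision,2} = ||W_{i,11} - W_{it,11}||_2^2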
Loss of semantic consistency:
In the above formula, Code_i is the encoding result obtained by inputting the low-dose modality image l_i and the four results, its approximation matrix W_{i,1}, horizontal matrix W_{i,1H}, vertical matrix W_{i,1V} and diagonal matrix W_{i,1D}, into the Encoder Encoder of the trained GAN network, Code_j is the encoding result obtained by inputting the low-dose modality image l_j and the four results, its approximation matrix W_{j,1}, horizontal matrix W_{j,1H}, vertical matrix W_{j,1V} and diagonal matrix W_{j,1D}, into the Encoder Encoder of the trained GAN network, i denotes modality i, j denotes modality j, and i ≠ j.
Task tag supervision loss:
In the above formula, label is the label of the high-dose map h, label_{it} is the task label of the conversion map obtained by inputting the converted high-dose image h_{i,t} and its approximation matrix W_{it,1}, horizontal matrix W_{it,1H}, vertical matrix W_{it,1V} and diagonal matrix W_{it,1D} into the Task processor Task, and i denotes modality i;
Task tag consistency loss:
In the above formula, label_{it} is the task label of the conversion map obtained by inputting the converted high-dose image h_{i,t} and its approximation matrix W_{it,1}, horizontal matrix W_{it,1H}, vertical matrix W_{it,1V} and diagonal matrix W_{it,1D} into the Task processor Task, label_{jt} is the task label of the conversion map obtained by inputting the converted high-dose image h_{j,t} and its approximation matrix W_{jt,1}, horizontal matrix W_{jt,1H}, vertical matrix W_{jt,1V} and diagonal matrix W_{jt,1D} into the Task processor Task, i denotes modality i, j denotes modality j, and i ≠ j;
The total loss of the generator consisting of the encoder and the decoder is thus the sum of the above loss terms, i.e.:
loss_{Generator} = loss_{Adversarial} + loss_{supervision,1} + loss_{consistency,1} + loss_{supervision,2} + loss_{code,consistency} + loss_{label} + loss_{label,consistency}
The above training process completes the training of multi-modal low-dose conversion for generating the high-dose modality, where the encoder and the decoder are the training target products of this embodiment. Unlike some previous LDCT (low-dose CT) studies that only rely on physicians' empirical visual quality evaluation of the high-dose map converted from low-dose data, this embodiment performs task verification on the converted high-dose map using the task and labels of the dataset, to ensure that the high-dose map obtained by the conversion training is valid. In addition, the method of this embodiment directly decodes to obtain the converted high-dose modality image, whereas previous research needs to perform fusion reconstruction on the wavelet images produced by the neural network to obtain the converted high-dose map; the image decomposition method of this embodiment therefore does not require the decomposition process to be reversible.
In addition, the present embodiment further provides a system for converting multi-modality low-dose CT into high-dose CT based on GAN, including:
The input program unit is used for inputting low-dose CT of any modality;
The wavelet transformation program unit is used for carrying out two-dimensional discrete wavelet transformation on the low-dose CT to obtain a plurality of decomposition results;
And the conversion program unit is used for inputting the low-dose CT and a plurality of decomposition results thereof into a trained encoder in the GAN network for encoding, and then decoding the encoding results through a decoder in the GAN network to obtain a corresponding high-dose modal image.
In addition, this embodiment further provides a system for converting multi-modal low-dose CT into high-dose CT based on GAN, which includes a computer device that is programmed or configured to execute the steps of the method for converting multi-modal low-dose CT into high-dose CT based on GAN according to this embodiment, or whose storage medium stores a computer program programmed or configured to execute the method for converting multi-modal low-dose CT into high-dose CT based on GAN according to this embodiment.
Furthermore, this embodiment also provides a computer-readable storage medium, which stores a computer program programmed or configured to execute the method for converting multi-modal low-dose CT into high-dose CT based on GAN according to this embodiment.
The second embodiment:
This embodiment is substantially the same as the first embodiment; the main difference is that the GAN network is trained differently. The main reason is that the task-labeled high-dose CT training dataset required in the first embodiment is difficult to obtain, and most existing public datasets are either: an unlabeled registered low-dose/high-dose CT dataset (dataset A); or a task-labeled low-dose CT dataset (dataset B). While continuing to use the modular method of the first embodiment, this embodiment therefore designs a supplementary scheme that adopts a hybrid supervised learning method: supervised task processor training is performed on dataset B, supervised low-dose CT to high-dose CT training is performed on dataset A, the trained modules are then combined and trained unsupervised on the labeled dataset B, and finally high-quality high-dose CT images are generated by conversion.
As shown in fig. 9, this embodiment further includes, before step 3), a step of training the GAN network based on the unlabeled registered low-dose/high-dose CT dataset A and the task-labeled low-dose CT dataset B, and the detailed steps include:
B1) Inputting task-labeled low-dose modality images l_B and their labels label and executing the supervised Task processor training procedure to obtain the trained Task processor Task; inputting the registered low-dose modality image l_A and high-dose modality image h_A and executing the supervised low-dose CT to high-dose CT training procedure to obtain the trained Encoder Encoder, Decoder Decoder and Discriminator, where the low-dose modality image l_A and the high-dose modality image h_A come from dataset A, and the low-dose modality image l_B and label come from dataset B;
B2) Recombining the trained Encoder Encoder, Decoder Decoder, Discriminator and Task processor Task to perform unsupervised combination training to obtain the trained Encoder Encoder, Decoder Decoder and Task processor Task;
B3) Inputting test low-dose CT data, converting them with the trained Encoder Encoder, Decoder Decoder and Task processor Task to obtain converted high-dose CT data, and processing the converted data with the Task processor Task to obtain the task processing result of the converted data; meanwhile, processing the test low-dose CT data with the Task processor Task to obtain the task processing result of the test data;
B4) Comparing the task processing result of the converted data with the task processing result of the test data to evaluate whether the conversion is good; if it is good, ending the training and exiting, otherwise jumping back to step B1).
As shown in fig. 5, the detailed step of executing the supervised Task processor training procedure in step B1) to obtain the trained Task processor Task includes: performing a first-level decomposition of the low-dose modality image of dataset B with the two-dimensional discrete wavelet transform to obtain the approximation matrix W_{B,1}, horizontal matrix W_{B,1H}, vertical matrix W_{B,1V} and diagonal matrix W_{B,1D}; inputting the low-dose modality image of dataset B together with W_{B,1}, W_{B,1H}, W_{B,1V} and W_{B,1D} into the Task processor Task, which processes them to obtain a reconstructed task label label_r; and using the label Discriminator_{label} to perform discrimination learning on the label of the low-dose modality image l_B and the reconstructed label label_r, where the former is learned as true and the latter as false.
As shown in fig. 6, the detailed steps of performing the supervised low-dose CT to high-dose CT training procedure in step B1) to obtain the trained Encoder, Decoder, Discriminator include:
B1.1) Inputting the low-dose modality image l_A of dataset A and performing a first-level decomposition of the low-dose modality image l_A with the two-dimensional discrete wavelet transform to obtain four results: the approximation matrix W_{A,1}, horizontal matrix W_{A,1H}, vertical matrix W_{A,1V} and diagonal matrix W_{A,1D};
B1.2) Inputting the low-dose modality image l_A together with the approximation matrix W_{A,1}, horizontal matrix W_{A,1H} and vertical matrix W_{A,1V} into the Encoder Encoder for encoding to obtain the encoding result Code_A; decoding the encoding result Code_A with the Decoder Decoder to obtain the conversion map h_{A,t} of the converted high-dose modality;
B1.3) The Discriminator performs true/false discrimination learning with the high-dose modality image h_A as the positive sample and the conversion map h_{A,t} as the negative sample;
B1.4) Performing a first-level decomposition of the conversion map h_{A,t} with the two-dimensional discrete wavelet transform to obtain the approximation matrix W_{At,1}, horizontal matrix W_{At,1H}, vertical matrix W_{At,1V} and diagonal matrix W_{At,1D}; performing a first-level decomposition of the high-dose modality image h_A with the two-dimensional discrete wavelet transform to obtain the approximation matrix W_{h,1}, horizontal matrix W_{h,1H}, vertical matrix W_{h,1V} and diagonal matrix W_{h,1D};
B1.5) For the first-level decomposition result of the conversion map h_{A,t} and the first-level decomposition result of the high-dose modality image h_A, computing, layer by layer, the wavelet self-supervision losses of the approximation, horizontal, vertical and diagonal matrices;
B1.6) Performing the two-dimensional discrete wavelet transform on the approximation matrix W_{A,1} to obtain the approximation matrix W_{A,11}, performing the two-dimensional discrete wavelet transform on the approximation matrix W_{At,1} to obtain the approximation matrix W_{At,11}, and computing the self-supervision loss between the second-order approximation matrices W_{A,11} and W_{At,11} of the low-dose map and the conversion map.
In the supervised low-dose CT to high-dose CT conversion of the GAN network based on dataset A, this part of the training primarily utilizes registered low-dose and high-dose data to train the encoder and decoder to convert low-dose data into high-dose data.
As shown in fig. 7, in step B2), the detailed steps of recombining the trained Encoder, Decoder, Discriminator, and Task processor Task to perform unsupervised combination training to obtain the trained Encoder, Decoder, and Task processor Task in this embodiment include:
B2.1) Inputting the low-dose modality image l_B of dataset B and performing a first-level decomposition of l_B with the two-dimensional discrete wavelet transform to obtain four results: the approximation matrix W_{B,1}, horizontal matrix W_{B,1H}, vertical matrix W_{B,1V} and diagonal matrix W_{B,1D};
B2.2) Inputting the low-dose modality image l_B together with the approximation matrix W_{B,1}, horizontal matrix W_{B,1H}, vertical matrix W_{B,1V} and diagonal matrix W_{B,1D} into the Encoder Encoder for encoding to obtain the encoding result Code_B; decoding the encoding result Code_B with the Decoder Decoder to obtain the converted high-dose modality image h_{B,t};
B2.3) The Discriminator performs true/false discrimination learning with the high-dose modality image h_A of dataset A as the positive sample and the low-dose map l_B of dataset B and the conversion map h_{B,t} as the negative samples;
B2.4) Performing a first-level decomposition of the conversion map h_{B,t} with the two-dimensional discrete wavelet transform to obtain the approximation matrix W_{Bt,1}, horizontal matrix W_{Bt,1H}, vertical matrix W_{Bt,1V} and diagonal matrix W_{Bt,1D};
B2.5) Performing the two-dimensional discrete wavelet transform on the approximation matrix W_{B,1} to obtain the approximation matrix W_{B,11}, performing the two-dimensional discrete wavelet transform on the approximation matrix W_{Bt,1} to obtain the approximation matrix W_{Bt,11}, and computing the self-supervision loss between the second-order approximation matrices W_{B,11} and W_{Bt,11} of the low-dose modality image l_B and the conversion map h_{B,t};
B2.6) Inputting the conversion map h_{B,t} and the approximation matrix W_{Bt,1}, horizontal matrix W_{Bt,1H}, vertical matrix W_{Bt,1V} and diagonal matrix W_{Bt,1D} obtained from its first-level decomposition into the Task processor Task to obtain the task label label_{Bt} of the conversion map, and computing the task-label consistency loss between the original label of the low-dose modality image l_B and the conversion map's label label_{Bt};
B2.7) Inputting the low-dose modality image l_B and the approximation matrix W_{B,1}, horizontal matrix W_{B,1H}, vertical matrix W_{B,1V} and diagonal matrix W_{B,1D} obtained from its first-level decomposition into the Task processor Task to obtain the reconstructed task label label_{Br}, and computing the self-supervision loss between the original label of the low-dose modality image l_B and the reconstructed label label_{Br}.
The above is the unsupervised combined training process: the trained Encoder Encoder, Decoder Decoder and Task processor Task are combined, and these modules continue to be trained with datasets A and B together. The low dose and the high dose do not need to be registered during this training and come from different datasets; the discriminator loss, the wavelet second-order approximation matrix self-supervision loss, the task-label consistency loss and the task-label self-supervision loss constrain the encoder and decoder so that the low-dose modality images of dataset B can be converted into high-dose modality images. This embodiment combines the Encoder Encoder, Decoder Decoder, Discriminator and Task processor Task trained in the previous two trainings and continues training these modules: the low-dose data l_B of dataset B is input and converted to obtain the high-dose CT image h_{B,t}; the real high-dose CT image h_A of dataset A serves as the positive sample of the discriminator, and the low-dose CT image l_B and the converted high-dose CT image h_{B,t} serve as the negative samples of the discriminator for unsupervised adversarial learning. In addition, the Task processor Task performs label segmentation on l_B and h_{B,t}, learning the segmentation ability for the high-dose map while keeping its segmentation ability for the low-dose map, and the segmentation result is compared with the label of the low-dose modality image l_B, so that the Encoder Encoder and the Decoder Decoder retain lesion information while converting and denoising.
After the training is completed, the present embodiment performs a conversion test on the test data in the data set B, and converts the low-dose test data into high-dose data by using the trained Encoder and Decoder, as shown in fig. 8.
For the test data of dataset B, this embodiment uses the trained Task processor Task to segment both the test data and its conversion results, compares the two segmentation results, and checks whether the converted data retains the lesion information, thereby verifying whether the converted data is valid. In addition, the trained Task processor Task can be used to perform task processing on dataset A to obtain task labels of reference value. For the converted data of this embodiment, validity verification is done with the trained Task processor Task: the low-dose test data is first task-processed to obtain a processing result, the test data is then converted to obtain high-dose data, the converted high-dose data is task-processed to obtain the task processing result of the converted data, and finally the similarity of the two processing results is evaluated; if the similarity meets expectations, the generated data is good, otherwise the network structure of the modules is adjusted and retrained.
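A minimal Python sketch of such a similarity check, assuming the task processor outputs binary segmentation masks and using the Dice coefficient as the similarity measure; the metric and the placeholder masks are assumptions for the example, as the patent only requires that the two processing results be compared.

```python
# Sketch: compare the segmentation produced from the low-dose test data with the one
# produced from its converted high-dose counterpart, using the Dice coefficient.
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity of two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return float(2.0 * inter / (mask_a.sum() + mask_b.sum() + eps))

# Placeholder masks standing in for Task(l_test) and Task(convert(l_test)).
seg_low = np.random.rand(512, 512) > 0.5
seg_converted = np.random.rand(512, 512) > 0.5

print(f"Dice similarity: {dice(seg_low, seg_converted):.3f}")  # high similarity suggests lesion info is preserved
```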
The loss function of this embodiment is designed as follows:
1. the design of the supervised task processor training partial loss function is exactly the same as the first embodiment.
2. The loss function of the supervised low-dose CT conversion high-dose CT training part can be divided into two parts, namely discriminator loss and generator loss, and the discriminator loss and the generator loss are updated independently.
2.1, discriminator loss:
In the above formula, Discriminator(h_{A,t}) is the result of the Discriminator discriminating the conversion map h_{A,t} of the low-dose modality image l_A in dataset A, and Discriminator(h_A) is the result of the Discriminator discriminating the high-dose modality image h_A of dataset A.
2.2. The generator loss terms comprise the discriminator-guided loss provided by the discriminator, the layered wavelet self-supervision loss and the wavelet second-order approximation matrix self-supervision loss. The specific formulas are as follows:
discriminator guided loss:
In the above formula, Discriminator(h_{A,t}) is the result of the Discriminator discriminating the conversion map h_{A,t} of the low-dose modality image l_A in dataset A.
Layered self-supervision loss of wavelets:
In the above formula, the approximation matrix W_{h,1}, horizontal matrix W_{h,1H}, vertical matrix W_{h,1V} and diagonal matrix W_{h,1D} are the results of the first-level decomposition of the high-dose modality image h_A with the two-dimensional discrete wavelet transform, and the approximation matrix W_{At,1}, horizontal matrix W_{At,1H}, vertical matrix W_{At,1V} and diagonal matrix W_{At,1D} are the results of the first-level decomposition of the conversion map h_{A,t} with the two-dimensional discrete wavelet transform.
Self-supervision loss of wavelet second-order approximation matrix:
In the above formula, W_{A,11} is the approximation matrix obtained by performing the two-dimensional discrete wavelet transform on the approximation matrix W_{A,1}, and W_{At,11} is the approximation matrix obtained by performing the two-dimensional discrete wavelet transform on the approximation matrix W_{At,1}.
the generator loss term is specifically the sum of the above three equations, which can be expressed as:
loss_{Generator,A} = loss_{Adversarial,A} + loss_{supervision,A,1} + loss_{supervision,A,2}
3. The unsupervised combined training part can be divided into two parts of discriminator loss and generator loss, and the two parts are updated independently.
3.1, discriminator loss:
In the above formula, Discriminator(h_{B,t}) is the result of the Discriminator discriminating the high-dose CT image h_{B,t} obtained by converting the low-dose modality image l_B of dataset B, Discriminator(l_B) is the result of the Discriminator discriminating the low-dose modality image l_B of dataset B, and Discriminator(h_A) is the result of the Discriminator discriminating the high-dose modality image h_A of dataset A.
3.2. Generator loss:
The generator loss terms comprise the discriminator-guided loss provided by the discriminator, the wavelet second-order approximation matrix self-supervision loss, the task-label consistency loss and the task-label self-supervision loss. The specific formulas are as follows:
Discriminator guided loss:
In the above formula, Discriminator(h_{A,t}) is the result of the Discriminator discriminating the conversion map h_{A,t} of the low-dose modality image l_A in dataset A.
Wavelet second-order approximation matrix self-supervision loss:
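Under the same illustrative L1 assumption:

loss_{supervision,B} = ||W_{Bt,11} - W_{B,11}||_1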
In the above formula, W_{B,11} is the approximation matrix obtained by applying the two-dimensional discrete wavelet transform to the approximation matrix W_{B,1}, and W_{Bt,11} is the approximation matrix obtained by applying the two-dimensional discrete wavelet transform to the approximation matrix W_{Bt,1}.
Task label consistency loss:
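Assuming, for illustration only, an L1 comparison between the two labels (the source may use a different distance):

loss_{label,consistency} = ||label_{Bt} - label||_1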
In the above equation, label is the original label of the low-dose modality image l_B of data set B, and label_{Bt} is the conversion-map label obtained by inputting the conversion map h_{B,t} together with the approximation matrix W_{Bt,1}, horizontal matrix W_{Bt,1H}, vertical matrix W_{Bt,1V} and diagonal matrix W_{Bt,1D} obtained by its first-order decomposition into the Task processor Task;
Task label self-supervision loss:
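Under the same illustrative assumption:

loss_{label} = ||label_{Br} - label||_1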
In the above equation, label is the original label of the low-dose modality image l_B of data set B, and label_{Br} is the reconstructed task label obtained by inputting the low-dose modality image l_B together with the approximation matrix W_{B,1}, horizontal matrix W_{B,1H}, vertical matrix W_{B,1V} and diagonal matrix W_{B,1D} obtained by its first-order decomposition into the Task processor Task;
The generator loss is the sum of the above losses, and can be expressed as:
loss_{Generator,B} = loss_{Adversarial,B} + loss_{supervision,B} + loss_{label,consistency} + loss_{label}
In addition, the present embodiment further provides a system for converting multi-modal low-dose CT into high-dose CT based on GAN, including:
The input program unit is used for inputting low-dose CT of any modality;
The wavelet transformation program unit is used for carrying out two-dimensional discrete wavelet transformation on the low-dose CT to obtain a plurality of decomposition results;
And the conversion program unit is used for inputting the low-dose CT and its plurality of decomposition results into the trained encoder in the GAN network for encoding, and then decoding the encoding result through the decoder in the GAN network to obtain the corresponding high-dose modality image; a minimal sketch of this inference pipeline is given below.
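The following is a minimal, non-authoritative sketch of how such program units could be wired together for inference. The module names, network layers, wavelet basis ("haar" via pywt.dwt2) and the way the image is combined with its wavelet sub-bands (here upsampled and stacked as channels) are all assumptions, since this text does not specify them.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Placeholder encoder: maps the 5-channel input (image + 4 sub-bands) to a feature code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Placeholder decoder: maps the code back to a single-channel high-dose-like image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, code):
        return self.net(code)


def convert_low_dose_ct(low_dose, encoder, decoder):
    """Convert one low-dose CT slice (2-D numpy array) into a high-dose-like slice."""
    # First-level 2-D discrete wavelet decomposition: approximation, horizontal,
    # vertical and diagonal matrices (W_1, W_1H, W_1V, W_1D).
    w1, (w1h, w1v, w1d) = pywt.dwt2(low_dose, "haar")
    img = torch.from_numpy(low_dose).float()[None, None]                   # (1, 1, H, W)
    bands = torch.from_numpy(np.stack([w1, w1h, w1v, w1d])).float()[None]  # (1, 4, H/2, W/2)
    bands = F.interpolate(bands, size=img.shape[-2:], mode="nearest")      # match image size
    x = torch.cat([img, bands], dim=1)                                     # (1, 5, H, W)
    with torch.no_grad():
        code = encoder(x)            # encode the image together with its decomposition results
        high_dose = decoder(code)    # decode the code into the converted high-dose image
    return high_dose[0, 0].numpy()


if __name__ == "__main__":
    slice_ld = np.random.rand(128, 128).astype(np.float32)  # stand-in low-dose slice
    out = convert_low_dose_ct(slice_ld, Encoder(), Decoder())
    print(out.shape)  # (128, 128)
```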
In addition, the present embodiment further provides a system for converting multi-modal low-dose CT into high-dose CT based on GAN, which includes a computer device programmed or configured to execute the steps of the method for converting multi-modal low-dose CT into high-dose CT based on GAN of the present embodiment, or whose storage medium stores a computer program programmed or configured to execute that method.
Furthermore, the present embodiment also provides a computer-readable storage medium storing a computer program programmed or configured to execute the method for converting multi-modal low-dose CT into high-dose CT based on GAN of the present embodiment.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.

Claims (10)

1. A method for converting multi-modal low-dose CT to high-dose CT based on GAN is characterized by comprising the following implementation steps:
1) Inputting low-dose CT of any modality;
2) Carrying out two-dimensional discrete wavelet transform on the low-dose CT to obtain a plurality of decomposition results;
3) And inputting the low-dose CT and a plurality of decomposition results thereof into a trained coder in the GAN network for coding, and decoding the coding results through a decoder in the GAN network to obtain a corresponding high-dose modal image.
2. The method according to claim 1, wherein the step 2) of performing two-dimensional discrete wavelet transform on the low-dose CT to obtain a plurality of decomposition results specifically means obtaining four results: an approximation matrix W_{i,1}, a horizontal matrix W_{i,1H}, a vertical matrix W_{i,1V} and a diagonal matrix W_{i,1D}.
3. The method of claim 1, wherein the GAN network comprises an Encoder Encoder for encoding images with different dose levels into the same feature space, a Decoder Decoder for decoding the encoding result to obtain the converted high-dose map, a Discriminator for discriminating whether an image is a true high-dose map, a label discriminator Discriminator_label for discriminating between reconstructed labels and original labels, and a Task processor Task for processing the input image to obtain a task label.
4. The method for converting multi-modal low-dose CT into high-dose CT based on GAN as claimed in claim 3, wherein step 3) is preceded by a step of training the GAN network by a supervised method, the detailed steps comprising:
A1) Inputting a high-dose map h with a task label and the label label thereof;
A2) Training the Task processor Task to obtain a trained Task processor Task; adding i different levels of Poisson noise to the high-dose modality image h to obtain registered low-dose modality images l_i with i dose levels;
A3) Randomly selecting one of the i low-dose modalities and performing conversion training between its low-dose modality image l_i and the high-dose modality image h, the training being guided by the trained Task processor Task and repeated, to obtain a trained Encoder Encoder and a trained Decoder Decoder;
A4) Inputting test low-dose CT data l_test, converting the test data by using the trained modules to obtain converted high-dose CT data, and processing the converted data by using the Task processor Task to obtain a task processing result of the converted data; meanwhile, processing the test low-dose CT data l_test by using the Task processor Task to obtain a task processing result of the test data;
A5) Comparing the task processing result of the conversion data with the task processing result of the test data, evaluating whether the conversion result is good, and if the conversion result is good, finishing the training; otherwise, skipping to execute the step A1) and continuing training.
5. The method for converting multi-modal low-dose CT into high-dose CT based on GAN according to claim 4, wherein the step A2) of training the Task processor Task comprises: carrying out first-level decomposition on the high-dose map h by using the two-dimensional discrete wavelet transform to obtain an approximation matrix W_{h,1}, a horizontal matrix W_{h,1H}, a vertical matrix W_{h,1V} and a diagonal matrix W_{h,1D}; inputting the high-dose map h together with W_{h,1}, W_{h,1H}, W_{h,1V} and W_{h,1D} into the Task processor Task and processing them to obtain a reconstructed task label label_r; and using the label discriminator Discriminator_label to perform discrimination learning on the label label of the high-dose map h and the reconstructed label label_r, wherein the former is discriminated as true and the latter as false;
The detailed steps of the conversion training between the low-dose modality image l_i and the high-dose modality image h in step A3) include:
A3.1) Inputting a low-dose modality image l_i;
A3.2) Performing the two-dimensional discrete wavelet transform on the low-dose modality image l_i to obtain four results: an approximation matrix W_{i,1}, a horizontal matrix W_{i,1H}, a vertical matrix W_{i,1V} and a diagonal matrix W_{i,1D};
A3.3) Inputting the low-dose modality image l_i and its four results, the approximation matrix W_{i,1}, horizontal matrix W_{i,1H}, vertical matrix W_{i,1V} and diagonal matrix W_{i,1D}, into the Encoder Encoder in the GAN network for encoding to obtain an encoding result Code_i, and then decoding the encoding result Code_i by the Decoder Decoder in the GAN network to obtain the corresponding converted high-dose modality image h_{i,t};
A3.4) Performing true/false discrimination learning with the Discriminator, taking the high-dose modality image h as the positive sample and the converted high-dose modality image h_{i,t} as the negative sample;
A3.5) Performing the two-dimensional discrete wavelet transform on the converted high-dose modality image h_{i,t} to obtain an approximation matrix W_{it,1}, a horizontal matrix W_{it,1H}, a vertical matrix W_{it,1V} and a diagonal matrix W_{it,1D}; performing the two-dimensional discrete wavelet transform on the high-dose modality image h to obtain an approximation matrix W_{h,1}, a horizontal matrix W_{h,1H}, a vertical matrix W_{h,1V} and a diagonal matrix W_{h,1D};
A3.6) Computing, layer by layer, the wavelet self-supervision losses of the approximation, horizontal, vertical and diagonal matrices between the approximation matrix W_{it,1}, horizontal matrix W_{it,1H}, vertical matrix W_{it,1V} and diagonal matrix W_{it,1D} of the converted high-dose image h_{i,t} and the approximation matrix W_{h,1}, horizontal matrix W_{h,1H}, vertical matrix W_{h,1V} and diagonal matrix W_{h,1D} of the high-dose modality image h;
A3.7) Performing the two-dimensional discrete wavelet transform on the approximation matrix W_{i,1} obtained in step A3.2) to obtain an approximation matrix W_{i,11}, performing the two-dimensional discrete wavelet transform on the approximation matrix W_{it,1} obtained in step A3.5) to obtain an approximation matrix W_{it,11}, and computing the self-supervision loss between the second-order approximation matrices W_{i,11} and W_{it,11} of the low-dose map and the conversion map;
A3.8) Inputting the converted high-dose modality image h_{i,t} obtained in step A3.3) and its approximation matrix W_{it,1}, horizontal matrix W_{it,1H}, vertical matrix W_{it,1V} and diagonal matrix W_{it,1D} into the Task processor Task to obtain a conversion-map label label_{it}, and computing the task label self-supervision loss between the label label of the high-dose map and the conversion-map label label_{it};
A3.9) Computing the semantic consistency loss between the encoding results Code_i and Code_j (j ≠ i) of multiple low-dose modalities, and computing the wavelet consistency loss layer by layer from the two-dimensional discrete wavelet transform results of the conversion maps of the multiple low-dose modalities.
6. The method for converting multi-modal low-dose CT into high-dose CT based on GAN according to claim 3, further comprising, before step 3), a step of training the GAN network based on an unlabeled registered low-dose/high-dose CT data set A and a task-labeled low-dose CT data set B, the detailed steps comprising:
B1) Inputting the task-labeled low-dose modality image l_B and its label label and executing the supervised Task processor training process to obtain the trained Task processor Task; inputting the registered low-dose modality image l_A and high-dose modality image h_A and executing the supervised low-dose-CT-to-high-dose-CT training process to obtain a trained Encoder Encoder, Decoder Decoder and Discriminator; wherein the low-dose modality image l_A and the high-dose modality image h_A are from data set A, and the low-dose modality image l_B and the label label are from data set B;
B2) Recombining the trained Encoder Encoder, Decoder Decoder, Discriminator and Task processor Task to perform unsupervised combination training to obtain the trained Encoder Encoder, Decoder Decoder and Task processor Task;
B3) Inputting test low-dose CT data l_test, converting it by using the trained Encoder Encoder, Decoder Decoder and Task processor Task to obtain converted high-dose CT data, and processing the converted data by using the Task processor Task to obtain a task processing result of the converted data; at the same time, processing the test low-dose CT data l_test by using the Task processor Task to obtain a task processing result of the test data;
B4) Comparing the task processing result of the converted data with the task processing result of the test data to evaluate whether the conversion is good; ending the training and exiting if it is good, and jumping to step B1) to continue training if it is not.
7. The method for converting multi-modal low-dose CT into high-dose CT based on GAN as claimed in claim 6, wherein the detailed steps of performing the supervised Task processor training process in step B1) to obtain the trained Task processor Task comprise: performing first-order decomposition on the low-dose modality image l_B of data set B with the two-dimensional discrete wavelet transform to obtain an approximation matrix W_{B,1}, a horizontal matrix W_{B,1H}, a vertical matrix W_{B,1V} and a diagonal matrix W_{B,1D}; inputting the low-dose modality image l_B of data set B together with W_{B,1}, W_{B,1H}, W_{B,1V} and W_{B,1D} into the Task processor Task and processing them to obtain a reconstructed task label label_r; and using the label discriminator Discriminator_label to perform discrimination learning on the original label label of the low-dose modality image l_B and the reconstructed label label_r, wherein the former is discriminated as true and the latter as false;
The detailed steps of performing the supervised low-dose CT to high-dose CT training process in step B1) to obtain the trained Encoder, Decoder and Discriminator include:
B1.1) Inputting the low-dose modality image l_A of data set A, and performing first-order decomposition on the low-dose modality image l_A using the two-dimensional discrete wavelet transform to obtain four results: an approximation matrix W_{A,1}, a horizontal matrix W_{A,1H}, a vertical matrix W_{A,1V} and a diagonal matrix W_{A,1D};
B1.2) Inputting the low-dose modality image l_A together with the approximation matrix W_{A,1}, horizontal matrix W_{A,1H}, vertical matrix W_{A,1V} and diagonal matrix W_{A,1D} into the Encoder Encoder for encoding to obtain an encoding result Code_A; decoding the encoding result Code_A with the Decoder Decoder to obtain the conversion map h_{A,t} of the converted high-dose modality;
B1.3) Performing true/false discrimination learning with the Discriminator, taking the high-dose modality image h_A as the positive sample and the conversion map h_{A,t} as the negative sample;
B1.4) Performing first-order decomposition on the conversion map h_{A,t} by the two-dimensional discrete wavelet transform to obtain an approximation matrix W_{At,1}, a horizontal matrix W_{At,1H}, a vertical matrix W_{At,1V} and a diagonal matrix W_{At,1D}; performing first-order decomposition on the high-dose modality image h_A by the two-dimensional discrete wavelet transform to obtain an approximation matrix W_{h,1}, a horizontal matrix W_{h,1H}, a vertical matrix W_{h,1V} and a diagonal matrix W_{h,1D};
B1.5) Computing, layer by layer, the wavelet self-supervision losses of the approximation, horizontal, vertical and diagonal matrices between the first-order decomposition results of the conversion map h_{A,t} and the first-order decomposition results of the high-dose modality image h_A;
B1.6) Performing the two-dimensional discrete wavelet transform on the approximation matrix W_{A,1} to obtain an approximation matrix W_{A,11}, performing the two-dimensional discrete wavelet transform on the approximation matrix W_{At,1} to obtain an approximation matrix W_{At,11}, and computing the self-supervision loss between the second-order approximation matrices W_{A,11} and W_{At,11} of the low-dose map and the conversion map;
The detailed steps of recombining the trained Encoder Encoder, Decoder Decoder, Discriminator and Task processor Task in the step B2) to obtain the trained Encoder Encoder, Decoder Decoder and Task processor Task through unsupervised combination training include:
B2.1) Inputting the low-dose modality image l_B of data set B, and performing first-order decomposition on l_B using the two-dimensional discrete wavelet transform to obtain four results: an approximation matrix W_{B,1}, a horizontal matrix W_{B,1H}, a vertical matrix W_{B,1V} and a diagonal matrix W_{B,1D};
B2.2) Inputting the low-dose modality image l_B together with the approximation matrix W_{B,1}, horizontal matrix W_{B,1H}, vertical matrix W_{B,1V} and diagonal matrix W_{B,1D} into the Encoder Encoder for encoding to obtain an encoding result Code_B; decoding the encoding result Code_B with the Decoder Decoder to obtain a converted high-dose modality image h_{B,t};
B2.3) Performing true/false discrimination learning with the Discriminator, taking the high-dose modality image h_A of data set A as the positive sample and the low-dose map l_B of data set B and the conversion map h_{B,t} as negative samples;
B2.4) Performing first-order decomposition on the conversion map h_{B,t} by the two-dimensional discrete wavelet transform to obtain an approximation matrix W_{Bt,1}, a horizontal matrix W_{Bt,1H}, a vertical matrix W_{Bt,1V} and a diagonal matrix W_{Bt,1D};
B2.5) Performing the two-dimensional discrete wavelet transform on the approximation matrix W_{B,1} to obtain an approximation matrix W_{B,11}, performing the two-dimensional discrete wavelet transform on the approximation matrix W_{Bt,1} to obtain an approximation matrix W_{Bt,11}, and computing the self-supervision loss between the second-order approximation matrices W_{B,11} and W_{Bt,11} of the low-dose modality image l_B and the conversion map h_{B,t};
B2.6) Inputting the conversion map h_{B,t} together with the approximation matrix W_{Bt,1}, horizontal matrix W_{Bt,1H}, vertical matrix W_{Bt,1V} and diagonal matrix W_{Bt,1D} obtained by its first-order decomposition into the Task processor Task to obtain a conversion-map label label_{Bt}, and computing the task label consistency loss between the original label label of the low-dose modality image l_B and the conversion-map label label_{Bt};
B2.7) Inputting the low-dose modality image l_B together with the approximation matrix W_{B,1}, horizontal matrix W_{B,1H}, vertical matrix W_{B,1V} and diagonal matrix W_{B,1D} obtained by its first-order decomposition into the Task processor Task to obtain a reconstructed task label label_{Br}, and computing the task label self-supervision loss between the original label label of the low-dose modality image l_B and the reconstructed label label_{Br}.
8. A system for converting multi-modality GAN-based low-dose CT to high-dose CT, comprising:
The input program unit is used for inputting low-dose CT of any modality;
The wavelet transformation program unit is used for carrying out two-dimensional discrete wavelet transformation on the low-dose CT to obtain a plurality of decomposition results;
And the conversion program unit is used for inputting the low-dose CT and a plurality of decomposition results thereof into a trained encoder in the GAN network for encoding, and then decoding the encoding results through a decoder in the GAN network to obtain a corresponding high-dose modal image.
9. A system for converting multi-modal low-dose CT into high-dose CT based on GAN, comprising a computer device, wherein the computer device is programmed or configured to perform the steps of the method for converting multi-modal low-dose CT into high-dose CT based on GAN of any one of claims 1-7, or wherein a storage medium of the computer device has stored thereon a computer program programmed or configured to perform the method for converting multi-modal low-dose CT into high-dose CT based on GAN of any one of claims 1-7.
10. A computer-readable storage medium having stored thereon a computer program programmed or configured to perform the method for converting multi-modal low-dose CT into high-dose CT based on GAN as claimed in any one of claims 1-7.
CN201910832520.1A 2019-09-04 2019-09-04 Method for converting multi-modal low-dose CT into high-dose CT based on GAN Active CN110559009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910832520.1A CN110559009B (en) 2019-09-04 2019-09-04 Method for converting multi-modal low-dose CT into high-dose CT based on GAN


Publications (2)

Publication Number Publication Date
CN110559009A true CN110559009A (en) 2019-12-13
CN110559009B CN110559009B (en) 2020-12-25

Family

ID=68777779

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910832520.1A Active CN110559009B (en) 2019-09-04 2019-09-04 Method for converting multi-modal low-dose CT into high-dose CT based on GAN

Country Status (1)

Country Link
CN (1) CN110559009B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130051516A1 (en) * 2011-08-31 2013-02-28 Carestream Health, Inc. Noise suppression for low x-ray dose cone-beam image reconstruction
US20180144465A1 (en) * 2016-11-23 2018-05-24 General Electric Company Deep learning medical systems and methods for medical procedures
US10074038B2 (en) * 2016-11-23 2018-09-11 General Electric Company Deep learning medical systems and methods for image reconstruction and quality evaluation
US10242443B2 (en) * 2016-11-23 2019-03-26 General Electric Company Deep learning medical systems and methods for medical procedures
CN107610195A (en) * 2017-07-28 2018-01-19 上海联影医疗科技有限公司 The system and method for image conversion
CN108492269A (en) * 2018-03-23 2018-09-04 西安电子科技大学 Low-dose CT image de-noising method based on gradient canonical convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QINGSONG YANG et al.: "Low-Dose CT Image Denoising Using a Generative Adversarial Network With Wasserstein Distance and Perceptual Loss", IEEE Transactions on Medical Imaging *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179228A (en) * 2019-12-16 2020-05-19 浙江大学 Single-energy CT energy spectrum imaging method based on deep learning
CN111275640B (en) * 2020-01-17 2022-12-09 天津大学 Image enhancement method for fusing two-dimensional discrete wavelet transform and generation of countermeasure network
CN111275640A (en) * 2020-01-17 2020-06-12 天津大学 Image enhancement method for fusing two-dimensional discrete wavelet transform and generating countermeasure network
CN111325695A (en) * 2020-02-29 2020-06-23 深圳先进技术研究院 Low-dose image enhancement method and system based on multi-dose grade and storage medium
WO2021168920A1 (en) * 2020-02-29 2021-09-02 深圳先进技术研究院 Low-dose image enhancement method and system based on multiple dose levels, and computer device, and storage medium
CN111489404A (en) * 2020-03-20 2020-08-04 深圳先进技术研究院 Image reconstruction method, image processing device and device with storage function
CN111489404B (en) * 2020-03-20 2023-09-05 深圳先进技术研究院 Image reconstruction method, image processing device and device with storage function
WO2021189383A1 (en) * 2020-03-26 2021-09-30 深圳先进技术研究院 Training and generation methods for generating high-energy ct image model, device, and storage medium
CN111437519A (en) * 2020-04-03 2020-07-24 北京易康医疗科技有限公司 Multi-line beam selection method with optimal biological effect
CN111437519B (en) * 2020-04-03 2021-10-19 山东省肿瘤防治研究院(山东省肿瘤医院) Multi-line beam selection method with optimal biological effect
WO2022193276A1 (en) * 2021-03-19 2022-09-22 深圳高性能医疗器械国家研究院有限公司 Deep learning method for low dose estimation of medical image
CN113053496B (en) * 2021-03-19 2023-08-29 深圳高性能医疗器械国家研究院有限公司 Deep learning method for low-dose estimation of medical image
CN113053496A (en) * 2021-03-19 2021-06-29 深圳高性能医疗器械国家研究院有限公司 Deep learning method for low-dose estimation of medical images

Also Published As

Publication number Publication date
CN110559009B (en) 2020-12-25

Similar Documents

Publication Publication Date Title
CN110559009B (en) Method for converting multi-modal low-dose CT into high-dose CT based on GAN
Emami et al. Generating synthetic CTs from magnetic resonance images using generative adversarial networks
Park et al. Unpaired image denoising using a generative adversarial network in X-ray CT
Tang et al. Unpaired low-dose CT denoising network based on cycle-consistent generative adversarial network with prior image information
CN110827216A (en) Multi-generator generation countermeasure network learning method for image denoising
CN110444277B (en) Multi-mode brain MRI image bidirectional conversion method based on multi-generation and multi-confrontation
Zhou et al. Deep learning methods for medical image fusion: A review
Chen et al. Bone suppression of chest radiographs with cascaded convolutional networks in wavelet domain
Singh et al. Medical image generation using generative adversarial networks
CN115830163A (en) Progressive medical image cross-mode generation method and device based on deterministic guidance of deep learning
Geng et al. PMS-GAN: Parallel multi-stream generative adversarial network for multi-material decomposition in spectral computed tomography
Amirkolaee et al. Development of a GAN architecture based on integrating global and local information for paired and unpaired medical image translation
Li et al. Low-dose CT image synthesis for domain adaptation imaging using a generative adversarial network with noise encoding transfer learning
CN111340903B (en) Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image
Poonkodi et al. 3d-medtrancsgan: 3d medical image transformation using csgan
Deng et al. Correcting motion artifacts in coronary computed tomography angiography images using a dual-zone cycle generative adversarial network
Prakash Tunga et al. Compression of MRI brain images based on automatic extraction of tumor region
CN115018728A (en) Image fusion method and system based on multi-scale transformation and convolution sparse representation
Singh Compression of MRI brain images based on automatic extraction of tumor region.
Kening et al. Nested recurrent residual unet (nrru) on gan (nrrg) for cardiac ct images segmentation task
Xue et al. PET Synthesis via Self-supervised Adaptive Residual Estimation Generative Adversarial Network
Jones et al. A multi‐stage fusion framework to classify breast lesions using deep learning and radiomics features computed from four‐view mammograms
Rani et al. Efficient fused convolution neural network (EFCNN) for feature level fusion of medical images
Valsala et al. Alzheimer’s detection through neuro imaging and subsequent fusion for clinical diagnosis
Ichikawa et al. Acquisition time reduction in pediatric 99mTc‐DMSA planar imaging using deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant