CN114241074B - CBCT image reconstruction method for deep learning and electronic noise simulation - Google Patents

CBCT image reconstruction method for deep learning and electronic noise simulation

Info

Publication number
CN114241074B
CN114241074B (application CN202111567288.7A; also published as CN114241074A)
Authority
CN
China
Prior art keywords
cbct
image
resolution
projection
noise
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111567288.7A
Other languages
Chinese (zh)
Other versions
CN114241074A (en)
Inventor
宋莹
张伟康
苏嘉崇
王强
王雪桃
柏森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN202111567288.7A priority Critical patent/CN114241074B/en
Publication of CN114241074A publication Critical patent/CN114241074A/en
Application granted granted Critical
Publication of CN114241074B publication Critical patent/CN114241074B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/003 - Reconstruction from projections, e.g. tomography
    • G06T 11/005 - Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/003 - Reconstruction from projections, e.g. tomography
    • G06T 11/008 - Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting, based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 3/4076 - Scaling of whole images or parts thereof, e.g. expanding or contracting, based on super-resolution, using the original low-resolution images to iteratively correct the high-resolution images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a CBCT image reconstruction method for deep learning and electronic noise simulation, relating to the technical field of image processing. It aims to solve the prior-art problem that a high-precision spiral CT image and a registered CBCT image of the same scanned object in the same body position cannot be acquired simultaneously. Compared with existing reconstruction methods based on prior information, the method offers a low scanning dose and fast generation, producing high-precision images within one minute.

Description

CBCT image reconstruction method for deep learning and electronic noise simulation
Technical Field
The invention relates to the technical field of image processing, and in particular to CBCT image reconstruction methods for deep learning and electronic noise simulation.
Background
The International Agency for Research on Cancer, under the World Health Organization, released the latest global cancer statistics in 2018: 18.1 million new cancer cases and 9.6 million cancer deaths occurred worldwide in 2018, further increasing the global cancer burden. Worldwide, 1 in 5 men and 1 in 6 women will develop cancer during their lifetime, and 1 in 8 men and 1 in 11 women will die of the disease. Radiation therapy treats tumors with radiation; about 70% of cancer patients need radiation therapy during the course of their treatment, and about 40% of cancers can be cured by it. The role and standing of radiation therapy in tumor treatment are increasingly prominent, and it has become one of the main means of treating malignant tumors.
Radiation therapy has entered the era of precision radiotherapy, which requires both accurate dose calculation and accurate dose delivery. Image guidance technology developed to make dose delivery accurate: image monitoring before, during, and after delivery ensures that the dose is delivered correctly. Among image guidance technologies, cone-beam CT (CBCT) image guidance can directly observe, in three dimensions, the in-body target region and the surrounding organs at risk, making it convenient to correct the patient's treatment position. Its rotational motion is consistent with that of the gantry of a compact medical accelerator, so it can easily be mounted on such an accelerator, and it has become the gold standard of image-guided radiotherapy.
CBCT image guidance is widely used in the clinic, but it has some problems. First, repeated CBCT scans, such as position correction before treatment and position verification after treatment, add extra large-volume kilovoltage X-ray exposure. Second, a single CBCT scan takes about 3 minutes, which lowers clinical treatment efficiency and, by occupying the accelerator used for clinical radiotherapy, reduces social benefit and increases economic loss. Low-dose CBCT and fast-scan CBCT are therefore current research directions.
In current clinical practice, conventional CBCT scanning still dominates; low-dose CBCT and fast-scan CBCT have not yet entered real clinical use. Research on low-dose CBCT mainly reduces the CBCT scanning angle and the voltage and current of the CBCT X-ray tube, then supplements the information lost during CBCT reconstruction with prior information introduced by other informatics means, so as to compensate for the image-quality degradation caused by insufficient scanning information. The main algorithmic ideas of prior-information low-dose CBCT reconstruction are: introducing high-precision spiral CT image information via image registration [1,2]; introducing fully sampled CBCT image information via dictionary learning [3]; and recovering as much information about the scanned object as possible via characterization penalty terms [4,5].
The five documents cited above are:
1. Wang J, Li T, Xing L. SU-FF-I-44: Iterative Image Reconstruction for CBCT Using Edge-Preserving Prior. Med Phys 2009;36(6 Part 3):2444. doi:10.1118/1.3181163.
2. Chen Y, Yin F-F, Zhang Y, Zhang Y, Ren L. Low dose CBCT reconstruction via prior contour based total variation (PCTV) regularization: a feasibility study. Phys Med Biol 2018;63(8):085014.
3. Song Y, Zhang W, Zhang H, Wang Q, Zhao J. Low-dose cone-beam CT (LD-CBCT) reconstruction for image-guided radiation therapy (IGRT) by three-dimensional dual-dictionary learning. Radiation Oncology 2020;15(1).
4. Liu L, Li X, Xiang K, Wang J, Tan S. Low-dose CBCT Reconstruction Using Hessian Schatten Penalties. IEEE Transactions on Medical Imaging 2017;36(12):2588–2599.
5. Ouyang L, Solberg T, Wang J. Effects of the penalty on the penalized weighted least-squares image reconstruction for low-dose CBCT. Phys Med Biol 2011;56(17):5535–5552. doi:10.1088/0031-9155/56/17/006.
However, these algorithms leave a problem unsolved: introducing high-precision spiral CT image information by image registration cannot resolve the difference in the scanned object's body position between the high-precision spiral CT image and the registered CBCT image. The two images must be acquired on different devices, they cannot be acquired at the same physical time and place, and the scanned object's body position therefore differs between the two acquisitions.
Disclosure of Invention
The invention aims at: in order to solve the technical problems that the high-precision spiral CT image and the registration CBCT image of the same scanning object under the same body position can not be obtained at the same time, the invention provides a CBCT image reconstruction method for deep learning and electronic noise simulation.
The invention adopts the following technical scheme for realizing the purposes:
A CBCT image reconstruction method for deep learning and electronic noise simulation comprises the following steps (a minimal sketch of the four steps follows the list):
step 1: acquiring and processing data;
acquiring several high-resolution CT images, and generating simulated low-resolution CBCT images by applying noise processing to the high-resolution CT images;
step 2: building a deep neural network model;
step 3: training the deep neural network model;
training the deep neural network model built in step 2 with the high-resolution CT images and the low-resolution CBCT images from step 1;
step 4: reconstructing a CBCT image;
inputting an acquired low-resolution CBCT image into the deep neural network model trained in step 3; the deep neural network model outputs a high-resolution CT image.
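Under stated assumptions, the four steps can be wired together as in the following minimal Python sketch; every function name and body here is an illustrative placeholder, not code from the patent:

```python
def simulate_low_res_cbct(ct_volume):
    """Step 1 (hypothetical stub): forward-project the high-resolution CT
    volume with a CBCT geometry, inject electronic noise into the photon
    signals, and reconstruct a simulated low-resolution CBCT image."""
    raise NotImplementedError  # detailed in steps 1.1-1.6 below

def build_gan():
    """Step 2 (hypothetical stub): build the generator/discriminator pair."""
    raise NotImplementedError

def train_gan(generator, discriminator, ct_images, cbct_images):
    """Step 3 (hypothetical stub): train on paired (CBCT, CT) images."""
    raise NotImplementedError

def reconstruct(generator, cbct_image):
    """Step 4: map an acquired low-resolution CBCT image to a
    high-resolution CT-like image with the trained generator."""
    return generator(cbct_image)
```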
In step 1, the specific steps for generating a low-resolution CBCT image from a high-resolution CT image after noise processing are:
step 1.1: cropping the high-resolution CT image to the cone-beam CT scanning range;
step 1.2: obtaining the fully sampled or downsampled CBCT scanning geometry, and converting the high-resolution CT image cropped in step 1.1 into CBCT-format projection data with a forward projection algorithm;
step 1.3: converting the projection data obtained in step 1.2 into projection signals, and converting the projection signals into photon signals;
step 1.4: adding a noise signal to the photon signals obtained in step 1.3 to obtain noise photon signals;
step 1.5: converting the noise photon signals obtained in step 1.4 back into noise projection signals, and converting the noise projection signals into low-dose CBCT projection data;
step 1.6: reconstructing the low-dose CBCT projection data obtained in step 1.5 to generate a simulated low-resolution CBCT image.
In step 1.2, the fully sampled or downsampled CBCT scanning geometry is derived from a desensitized historical cone-beam CT scan file, or from a scan file obtained after an offline blank scan performed with the cone-beam CT.
In step 1.2, when the high-resolution CT image cropped in step 1.1 is converted into CBCT-format projection data with the forward projection algorithm, the forward projection function is:
p = Σ_{i∈Ω_d} X_i · V_i

where p is the signal value of one X-ray projection, X_i is the attenuation contribution of the i-th image datum to the ray, V_i is the i-th image data value, and Ω_d is the set of image data through which the ray passes on its way to the d-th detector.
In step 1.3, when the projection signals are converted into photon signals, the photons are generated, filtered, reach the scanned object, are attenuated, and are then received by the CT detector. The relationship between the projection signal and the photon signal is:

I(d) = I_filt(d) · e^(−p(d))

where d denotes the d-th detector acquisition channel, I(d) is the photon signal intensity of the X-rays received by the d-th detector acquisition channel, I_filt(d) is the filtered outgoing photon signal intensity received by the d-th detector acquisition channel, and p(d) is the projection signal of the X-rays received by the d-th detector acquisition channel after passing through the scanned object.
The photons are generated and filtered before reaching the scanned object; in this process, the attenuation formula from the X-ray tube to the detector is:

I(d) = I_0 · β(d) · e^(−p(d)) · T(d)

where I_0 is the photon signal intensity released by the X-ray tube before filtering, β(d) is the intensity attenuation function of the bowtie filter for the d-th detector acquisition channel, and T(d) is the response function of the d-th detector acquisition channel.
For all X-rays at different times, the attenuation formula for photons from the X-ray tube to the detector is:

I(d,t) = I_0(t) · β(d) · e^(−p(d,t)) · T(d)

where I(d,t) is the photon signal intensity of the X-rays received by the d-th detector acquisition channel at time t, I_0(t) is the photon signal intensity released by the X-ray tube before filtering at time t, and p(d,t) is the projection signal of the X-rays received by the d-th detector acquisition channel at time t after passing through the scanned object.
In step 1.4, the noise photon signal is calculated by the following formulas:

I_CBCT = I + Gaussian(σ_noise)

[The second formula, an image in the original, expresses the noise variance σ²_noise through the parameters K_1, K_2, λ_plan, λ_CBCT, N_plan and N_CBCT defined below; it is not recoverable from this text.]

where I_CBCT is the photon signal intensity carrying the noise signal, I is the photon signal intensity, K_1 is the scale factor of the compound Poisson distribution of spiral CT scanning photons, K_2 is the scale factor of the compound Poisson distribution of CBCT scanning photons, λ_plan is the mean parameter of the compound Poisson distribution of spiral CT scanning photons, λ_CBCT is the mean parameter of the compound Poisson distribution of CBCT scanning photons, N_plan is the electronic noise parameter of the spiral CT scanning system, N_CBCT is the electronic noise parameter of the CBCT scanning system, and Gaussian(·) is a Gaussian model function with mean 0 and variance 1.
In step 1.5, the noise photon signals are converted back into noise projection signals, which are then converted into low-dose CBCT projection data; the calculation formula for the whole process is:

p_CBCT(d,t) = ln( I_0(t) · β(d) · T(d) / I_CBCT(d,t) )

where d denotes the d-th detector acquisition channel, I_0(t) is the photon signal intensity released by the X-ray tube before filtering at time t, β(d) is the intensity attenuation function of the bowtie filter for the d-th detector acquisition channel, T(d) is the response function of the d-th detector acquisition channel, and I_CBCT(d,t) is the noise photon signal intensity after noise is added to the photon signal of the X-rays received by the d-th detector acquisition channel at time t.
In step 1.6, when the low-resolution CBCT image is reconstructed from the low-dose CBCT projection data, the reconstruction algorithm uses an analytical algorithm or an iterative algorithm.
The beneficial effects of the invention are as follows:
1. In the invention, the simulated low-resolution CBCT images are generated from high-resolution CT images, so the image content of each high-resolution CT image and its corresponding low-resolution CBCT image is exactly the same; this removes the image-information difference caused by body-position differences when CT and CBCT images are acquired separately. The deep neural network model is trained on CT/CBCT image pairs with identical content, and the trained model can then produce a high-resolution CT image with the same content from a subsequently input low-resolution CBCT image, solving the prior-art problem that a high-precision spiral CT image and a CBCT image of the same scanned object in the same body position cannot be acquired simultaneously.
2. In the invention, simulated low-resolution CBCT images are generated from high-resolution CT images, building multi-modality image pairs with perfectly matched anatomical structures, and these paired images are used to train the deep neural network model. The trained model extracts shallow and deep relations between the multi-modality images, establishes a conversion between the two modalities, realizes conversion from cone-beam CT images to spiral CT images, improves image resolution, and improves the quality of radiotherapy image registration.
3. Compared with the existing reconstruction method based on priori information, the method has the advantages of being low in scanning dosage, high in generation speed and capable of generating high-precision images within 1 minute.
Drawings
FIG. 1 is a schematic flow diagram of a system of the present invention;
FIG. 2 is a flow chart of the present invention for generating a simulated low resolution CBCT image using a high resolution CT image;
FIG. 3 is a schematic diagram of a deep neural network model structure;
fig. 4 is a schematic diagram of the internal network structure of the generator.
Detailed Description
To make the purposes, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, embodiments of the present invention.
Example 1
As shown in fig. 1, the present embodiment provides a CBCT image reconstruction method for deep learning and electronic noise simulation, comprising the following steps:
step 1: acquiring and processing data;
acquiring several high-resolution CT images, and generating simulated low-resolution CBCT images by applying noise processing to the high-resolution CT images;
step 2: building a deep neural network model;
step 3: training the deep neural network model;
training the deep neural network model built in step 2 with the high-resolution CT images and the low-resolution CBCT images from step 1;
step 4: reconstructing a CBCT image;
inputting an acquired low-resolution CBCT image into the deep neural network model trained in step 3; the deep neural network model outputs a high-resolution CT image.
Generating simulated low-resolution CBCT images from high-resolution CT images yields CT and CBCT images with perfectly consistent content, effectively solving the prior-art problem that CT and CBCT images of the same object in the same body position, with perfectly consistent content, cannot be acquired at the same physical time and place. On this basis, multi-modality image pairs with perfectly consistent anatomy (i.e., the same scanned object in the same body position) can be constructed, a conversion model between the modalities can be learned and built with the deep neural network model, shallow and deep relations between the multi-modality images can be extracted, and the conversion from cone-beam CT images to spiral CT images is realized.
As shown in fig. 2, this embodiment provides a specific implementation of applying noise processing to high-resolution CT images to generate simulated low-resolution CBCT images:
Step 1.1: cropping the high-resolution CT image to the cone-beam CT scanning range;
The axial scanning range of spiral CT is wider than that of a CBCT scan, so a high-resolution CT image and a low-resolution CBCT image differ in size. The acquired high-resolution CT images therefore need to be cropped so that the cropped images match the size of the low-resolution CBCT images to be generated.
Step 1.2: obtaining the fully sampled or downsampled CBCT scanning geometry, and converting the high-resolution CT image cropped in step 1.1 into CBCT-format projection data with a forward projection algorithm;
The fully sampled or downsampled CBCT scanning geometry can be derived from a desensitized historical cone-beam CT scan file, or obtained after an offline blank scan performed with the cone-beam CT. The scan file contains the physical positions of the ray source, the detector, the scanning couch and so on, and records the geometric information of all X-rays received by the detector; besides tube current and tube voltage, a cone-beam CT scan can also modify parameters such as the field-of-view range and the sampling rate. Obtaining the scan file yields the fully sampled or downsampled CBCT scanning geometry. After obtaining it, the high-resolution CT image cropped in step 1.1 is converted into CBCT-format projection data with a forward projection algorithm; the forward projection can be line-driven, point-driven, distance-driven, etc., and the intensity attenuation model can use line weighting, area weighting, volume weighting, etc. Since a CT image value represents the attenuation strength of the material and a projection value is the superposition of attenuation strengths, this embodiment provides a specific forward projection algorithm, whose forward projection function is:

p = Σ_{i∈Ω_d} X_i · V_i

where p is the signal value of one X-ray projection, X_i is the attenuation contribution of the i-th image datum to the ray, V_i is the i-th image data value, and Ω_d is the set of image data through which the ray passes on its way to the d-th detector.
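As an illustration of this formula only, the following minimal Python sketch accumulates p = Σ_{i∈Ω_d} X_i · V_i per detector channel, assuming a ray tracer (line-, point- or distance-driven, not shown) has already produced the voxel sets Ω_d and weights X_i; the "rays" structure is a hypothetical input:

```python
import numpy as np

def forward_project(volume, rays):
    """Accumulate p = sum_{i in Omega_d} X_i * V_i for each detector channel.

    volume : numpy array of image data values V_i.
    rays   : list with one entry per detector channel d; each entry is a
             pair (indices, weights) giving the flat voxel indices of
             Omega_d and the attenuation contributions X_i, as produced
             by a ray tracer (not shown here).
    """
    flat = volume.ravel()
    p = np.empty(len(rays), dtype=np.float64)
    for d, (indices, weights) in enumerate(rays):
        p[d] = np.dot(weights, flat[indices])  # sum of X_i * V_i over Omega_d
    return p
```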
Step 1.3: converting the projection data obtained in step 1.2 into projection signals, and converting the projection signals into photon signals;
When a projection signal is converted into a photon signal, the conversion is an exponential-function conversion, and a realistic photon signal can be simulated by taking the factors of the CBCT mechanical system into account, including the bowtie filter, the current intensity, the number of photons received in an air scan, the response intensity of the flat-panel detector, and so on, so as to simulate the photon signal received by the detector more accurately. In this embodiment, when a projection signal is converted into a photon signal, the photons are generated at the anode target of the X-ray tube, are filtered, reach the scanned object, are attenuated, and are received by the CT detector. According to the CT imaging principle, the attenuation of the photon signal is exponential, and the relationship between the projection signal and the photon signal is:

I(d) = I_filt(d) · e^(−p(d))

where d denotes the d-th detector acquisition channel, I(d) is the photon signal intensity of the X-rays received by the d-th detector acquisition channel, I_filt(d) is the filtered outgoing photon signal intensity received by the d-th detector acquisition channel, and p(d) is the projection signal of the X-rays received by the d-th detector acquisition channel after passing through the scanned object.
Considering the photon's passage through each module of the cone-beam CT system: when the photon passes through the bowtie filter, let β(d) be the intensity attenuation function of the bowtie filter for the d-th detector acquisition channel, and let T(d) be the response function of the d-th detector acquisition channel. The attenuation formula from the X-ray tube to the detector is then:

I(d) = I_0 · β(d) · e^(−p(d)) · T(d)

where I_0 is the photon signal intensity released by the X-ray tube before filtering, β(d) is the intensity attenuation function of the bowtie filter for the d-th detector acquisition channel, and T(d) is the response function of the d-th detector acquisition channel.
On the basis, for all X-rays at different moments, the attenuation formula from the X-ray tube to the detector is as follows:
I(d,t)=I 0 (t)·β(d)·e -p(d,t) ·T(d)
wherein I (d, t) is the photon signal intensity of the X-ray received by the d-th detector acquisition channel at the time t, I 0 (t) represents the photon signal intensity released by the X-ray tube before filtering at the moment t, p (d, t) is the projection signal of the X-ray received by the d-th detector acquisition channel at the moment t after passing through the scanned object, I 0 And (t) is the intensity of the emergent light signal of the X bulb at the moment t.
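A direct transcription of this attenuation model, assuming the projections are stored as a (channels × time samples) array (a sketch, not the patent's implementation):

```python
import numpy as np

def projection_to_photons(p, I0, beta, T):
    """I(d, t) = I0(t) * beta(d) * exp(-p(d, t)) * T(d).

    p    : projection signals, shape (n_channels, n_times)
    I0   : pre-filtration tube output per time sample, shape (n_times,)
    beta : bowtie-filter attenuation per channel, shape (n_channels,)
    T    : detector-channel response, shape (n_channels,)
    """
    return I0[None, :] * beta[:, None] * np.exp(-p) * T[:, None]
```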
Step 1.4: adding a noise signal to the photon signals obtained in step 1.3 to obtain noise photon signals;
Adding noise with the appropriate standard deviation to the simulated photon signal yields the noise-enhanced cone-beam CT photon signal; that is, the noise photon signal is calculated by the following formula:
I_CBCT = I + Gaussian(σ_noise)

Because X-rays have a multi-energy spectrum, their mean and variance are in direct proportion, namely:

[The formula here, an image in the original, expresses the noise variance σ²_noise through the parameters K_1, K_2, λ_plan, λ_CBCT, N_plan and N_CBCT defined below; it is not recoverable from this text.]

where I_CBCT is the photon signal intensity carrying the noise signal, I is the photon signal intensity, K_1 is the scale factor of the compound Poisson distribution of spiral CT scanning photons, K_2 is the scale factor of the compound Poisson distribution of CBCT scanning photons, λ_plan is the mean parameter of the compound Poisson distribution of spiral CT scanning photons, λ_CBCT is the mean parameter of the compound Poisson distribution of CBCT scanning photons, N_plan is the electronic noise parameter of the spiral CT scanning system, N_CBCT is the electronic noise parameter of the CBCT scanning system, and Gaussian(·) is a Gaussian model function with mean 0 and variance 1.
The compound Poisson mean can be estimated by setting different current intensities and scanning the same uniform phantom, such as a water phantom, to obtain several groups of projection signals. Since the variance of the compound Poisson distribution is proportional to its mean, the scale factor K of the compound Poisson distribution can be estimated by comparing the projection-signal variances under different currents. The Poisson mean parameter is estimated by an air-scan experiment: the X-ray intensity is far greater than the noise intensity, and an air scan can be regarded as having approximately no attenuation, so the air-scan projection signal can be treated as the tube's outgoing light signal; the experiment is repeated, and the mean of the acquired signals is computed to estimate the compound Poisson mean parameter λ. The electronic noise parameters can be estimated by scanning a high-attenuation-coefficient phantom, such as a lead plate, and estimating the system noise from the detector's acquired signal. Both the spiral CT and the CBCT require the additional experiments above. In this example, the compound Poisson scale factor K_1 is 732.49 and K_2 is 358.80; the mean λ_plan under a 200 mA spiral CT scan is 3.27×10^4; λ_CBCT under a 10 mA CBCT scan is 3.34×10^3; N_plan is 302.19 and N_CBCT is 213.91.
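In code, the noise injection itself is one line. The sketch below records the parameter values quoted above but, because the patent's variance formula appears only as an image in this text, it takes the final standard deviation σ_noise as a precomputed input rather than guessing the formula's exact form:

```python
import numpy as np

# Values estimated in this example (200 mA spiral CT vs. 10 mA CBCT):
K1, K2 = 732.49, 358.80              # compound-Poisson scale factors
LAM_PLAN, LAM_CBCT = 3.27e4, 3.34e3  # compound-Poisson mean parameters
N_PLAN, N_CBCT = 302.19, 213.91      # electronic-noise parameters

def add_noise(I, sigma_noise, rng=None):
    """I_CBCT = I + Gaussian(sigma_noise).

    sigma_noise must be computed beforehand from (K1, K2, LAM_PLAN,
    LAM_CBCT, N_PLAN, N_CBCT) via the patent's variance formula, which
    is not reproduced in this text; it may be a scalar or an array
    broadcastable to I's shape.
    """
    rng = rng or np.random.default_rng()
    return I + sigma_noise * rng.standard_normal(np.shape(I))
```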
Step 1.5: converting the noise photon signals obtained in step 1.4 back into noise projection signals, and converting the noise projection signals into low-dose CBCT projection data;
This step is the inverse operation of step 1.4: the noise photon signals are converted back into noise projection signals, which are converted into low-dose CBCT projection data. The calculation formula for the whole process is:

p_CBCT(d,t) = ln( I_0(t) · β(d) · T(d) / I_CBCT(d,t) )

where d denotes the d-th detector acquisition channel, I_0(t) is the photon signal intensity released by the X-ray tube before filtering at time t, β(d) is the intensity attenuation function of the bowtie filter for the d-th detector acquisition channel, T(d) is the response function of the d-th detector acquisition channel, and I_CBCT(d,t) is the noise photon signal intensity after noise is added to the photon signal of the X-rays received by the d-th detector acquisition channel at time t.
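Assuming the same array layout as in step 1.3, the inverse conversion is again a one-liner (a sketch):

```python
import numpy as np

def photons_to_projection(I_cbct, I0, beta, T):
    """p_CBCT(d, t) = ln( I0(t) * beta(d) * T(d) / I_CBCT(d, t) ) --
    the exact inverse of the attenuation model used in step 1.3."""
    return np.log(I0[None, :] * beta[:, None] * T[:, None] / I_cbct)
```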
Step 1.6: reconstructing the low-dose CBCT projection data obtained in step 1.5 to generate a simulated low-resolution CBCT image.
For the reconstruction, the reconstruction algorithm may use existing mainstream algorithms such as analytical algorithms or iterative algorithms.
As shown in fig. 3, in step 2 this embodiment uses a generative adversarial network, and the structure of the deep neural network model is as follows:
The deep neural network model comprises a generator and a discriminator. The input of the generator is a simulated low-resolution CBCT image; the generator learns the mapping from low-resolution CBCT to high-resolution CT and outputs a generated high-resolution CT image. The inputs of the discriminator are the image generated by the generator and a real high-resolution CT image; the discriminator outputs a score used to guide the generator's continued training. After the generator and the discriminator are built, the model can be trained.
The generator comprises an encoding part and a decoding part. The encoding part is a deep neural network with dilated (atrous) convolutions: a 50-layer deep residual network serves as the backbone, dilated convolutions with different dilation rates extract features, and pooling operations reduce the feature dimensionality; in combination with convolution, translation-invariant abstract features can be extracted from the input CBCT image, and the pooled outputs are concatenated along the channel dimension and passed to the decoder. The decoding part bilinearly upsamples the encoder output by a factor of 4, then concatenates it with the corresponding features from the encoding part, fusing low-level features with high-level features, improving feature reuse and accelerating model convergence, thereby improving the quality of the generated images; after concatenation, the features are refined with 3×3 convolutions, bilinear upsampling by a factor of 4 is applied once more, and the generated high-resolution CT image is finally output.
the discriminator takes the real high-resolution CT image and the high-resolution CT image generated by the generator as inputs, then carries out normalization operation with a LeakyReLU activation function on 3 convolution layers with convolution kernel size of 3x3, and finally outputs the comprehensive score. The output of the releaserlu activation function is y=max (0, x) +releasmin (0, x), where x is the input and the releaser is a very small constant, typically on the order of 10 e-5.
In step 3, when training the deep neural network model, the specific training method comprises the following steps:
After the generator and the discriminator are built, the deep neural network model can be trained. The specific steps comprise:
step 31, preprocessing CT images;
dividing the data obtained and processed in step 1 into a training set and a test set; normalizing the simulated low-resolution CBCT images in the training set according to the image window width and window level; and applying extensive augmentation to the training set, including horizontal, vertical and diagonal flips, image scaling, and random image rotations at different angles, so that the trained deep neural network model has better robustness and generalization ability (a sketch of such preprocessing follows).
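A minimal sketch of this preprocessing, assuming 2-D numpy slices; the rotation range is an illustrative assumption and scaling is omitted for brevity:

```python
import numpy as np
from scipy.ndimage import rotate

def window_normalize(img, level, width):
    """Normalize a CBCT slice to [0, 1] by window width/level."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return (np.clip(img, lo, hi) - lo) / (hi - lo)

def augment(img, rng):
    """Random horizontal/vertical/diagonal flips plus a random rotation."""
    if rng.random() < 0.5:
        img = img[:, ::-1]            # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]            # vertical flip
    if rng.random() < 0.5:
        img = img.T                   # diagonal flip (transpose)
    angle = rng.uniform(-30.0, 30.0)  # assumed angle range
    return rotate(img, angle, reshape=False, order=1)
```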
Step 32, forward calculation;
Following the structure of the generator and the discriminator and the flow of data through them, the low-resolution CBCT input data are used as the input of the generator, the high-quality reconstructed image generated by the generator is used as the input of the discriminator, and computing in sequence yields the forward-pass result of the network;
step 33, back propagation;
For the generator, the target is to approach the high-precision image, and evaluation functions such as mean squared error and absolute error (existing functions suffice) are adopted as the generator's evaluation index and objective function. For the discriminator, the task is to judge whether the generated high-precision image differs from the target high-precision image, so a weighted cross-entropy evaluation index is adopted as the objective function (an existing function). The deep neural network model iterates with a gradient descent algorithm, and the parameter values of the deep neural network are dynamically updated during training through a series of hyperparameter settings;
step 34, training a model;
Training the deep neural network model alternates between the two networks: train the discriminator, then the generator, then the discriminator again, and so on. The deep neural network model is trained in batches for 200 iterations, with an initial learning rate of 0.001, a learning-rate decay factor of 0.1, a decay step of 5000, and a weight decay of 0.0001.
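A condensed PyTorch sketch of this alternating schedule with the quoted hyperparameters; the Adam optimizer, the data-loader interface, and plain binary cross entropy standing in for the weighted cross entropy are assumptions:

```python
import torch

EPOCHS, LR = 200, 1e-3
DECAY_STEP, DECAY_FACTOR, WEIGHT_DECAY = 5000, 0.1, 1e-4

def train(G, D, loader, device="cpu"):
    opt_g = torch.optim.Adam(G.parameters(), lr=LR, weight_decay=WEIGHT_DECAY)
    opt_d = torch.optim.Adam(D.parameters(), lr=LR, weight_decay=WEIGHT_DECAY)
    sch_g = torch.optim.lr_scheduler.StepLR(opt_g, DECAY_STEP, DECAY_FACTOR)
    sch_d = torch.optim.lr_scheduler.StepLR(opt_d, DECAY_STEP, DECAY_FACTOR)
    bce, mse = torch.nn.BCEWithLogitsLoss(), torch.nn.MSELoss()
    for _ in range(EPOCHS):
        for cbct, ct in loader:  # paired (low-res CBCT, high-res CT) batches
            cbct, ct = cbct.to(device), ct.to(device)
            # --- discriminator step ---
            with torch.no_grad():
                fake = G(cbct)
            real_s, fake_s = D(ct), D(fake)
            loss_d = bce(real_s, torch.ones_like(real_s)) + \
                     bce(fake_s, torch.zeros_like(fake_s))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step(); sch_d.step()
            # --- generator step ---
            fake = G(cbct)
            fake_s = D(fake)
            loss_g = mse(fake, ct) + bce(fake_s, torch.ones_like(fake_s))
            opt_g.zero_grad(); loss_g.backward(); opt_g.step(); sch_g.step()
```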
Step 35, model verification;
After the deep neural network model is trained, its performance on the test set is evaluated with quantitative parameters. The quantitative image-evaluation indices are the root mean square error (RMSE) and the structural similarity index (SSIM). RMSE expresses the average deviation of the predicted values from the true values; its formula is:

RMSE = sqrt( (1/m) · Σ_{i=1}^{m} ( h(x_i) − y_i )² )

where h(x_i) are the reconstructed image elements, y_i are the input image elements, m is the number of all pixel values, and RMSE is the root mean square error: it computes the squared pixel difference between the reconstructed image h(x_i) and the input image elements y_i, then takes the square root of the average over the whole image.
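This translates directly into a few lines of numpy:

```python
import numpy as np

def rmse(recon, target):
    """Square root of the mean squared pixel difference over all m pixels."""
    diff = recon.astype(np.float64) - target.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))
```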
A loss based solely on the minimum mean squared error is not sufficient to express the visual perception of the human visual system. When measuring the distance between two images, one tends to focus on their structural similarity rather than on computing pixel-by-pixel differences. Therefore, in addition to the root mean square error, a measure based on structural similarity is adopted; SSIM reflects the human visual system's judgment of the structural similarity between two images, and its formula is:

SSIM(x, y) = [l(x, y)]^α · [c(x, y)]^β · [s(x, y)]^γ

where x, y denote pixel positions, l(x, y) is the luminance term, c(x, y) is the contrast term, s(x, y) is the structure term, and SSIM is the structural similarity. SSIM compares the similarity of two images along the three dimensions of luminance l(x, y), contrast c(x, y), and structure s(x, y), and reflects the human visual system's judgment of the similarity between two images better than RMSE does.
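Rather than re-implementing SSIM, a sketch can lean on scikit-image's implementation, which corresponds to α = β = γ = 1; the data_range choice below is an assumption:

```python
from skimage.metrics import structural_similarity

def ssim_score(recon, target):
    """SSIM between the reconstructed image and the reference image."""
    return structural_similarity(
        recon, target,
        data_range=float(target.max() - target.min()))  # assumed data range
```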

Claims (9)

1. A CBCT image reconstruction method for deep learning and electronic noise simulation, characterized by comprising the following steps:
step 1: acquiring and processing data;
acquiring a plurality of high-resolution CT images, and generating a simulated low-resolution CBCT image after carrying out noise processing on the high-resolution CT images;
step 2: building a deep neural network model;
the deep neural network model comprises a generator and a discriminator; the input of the generator is a simulated low-resolution CBCT image, the generator learns the mapping from low-resolution CBCT to high-resolution CT, and it outputs a generated high-resolution CT image; the inputs of the discriminator are the image generated by the generator and a real high-resolution CT image, and the discriminator outputs a score used to guide the generator's continued training;
the generator comprises an encoding part and a decoding part; the encoding part is a deep neural network with dilated convolutions: a 50-layer deep residual network serves as the backbone network, dilated convolutions with different dilation rates extract features, pooling operations reduce the feature dimensionality, translation-invariant abstract features can be extracted from the input CBCT image in combination with convolution, and the pooled outputs are concatenated along the channel dimension and passed to the decoder; the decoding part bilinearly upsamples the output of the encoding part by a factor of 4, then concatenates it with the corresponding features from the encoding part, fusing the low-level features with the high-level features, refines the features with 3×3 convolutions after concatenation, then performs bilinear upsampling by a factor of 4 once more, and finally outputs the generated high-resolution CT image;
the discriminator takes the real high-resolution CT image and the high-resolution CT image generated by the generator as inputs, passes them through 3 convolution layers with 3×3 kernels, each followed by a normalization operation and a LeakyReLU activation function, and finally outputs a composite score;
step 3: training a deep neural network model;
training the deep neural network model built in step 2 with the high-resolution CT images and the low-resolution CBCT images from step 1, comprising the following steps:
step 31, preprocessing CT images;
dividing the high-resolution CT images and the low-resolution CBCT images obtained and processed in step 1 into a training set and a test set, and normalizing the simulated low-resolution CBCT images in the training set according to the image window width and window level;
step 32, forward calculation;
taking the low-resolution CBCT image as the input of the generator and the high-quality reconstructed image generated by the generator as the input of the discriminator, and computing in sequence to obtain the forward-pass result of the network;
step 33, back propagation;
the generator adopts mean-squared-error and absolute-error evaluation functions as its evaluation index and objective function; the discriminator adopts a weighted cross-entropy evaluation index as its objective function; the deep neural network model iterates with a gradient descent algorithm, and the parameter values of the deep neural network are dynamically updated during training through a series of hyperparameter settings;
step 34, training a model;
training the discriminator, then the generator, then the discriminator again, the two being trained alternately; the deep neural network model is trained in batches for 200 iterations, with an initial learning rate of 0.001, a learning-rate decay factor of 0.1, a decay step of 5000, and a weight decay of 0.0001;
step 4: reconstructing a CBCT image;
inputting an acquired low-resolution CBCT image into the deep neural network model trained in step 3, the deep neural network model outputting a high-resolution CT image;
in step 1, the specific steps of generating a low-resolution CBCT image from a high-resolution CT image after noise processing are:
step 1.1: cropping the high-resolution CT image to the cone-beam CT scanning range;
step 1.2: obtaining the fully sampled or downsampled CBCT scanning geometry, and converting the high-resolution CT image cropped in step 1.1 into CBCT-format projection data with a forward projection algorithm;
step 1.3: converting the projection data obtained in step 1.2 into projection signals, and converting the projection signals into photon signals;
step 1.4: adding a noise signal to the photon signals obtained in step 1.3 to obtain noise photon signals;
step 1.5: converting the noise photon signals obtained in step 1.4 back into noise projection signals, and converting the noise projection signals into low-dose CBCT projection data;
step 1.6: reconstructing the low-dose CBCT projection data obtained in step 1.5 to generate a simulated low-resolution CBCT image.
2. A CBCT image reconstruction method for deep learning and electronic noise simulation as defined in claim 1, wherein in step 1.2, the fully sampled or downsampled CBCT scanning geometry is derived from a desensitized historical cone-beam CT scan file, or from a scan file obtained after an offline blank scan performed with the cone-beam CT.
3. The CBCT image reconstruction method for deep learning and electronic noise simulation as defined in claim 1 or 2, wherein in step 1.2, when the high-resolution CT image cropped in step 1.1 is converted into CBCT-format projection data with the forward projection algorithm, the forward projection function is:

p = Σ_{i∈Ω_d} X_i · V_i

where p is the signal value of one X-ray projection, X_i is the attenuation contribution of the i-th image datum to the ray, V_i is the i-th image data value, and Ω_d is the set of image data through which the ray passes on its way to the d-th detector.
4. The CBCT image reconstruction method for deep learning and electronic noise simulation according to claim 1, wherein in step 1.3, when the projection signals are converted into photon signals, the photons are generated and filtered, reach the scanned object, are attenuated, and are then received by the CT detector, and the relationship between the projection signal and the photon signal is:

I(d) = I_filt(d) · e^(−p(d))

where d denotes the d-th detector acquisition channel, I(d) is the photon signal intensity of the X-rays received by the d-th detector acquisition channel, I_filt(d) is the filtered outgoing photon signal intensity received by the d-th detector acquisition channel, and p(d) is the projection signal of the X-rays received by the d-th detector acquisition channel after passing through the scanned object.
5. A CBCT image reconstruction method for deep learning and electronic noise simulation as defined in claim 4, wherein the photons are generated and filtered before reaching the scanned object, and in this process the attenuation formula from the X-ray tube to the detector is:

I(d) = I_0 · β(d) · e^(−p(d)) · T(d)

where I_0 is the photon signal intensity released by the X-ray tube before filtering, β(d) is the intensity attenuation function of the bowtie filter for the d-th detector acquisition channel, and T(d) is the response function of the d-th detector acquisition channel.
6. A CBCT image reconstruction method for deep learning and electronic noise simulation as defined in claim 5, wherein for all X-rays at different times, the attenuation formula for photons from the X-ray tube to the detector is:

I(d,t) = I_0(t) · β(d) · e^(−p(d,t)) · T(d)

where I(d,t) is the photon signal intensity of the X-rays received by the d-th detector acquisition channel at time t, I_0(t) is the photon signal intensity released by the X-ray tube before filtering at time t, and p(d,t) is the projection signal of the X-rays received by the d-th detector acquisition channel at time t after passing through the scanned object.
7. A CBCT image reconstruction method for deep learning and electronic noise simulation as claimed in claim 1, wherein in step 1.4, the noise photon signal is calculated by the following formulas:

I_CBCT = I + Gaussian(σ_noise)

[The second formula, an image in the original, expresses the noise variance σ²_noise through the parameters K_1, K_2, λ_plan, λ_CBCT, N_plan and N_CBCT defined below; it is not recoverable from this text.]

where I_CBCT is the photon signal intensity carrying the noise signal, I is the photon signal intensity, K_1 is the scale factor of the compound Poisson distribution of spiral CT scanning photons, K_2 is the scale factor of the compound Poisson distribution of CBCT scanning photons, λ_plan is the mean parameter of the compound Poisson distribution of spiral CT scanning photons, λ_CBCT is the mean parameter of the compound Poisson distribution of CBCT scanning photons, N_plan is the electronic noise parameter of the spiral CT scanning system, N_CBCT is the electronic noise parameter of the CBCT scanning system, and Gaussian(·) is a Gaussian model function with mean 0 and variance 1.
8. The CBCT image reconstruction method for deep learning and electronic noise simulation of claim 1, wherein in step 1.5, the noise photon signals are converted back into noise projection signals, which are converted into low-dose CBCT projection data, the calculation formula of the whole process being:

p_CBCT(d,t) = ln( I_0(t) · β(d) · T(d) / I_CBCT(d,t) )

where d denotes the d-th detector acquisition channel, I_0(t) is the photon signal intensity released by the X-ray tube before filtering at time t, β(d) is the intensity attenuation function of the bowtie filter for the d-th detector acquisition channel, T(d) is the response function of the d-th detector acquisition channel, and I_CBCT(d,t) is the noise photon signal intensity after noise is added to the photon signal of the X-rays received by the d-th detector acquisition channel at time t.
9. A CBCT image reconstruction method for deep learning and electronic noise simulation as claimed in claim 1, wherein in step 1.6, when the low-resolution CBCT image is reconstructed from the low-dose CBCT projection data, the reconstruction algorithm uses an analytical algorithm or an iterative algorithm.
CN202111567288.7A 2021-12-20 2021-12-20 CBCT image reconstruction method for deep learning and electronic noise simulation Active CN114241074B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111567288.7A CN114241074B (en) 2021-12-20 2021-12-20 CBCT image reconstruction method for deep learning and electronic noise simulation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111567288.7A CN114241074B (en) 2021-12-20 2021-12-20 CBCT image reconstruction method for deep learning and electronic noise simulation

Publications (2)

Publication Number Publication Date
CN114241074A CN114241074A (en) 2022-03-25
CN114241074B true CN114241074B (en) 2023-04-21

Family

ID=80759841

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111567288.7A Active CN114241074B (en) 2021-12-20 2021-12-20 CBCT image reconstruction method for deep learning and electronic noise simulation

Country Status (1)

Country Link
CN (1) CN114241074B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152365B (en) * 2023-10-31 2024-02-02 中日友好医院(中日友好临床医学研究所) Method, system and device for oral cavity CBCT ultra-low dose imaging

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416821A (en) * 2018-03-08 2018-08-17 山东财经大学 A kind of CT Image Super-resolution Reconstruction methods of deep neural network
EP3467766A1 (en) * 2017-10-06 2019-04-10 Canon Medical Systems Corporation Medical image processing apparatus and medical image processing system
CN112348936A (en) * 2020-11-30 2021-02-09 华中科技大学 Low-dose cone-beam CT image reconstruction method based on deep learning

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035360A (en) * 2018-07-31 2018-12-18 四川大学华西医院 A kind of compressed sensing based CBCT image rebuilding method
CN110288671A (en) * 2019-06-25 2019-09-27 南京邮电大学 The low dosage CBCT image rebuilding method of network is generated based on three-dimensional antagonism
US11120582B2 (en) * 2019-07-31 2021-09-14 Z2Sky Technologies Inc. Unified dual-domain network for medical image formation, recovery, and analysis
CN111192268B (en) * 2019-12-31 2024-03-22 广州开云影像科技有限公司 Medical image segmentation model construction method and CBCT image bone segmentation method
CN112802036A (en) * 2021-03-16 2021-05-14 上海联影医疗科技股份有限公司 Method, system and device for segmenting target area of three-dimensional medical image
CN113041516B (en) * 2021-03-25 2022-07-19 中国科学院近代物理研究所 Method, system, processing equipment and storage medium for guiding positioning of three-dimensional image
CN113516586A (en) * 2021-04-23 2021-10-19 浙江工业大学 Low-dose CT image super-resolution denoising method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3467766A1 (en) * 2017-10-06 2019-04-10 Canon Medical Systems Corporation Medical image processing apparatus and medical image processing system
CN108416821A (en) * 2018-03-08 2018-08-17 山东财经大学 A kind of CT Image Super-resolution Reconstruction methods of deep neural network
CN112348936A (en) * 2020-11-30 2021-02-09 华中科技大学 Low-dose cone-beam CT image reconstruction method based on deep learning

Also Published As

Publication number Publication date
CN114241074A (en) 2022-03-25

Similar Documents

Publication Publication Date Title
Zhang et al. Improving CBCT quality to CT level using deep learning with generative adversarial network
JP6761128B2 (en) Neural network for generating synthetic medical images
JP2020168352A (en) Medical apparatus and program
Jia et al. GPU-based fast low-dose cone beam CT reconstruction via total variation
CN107545584A (en) The method, apparatus and its system of area-of-interest are positioned in medical image
CN112598649B (en) 2D/3D spine CT non-rigid registration method based on generation of countermeasure network
Peng et al. MCDNet–a denoising convolutional neural network to accelerate Monte Carlo radiation transport simulations: A proof of principle with patient dose from x-ray CT imaging
EP3785222B1 (en) Systems and methods for image processing
CN110390361A (en) A kind of 4D-CBCT imaging method based on motion compensation study
CN113052936A (en) Single-view CT reconstruction method integrating FDK and deep learning
CN114241074B (en) CBCT image reconstruction method for deep learning and electronic noise simulation
US11941805B2 (en) Systems and methods for image processing
CN117813055A (en) Multi-modality and multi-scale feature aggregation for synthesis of SPECT images from fast SPECT scans and CT images
Xie et al. Generation of contrast-enhanced CT with residual cycle-consistent generative adversarial network (Res-CycleGAN)
Zhang et al. Automatic parotid gland segmentation in MVCT using deep convolutional neural networks
CN111161371B (en) Imaging system and method
Liu et al. Low-dose CBCT reconstruction via 3D dictionary learning
WO2023125683A1 (en) Systems and methods for image reconstruction
CN116563402A (en) Cross-modal MRI-CT image synthesis method, system, equipment and medium
CN113129327B (en) Method and system for generating internal general target area based on neural network model
Wang et al. An unsupervised dual contrastive learning framework for scatter correction in cone-beam CT image
Zhang et al. DR-only Carbon-ion radiotherapy treatment planning via deep learning
Vizitiu et al. Data-driven adversarial learning for sinogram-based iterative low-dose CT image reconstruction
CN117115046B (en) Method, system and device for enhancing sparse sampling image of radiotherapy CBCT
Xiao et al. Real-Time 4-D-Cone Beam CT Accurate Estimation Based on Single-Angle Projection via Dual-Attention Mechanism Residual Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant