CN114159021B - Dual-input-single-output deep learning-based fluorescence scanning tomography reconstruction method for Cherenkov excitation - Google Patents


Info

Publication number
CN114159021B
CN114159021B (application CN202110999622.XA)
Authority
CN
China
Prior art keywords
image
feature
quantum yield
fluorescence
layer
Prior art date
Legal status
Active
Application number
CN202110999622.XA
Other languages
Chinese (zh)
Other versions
CN114159021A (en)
Inventor
冯金超
张虎
贾克斌
李哲
孙中华
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN202110999622.XA
Publication of CN114159021A
Application granted
Publication of CN114159021B
Legal status: Active

Links

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0071Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by measuring fluorescence emission
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N5/00Radiation therapy
    • A61N5/10X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N5/00Radiation therapy
    • A61N5/10X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
    • A61N5/1048Monitoring, verifying, controlling systems and methods

Abstract

The invention discloses a dual-input-single-output deep learning-based fluorescence scanning tomography reconstruction method for Cherenkov excitation. The CELST reconstruction problem can be modeled as a nonlinear optimization problem; however, because the number of measurement signals is far smaller than the number of unknowns to be solved, it is severely ill-posed mathematically and is usually solved by regularized iteration based on an L2-norm constraint. The fluorescence quantum yield obtained by such conventional regularization methods follows a Gaussian distribution, which manifests as over-smoothing in the image and serious reconstruction artifacts. To address this problem, the invention proposes a dual-input-single-output deep learning model structure that takes the collected fluorescence signal and the low-quality fluorescence quantum yield image reconstructed by the conventional method as network inputs simultaneously and directly outputs a high-quality fluorescence quantum yield image. Experimental results show that the method achieves accurate CELST reconstruction.

Description

Dual-input-single-output deep learning-based fluorescence scanning tomography reconstruction method for Cherenkov excitation
Technical Field
The invention belongs to the field of medical image processing, and relates to a dual-input-single-output deep learning-based fluorescence scanning tomography reconstruction method for Cherenkov excitation.
Background
About half of cancer patients worldwide receive radiation therapy, the most common form being external beam radiation therapy (EBRT), which uses high-energy radiation generated by a medical linear accelerator (hereinafter LINAC) to irradiate diseased tumor tissue. During treatment, the radiation therapist adopts different irradiation schemes according to the state of the tumor cells. Therefore, monitoring the physiological state of tumor cells during radiotherapy has important research significance.
Cherenkov-excited fluorescence scanning tomography (CELST) is a novel medical imaging modality. The Cherenkov radiation induced in tissue by the high-energy rays generated by the LINAC serves as an internal light source; it secondarily excites the tissue or cells in biological tissue labeled by fluorescent probes to generate fluorescence, which, after scattering, refraction and reflection, penetrates the tissue surface and is finally detected by a CCD camera. A fluorescence quantum yield image inside the tissue is then reconstructed from the acquired fluorescence signals by a reconstruction algorithm. Because boundary data from biological tissue are limited and unavoidable noise is mixed into the measured fluorescence signals, the CELST reconstruction problem is highly ill-posed mathematically: even a tiny external measurement perturbation can change the reconstruction result dramatically. To reduce the ill-posedness of the CELST reconstruction problem, a regularized solution may be used, transforming the CELST reconstruction into a nonlinear optimization problem. The most commonly used reconstruction algorithm is Tikhonov regularization; because the Tikhonov solution is based on the L2 norm, the edges of the reconstructed fluorescence quantum yield image are too smooth, image artifacts are serious, and the imaging quality of CELST remains limited.
To overcome these problems, the invention proposes, for the first time, a dual-input-single-output deep learning-based fluorescence scanning tomography reconstruction method for Cherenkov excitation, improving the imaging quality of CELST.
Disclosure of Invention
The invention designs a dual-input-single-output deep learning-based fluorescence scanning tomography reconstruction method for Cherenkov excitation, in which the deep learning network takes the acquired fluorescence signal and the low-quality fluorescence quantum yield image obtained by a traditional reconstruction method as inputs and outputs a high-quality fluorescence quantum yield image.
The invention designs a dual-input-single-output deep learning-based fluorescence scanning tomography reconstruction method for Cherenkov excitation in which the acquired one-dimensional fluorescence signal φ is first mapped as an input into a two-dimensional space; next, the mapped two-dimensional signal is encoded by three convolution layers, each with a 3*3 kernel and a stride of 1; finally, upsampling with the transposed convolution conv.T yields the data feature map data_feature. The whole process can be represented by the following formula:
data_feature = conv.T(f_2(f_1(f_0(φ, ω_0, b_0), ω_1, b_1), ω_2, b_2))   (1)
wherein f_n (n = 0, 1, 2) contains a batch normalization layer and an activation layer that uses LeakyReLU as the activation function, ω_n and b_n (n = 0, 1, 2) are the weight and bias of the n-th convolution layer, respectively, and φ is the acquired fluorescence signal.
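For illustration, the following is a minimal PyTorch sketch of such a signal-encoding branch. The measurement grid used for the 1-D-to-2-D mapping (32 scans x 64 detectors), the channel width, and the upsampling factors of the transposed convolution are assumptions made for this sketch and are not specified by formula (1).

import torch
import torch.nn as nn

class SignalEncoder(nn.Module):
    """Signal-encoding branch of formula (1): maps the 1-D fluorescence
    signal phi to the 2-D data feature map data_feature.
    Grid size (32 x 64) and channel width are illustrative assumptions."""
    def __init__(self, grid=(32, 64), channels=32):
        super().__init__()
        self.grid = grid
        def block(c_in, c_out):
            # 3x3 convolution with stride 1, followed by batch norm and LeakyReLU (f_n)
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1),
                nn.BatchNorm2d(c_out),
                nn.LeakyReLU(0.2, inplace=True),
            )
        self.f0 = block(1, channels)
        self.f1 = block(channels, channels)
        self.f2 = block(channels, channels)
        # transposed convolution conv.T: upsamples 32 x 64 -> 128 x 128
        self.up = nn.ConvTranspose2d(channels, channels,
                                     kernel_size=(4, 2), stride=(4, 2))

    def forward(self, phi):
        # phi: (batch, n_measurements); map the 1-D signal into 2-D space
        x = phi.view(phi.size(0), 1, *self.grid)
        x = self.f2(self.f1(self.f0(x)))
        return self.up(x)          # data_feature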
The invention designs a dual-input-single-output deep learning-based Cherenkov-excited fluorescence scanning tomography reconstruction method that takes the low-quality fluorescence quantum yield image reconstructed by the traditional algorithm as input. A convolution layer with a 3*3 kernel and a stride of 1 first extracts features from the low-quality image to obtain a feature image; a max pooling layer then downsamples the feature image to obtain deep information; finally, upsampling with the transposed convolution conv.T yields the image feature map img_feature. The whole process can be represented by the following formula:
img_feature = U_2(U_1(U_0(D_2(D_1(D_0(x))))))   (2)
wherein D is n (n=0, 1, 2) is the largest pooling layer for downsampling the image, U n (n=0, 1, 2) is a transposed convolution layer that upsamples the image, and x is a low quality fluorescent quantum yield image.
The invention designs a dual-input-single-output deep learning-based Cherenkov-excited fluorescence scanning tomography reconstruction method in which the deep learning network fuses the data feature map data_feature and the image feature map img_feature and, through a convolution layer with a 3*3 kernel, obtains a high-quality fluorescence quantum yield image out of size 128 × 128 × 1:
out = conv(img_feature + data_feature)   (3)
wherein conv is the convolution output layer, and img_feature and data_feature denote the image feature map and the data feature map, respectively.
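Assembling the two branches gives a sketch of the complete dual-input network. The fusion here is the element-wise addition written in formula (3); the description later calls this channel fusion, and concatenation followed by the 3*3 convolution would be an equally plausible reading. All layer sizes remain the assumptions stated above.

class DualInputNet(nn.Module):
    """Dual-input-single-output network: fuses data_feature and img_feature
    by element-wise addition (formula (3)) and applies a 3x3 output
    convolution to produce the single-channel quantum yield image."""
    def __init__(self, channels=32):
        super().__init__()
        self.signal_branch = SignalEncoder(channels=channels)
        self.image_branch = ImageEncoder(channels=channels)
        self.out_conv = nn.Conv2d(channels, 1, kernel_size=3, stride=1, padding=1)

    def forward(self, phi, x_low):
        data_feature = self.signal_branch(phi)
        img_feature = self.image_branch(x_low)
        return self.out_conv(img_feature + data_feature)   # out

# usage example with random stand-in tensors
model = DualInputNet()
phi = torch.randn(4, 32 * 64)           # batch of acquired 1-D fluorescence signals
x_low = torch.randn(4, 1, 128, 128)     # batch of low-quality reconstructions
out = model(phi, x_low)                 # -> (4, 1, 128, 128)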
Advantageous effects
The invention designs a dual-input-single-output deep learning-based fluorescence scanning tomography reconstruction method for Cherenkov excitation that takes the collected fluorescence signal and the low-quality fluorescence quantum yield image reconstructed by the traditional method as inputs simultaneously and directly outputs a high-quality fluorescence quantum yield image. Once trained, the deep learning model can be used to address the serious artifacts and blurred edges of fluorescence quantum yield images reconstructed by traditional methods.
Drawings
Fig. 1 shows the shape of the phantom, in which the brighter circular area is the true abnormal region.
Fig. 2 shows the dual-input network model structure, with the image encoding network in the upper part and the signal encoding network in the lower part.
fig. 3 is a graph of network-input fluorescence signal (left) and fluorescence quantum yield image reconstructed by conventional methods (right).
Fig. 4 is a graph showing the distribution of fluorescence quantum yields output by the network model presented herein.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following detailed description of the embodiments of the present invention will be given with reference to the accompanying drawings.
Step 1: a mathematical model of the transmission of light in the tissue is built. CELST comprises two processes, the first being an excitation process, i.e. a process in which biological tissue is scanned by LINAC to produce cerenkov radiation, which secondary excites the marked tissue to produce fluorescence; the second is the emission process, i.e. the arrival of the secondarily excited fluorescence photons at the tissue surface, detected by a CCD camera. The whole process can be mathematically modeled by coupling a continuous wave diffusion approximation equation, which can be expressed as follows
-∇·(D_x(r)∇Φ_x(r)) + μ_ax(r)Φ_x(r) = S_x(r)   (4)
-∇·(D_m(r)∇Φ_m(r)) + μ_am(r)Φ_m(r) = ημ_af(r)Φ_x(r)   (5)
wherein formula (4) describes the Cherenkov excitation domain and formula (5) the fluorescence emission domain; the subscripts x and m denote the excitation and emission processes, respectively; ∇ is the gradient operator; S_x(r) is the Cherenkov excitation source produced by the LINAC beam; in the excitation-domain formula, Φ_x(r) is the intensity of the Cherenkov excitation light at position r, and in the emission-domain formula, Φ_m(r) is the intensity of the emitted fluorescence at position r; μ_ax(r) and μ_am(r) are the tissue absorption coefficients at position r, and D_x,m are the optical diffusion coefficients; on the right-hand side of formula (5), η is the quantum efficiency of the fluorescent probe, and ημ_af is the fluorescence quantum yield that ultimately needs to be reconstructed.
Step 2: and solving discrete numerical values of a diffusion approximation equation. The solution of CELST is divided into forward and reverse models. The forward model obtains an emitted fluorescence signal measured on the surface of the tissue, and the reverse model reconstructs the fluorescence quantum yield in the tissue by a traditional method according to the measured fluorescence signal.
1): the CELST forward model approximately converts the diffusion equations (4), (5) into linear matrix equations, i.e. the light intensity of fluorescence generated by secondary excitation at tissue boundaries, i.e. the fluorescence signal measured by a CCD camera, is calculated under the condition that the optical properties (including tissue absorption coefficient, scattering coefficient) and cerenkov excitation light inside biological tissues are known, and the forward model can be represented by the following linear equation set
φ = Ax   (6)
wherein A is the system matrix constructed by the finite element method, φ is the fluorescence signal measured at the surface, and x is the true fluorescence quantum yield in the tissue.
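For illustration only, once the finite-element system matrix A has been assembled, the forward computation of equation (6) reduces to a matrix-vector product. The shapes below (32 scans × 64 detectors = 2048 measurements, 2747 mesh nodes) follow the scan, detector and mesh counts given in step 3, and the matrix itself is random stand-in data, not a real FEM assembly.

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((2048, 2747))      # stand-in for the FEM system matrix A
x_true = np.zeros(2747)
x_true[1000:1040] = 1.0           # toy abnormal region with nonzero quantum yield
phi = A @ x_true                  # simulated boundary fluorescence measurements, eq. (6)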
2): the essence of the CELST inverse model is that, given the known fluorescence measurements, the fluorescence quantum yield inside the tissue is reconstructed, since the number of measurements Φ in equation (6) is much smaller than x. Therefore, solving the above equation is a serious ill-posed problem, which can be solved using Tikhonov regularization method, represented by the following equation
The first part of the equation (7) is a fitting term of the fluorescence quantum yield, the second part is a regular term, lambda is used for balancing the data fitting part and the regularization intensity, and the fluorescence quantum yield distribution can be obtained initially by solving the equation.
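Continuing the sketch above, the L2-regularized problem of equation (7) has the closed-form normal-equation solution shown below; the regularization weight is illustrative, and in practice an iterative solver would be used for the single-iteration reconstruction described in step 3.

lam = 1e-3                        # illustrative value of the regularization weight lambda
# x_hat = argmin_x ||A x - phi||_2^2 + lam * ||x||_2^2
#       = (A^T A + lam * I)^{-1} A^T phi
x_low = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ phi)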
Step 3: and generating a training data set. The size of the numerical simulation body used in the invention is set to be 100mm or 60mm, the numerical simulation body is divided into 2747 finite element nodes and 5280 triangular grid units by a finite element method, and a real abnormal area is shown in figure 1. The method comprises the steps of simulating LINAC to generate X-rays, performing equidistant scanning for 32 times, placing 64 detectors at the upper end of a simulator to collect fluorescent signals, performing iteration once through a traditional reconstruction method to obtain low-quality fluorescent quantum yield images, generating random depths and random numbers of abnormal areas for expanding training samples, and generating 20000 pairs of data in total, wherein 18000 pairs of data are used in the training stage, 2000 pairs of data are used in the testing stage, and each training sample comprises 2 inputs (one-dimensional fluorescent signals phi, low-quality fluorescent quantum yield images X 1 ) And 1 output (true fluorescence quantum yield image).
Step 4: training of a dual input-single output deep learning model. In the model training stage, a fluorescence signal obtained by a CELST forward model and a low-quality image obtained by iterative reconstruction of a traditional method are used as inputs of a network, the images are output as real fluorescence quantum yield images, a GTX2080 GPU 16G is used for training, a Pytorch1.4.0 is used as a framework, an initial learning rate is set to be 1e-4, a regularization parameter is set to be 0.001, and a training epoch is set to be 1000.
1): the input one-dimensional fluorescent signal phi is subjected to feature extraction to obtain a signal feature map data feature
data_feature = conv.T(f_2(f_1(f_0(φ, ω_0, b_0), ω_1, b_1), ω_2, b_2))   (8)
2): input low quality image x 1 Obtaining an image feature image img through feature extraction feature
img_feature = U_2(U_1(U_0(D_2(D_1(D_0(x_1))))))   (9)
3): will data feature And img feature And (5) channel fusion is carried out, and single output out is obtained.
out = conv(img_feature + data_feature)   (10)
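Putting sub-steps 1)-3) together, a minimal training-loop sketch using the classes from the sketches above is given below. The MSE loss and the Adam optimizer are not specified in the text and are assumptions; the stated regularization parameter 0.001 is interpreted here as weight decay, and train_set stands for a CELSTDataset built from the generated sample pairs.

from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"
model = DualInputNet().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-3)
loss_fn = nn.MSELoss()                                  # assumed reconstruction loss
loader = DataLoader(train_set, batch_size=32, shuffle=True)

for epoch in range(1000):
    for phi_b, xlow_b, xtrue_b in loader:
        phi_b, xlow_b, xtrue_b = phi_b.to(device), xlow_b.to(device), xtrue_b.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(phi_b, xlow_b), xtrue_b)   # compare predicted and true quantum yield
        loss.backward()
        optimizer.step()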
Step 5: testing of a dual input-single output deep learning model. In the model test stage, the fluorescent signals and the corresponding reconstruction results of the conventional method are selected from the test set as inputs, and the reconstruction results of the abnormal region can be seen from fig. 4. Experiments prove that the method provided by the invention effectively removes the artifacts of the reconstructed image and improves the reconstruction quality.

Claims (1)

1. A dual-input-single-output deep learning-based fluorescence scanning tomography reconstruction method for Cherenkov excitation, used in the field of medical imaging, characterized in that a deep learning network takes the acquired fluorescence signal and a low-quality fluorescence quantum yield image as inputs simultaneously and directly outputs a high-quality fluorescence quantum yield image;
firstly, mapping an acquired one-dimensional fluorescent signal into a two-dimensional space, coding the mapped two-dimensional signal by using a three-layer convolution layer with a convolution kernel of 3*3 and a step length of 1, and finally, up-sampling by using transposed convolution conv.T to obtain a data characteristic map data feature
data_feature = conv.T(f_2(f_1(f_0(φ, ω_0, b_0), ω_1, b_1), ω_2, b_2))   (1)
wherein f_n (n = 0, 1, 2) contains a batch normalization layer and an activation layer using the LeakyReLU activation function, ω_n and b_n (n = 0, 1, 2) are the weight and bias of the n-th convolution layer, respectively, and φ is the one-dimensional fluorescence signal;
the reconstructed low-quality fluorescence quantum yield image is encoded and downsampled to obtain deep information, and the image feature map img_feature is then restored by decoding and upsampling:
img_feature = U_2(U_1(U_0(D_2(D_1(D_0(x))))))   (2)
wherein D_n is a max pooling layer that downsamples the image, U_n is a transposed convolution layer that upsamples the image, x is the low-quality fluorescence quantum yield image, and n = 0, 1, 2;
first, data characteristic diagram data is recorded feature And image feature map img feature Fusion is carried out, and a high-quality fluorescence quantum yield image out with the size of 128 x 1 is obtained through a convolution layer with the convolution kernel size of 3*3:
out = conv(img_feature + data_feature)   (3)
where conv is the convolution output layer, and img_feature and data_feature denote the image feature map and the data feature map, respectively.
CN202110999622.XA 2021-08-29 2021-08-29 Dual-input-single-output deep learning-based fluorescence scanning tomography reconstruction method for Cherenkov excitation Active CN114159021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110999622.XA CN114159021B (en) 2021-08-29 2021-08-29 Dual-input-single-output deep learning-based fluorescence scanning tomography reconstruction method for Cherenkov excitation

Publications (2)

Publication Number Publication Date
CN114159021A CN114159021A (en) 2022-03-11
CN114159021B (en) 2023-08-18

Family

ID=80476557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110999622.XA Active CN114159021B (en) 2021-08-29 2021-08-29 Dual-input-single-output deep learning-based fluorescence scanning tomography reconstruction method for Cherenkov excitation

Country Status (1)

Country Link
CN (1) CN114159021B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107242860A (en) * 2017-07-25 2017-10-13 京东方科技集团股份有限公司 Fluorescent molecular tomography system and method
CN108090936A (en) * 2017-12-17 2018-05-29 北京工业大学 The scanning mating plate tomograph imaging method of Qie Lunkefu fluorescence excitations

Also Published As

Publication number Publication date
CN114159021A (en) 2022-03-11


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant