CN108451508B - Biological autofluorescence three-dimensional imaging method based on multilayer perceptron - Google Patents

Biological autofluorescence three-dimensional imaging method based on multilayer perceptron

Info

Publication number
CN108451508B
Authority
CN
China
Prior art keywords
simulation
light source
biological
layer
multilayer perceptron
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810407969.9A
Other languages
Chinese (zh)
Other versions
CN108451508A (en)
Inventor
田捷 (Tian Jie)
王坤 (Wang Kun)
高源 (Gao Yuan)
安羽 (An Yu)
周辉 (Zhou Hui)
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201810407969.9A priority Critical patent/CN108451508B/en
Publication of CN108451508A publication Critical patent/CN108451508A/en
Application granted granted Critical
Publication of CN108451508B publication Critical patent/CN108451508B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059: Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0071: Measuring for diagnostic purposes using light, by measuring fluorescence emission
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033: Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B5/0035: Features or image-related aspects of imaging apparatus classified in A61B5/00 adapted for acquisition of images from more than one imaging mode, e.g. combining MRI and optical tomography
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235: Details of waveform analysis
    • A61B5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267: Classification of physiological signals or data involving training the classification device
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00: Evaluating a particular growth phase or type of persons or animals
    • A61B2503/40: Animals

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)

Abstract

The present disclosure provides a biological autofluorescence three-dimensional imaging method based on a multilayer perceptron, comprising: Step 1: generate a training sample set using Monte Carlo simulation; Step 2: construct a multilayer perceptron comprising an input layer, hidden layers and an output layer; Step 3: model training, i.e. train the multilayer perceptron constructed in Step 2 and its weights with the training sample set generated in Step 1; Step 4: actual in vivo reconstruction, i.e. save the model and weights trained in Step 3 and reconstruct with the multilayer perceptron constructed in Step 2 to obtain the distribution of the autofluorescence light source inside the actual living body. The method rests on the statistical machine learning theory and proposes generating perceptron training samples with Monte Carlo simulation and then expanding those samples, which enlarges the training scale of the multilayer perceptron and improves both its reconstruction capability and the reconstruction accuracy of biological autofluorescence three-dimensional imaging.

Description

Biological autofluorescence three-dimensional imaging method based on multilayer perceptron
Technical Field
The disclosure relates to the field of medical molecular imaging, in particular to methods of machine learning, computer vision and autofluorescence tomography, and specifically to a biological autofluorescence three-dimensional imaging method based on a multilayer perceptron.
Background
Bioluminescence imaging (BLI) is a molecular imaging technology that has emerged in recent years. Owing to its imaging specificity, high signal-to-background ratio and use of non-ionizing radiation, it is widely applied in small-animal in vivo imaging research. Bioluminescence computed tomography (BLT) builds on bioluminescence imaging by combining it with structural tomography (e.g., X-ray computed tomography, magnetic resonance imaging).
The current mainstream bioluminescence computed tomography technique combines bioluminescence imaging with X-ray computed tomography (X-CT). BLI provides the photon distribution of a tumor focus on the body surface, while X-CT provides the organ distribution inside the organism. The organ distribution supplied by X-CT is used to describe the scattering and absorption of photons in the different organs, from which a photon propagation model is established. This model then infers, from the BLI photon distribution on the surface, the distribution of the autofluorescence light source inside the organism. However, extracting organ distribution from structural tomography such as X-CT requires image segmentation, whose results contain errors, so the organ distribution cannot be described very accurately. In addition, building a photon propagation model from the in vivo organ distribution requires the optical scattering and absorption parameters of the organs. These parameters are measured in vitro, where the optical properties change, so they cannot accurately describe the organs' optical parameters at the time of in vivo imaging.
Unlike traditional methods built on an optical transmission model, the multilayer-perceptron-based biological autofluorescence three-dimensional imaging method is based on statistical learning and learns the inverse process of photon propagation through training. It requires neither a photon propagation model nor the optical parameters of biological organs, thereby avoiding the problems of traditional photon-propagation-model methods, such as inaccurate organ distribution description (organ region segmentation) and inaccurate organ optical parameters, and further improving the reconstruction accuracy of biological autofluorescence computed tomography. However, as a machine learning method based on statistical learning, it depends heavily on a large number of training samples, i.e. many case samples that pair a calibrated in vivo autofluorescence light source distribution with the corresponding surface autofluorescence distribution; the perceptron's weight parameters are learned from the correlation between the two, yielding a multilayer perceptron for biological autofluorescence three-dimensional imaging.
BRIEF SUMMARY OF THE PRESENT DISCLOSURE
(I) Technical problem to be solved
Based on the above problems, the present disclosure provides a biological autofluorescence three-dimensional imaging method based on a multilayer perceptron. It proposes generating perceptron training samples with Monte Carlo simulation on the MOSE platform (Molecular Optical Simulation Environment) and expanding the simulated samples, so as to increase the number of training samples, improve the reconstruction capability of the multilayer perceptron, and alleviate the low reconstruction accuracy of biological autofluorescence three-dimensional imaging caused by inaccurate description of the forward process.
(II) technical scheme
The present disclosure provides a biological autofluorescence three-dimensional imaging method based on a multilayer perceptron, comprising: Step 1: generate a training sample set using Monte Carlo simulation; Step 2: construct a multilayer perceptron comprising an input layer, hidden layers and an output layer; Step 3: model training, i.e. train the multilayer perceptron constructed in Step 2 and its weights with the training sample set generated in Step 1; Step 4: actual in vivo reconstruction, i.e. save the model and weights trained in Step 3 and reconstruct with the multilayer perceptron constructed in Step 2 to obtain the distribution of the autofluorescence light source inside the actual living body.
In an embodiment of the present disclosure, step 1 comprises: Step 1.1: construct a sample model by performing image segmentation on X-CT image data of organisms of the same type as the subject to be imaged, constructing gridded data, describing the optical properties of the different segmented regions with the optical parameters of the corresponding organs, and building a simulation mesh; Step 1.2: construct simulation samples by running optical simulation on the Monte Carlo simulation platform Molecular Optical Simulation Environment, placing spherical light sources in the simulation mesh built in step 1.1 and generating a biological autofluorescence simulation sample for each light source position, forming the single-light-source simulation samples; the distance between a light source and the body surface is no more than 7 mm, and enough light sources are placed to traverse all regions meeting this imaging depth; Step 1.3: expand the simulation samples for training with a sample combination method: on the basis of the samples obtained in step 1.2, combine single-light-source simulation samples into multi-light-source simulation samples, thereby expanding the training set. Step 1.3 comprises: Step 1.3.1: select one grid from the single-light-source simulation sample set as the standard grid of the whole sample set, and map the simulation samples not on that grid onto it; Step 1.3.2: combine the single-light-source simulation samples to obtain multi-light-source simulation samples.
In an embodiment of the present disclosure, step 2 comprises: Step 2.1: import the biological surface fluorescence distribution data into the input layer as the input data of the multilayer perceptron; Step 2.2: the input data is passed from the input layer to the hidden layers; Step 2.3: after the hidden layers the data enters the output layer, a linear output unit whose result is corrected by the Relu function so that elements smaller than 0 are cleared.
In the embodiment of the present disclosure, the perceptron constructed in step 2 has 4 hidden layers.
In the disclosed embodiment, in step 1.3.1, when the biological autofluorescence simulation samples $(x_1,\phi_1)$ and $(x_2,\phi_2)$ are not on the same simulation grid, both are mapped onto the standard grid; for each mapped point, the three standard-grid points nearest to it are selected and the mapped value is distributed evenly among them:

$$x_{sj} \leftarrow x_{sj} + \frac{x_{1i}}{3} \quad \bigl(j \in \{S_{s\leftarrow 1i}\}\bigr), \qquad \phi_{sj} \leftarrow \phi_{sj} + \frac{\phi_{1i}}{3} \quad \bigl(j \in \{G_{s\leftarrow 1i}\}\bigr)$$

where $(x_{s\leftarrow 1i},\phi_{s\leftarrow 1i})$ and $(x_{s\leftarrow 2i},\phi_{s\leftarrow 2i})$ are the samples $(x_1,\phi_1)$ and $(x_2,\phi_2)$ mapped onto the standard grid; $\{S_{s\leftarrow 1i}\}$ and $\{S_{s\leftarrow 2i}\}$ are, for point $i$ of the respective sample, the sets of the three nearest standard-grid spatial points used for the $x$ values; $\{G_{s\leftarrow 1i}\}$ and $\{G_{s\leftarrow 2i}\}$ are the corresponding sets used for the $\phi$ values.
In the embodiment of the present disclosure, in step 1.3.2, the single-light-source simulation samples are combined point-wise on the standard grid to obtain a multi-light-source simulation sample $(x_s,\phi_s)$:

$$x_{si} = x_{1i} + x_{2i}, \qquad \phi_{si} = \phi_{1i} + \phi_{2i}$$

where $x_{si}$ and $\phi_{si}$ are the values of $x_s$ and $\phi_s$ at grid point $i$ of the standard grid.
In the embodiment of the present disclosure, in step 2.2, the relationship between the $k$-th hidden layer $Lh_k$ and the previous hidden layer $Lh_{k-1}$ (or the input layer $L_i$ when $k=1$) is:

$$Lh_k = \mathrm{Dropout}_p\!\left(\mathrm{Relu}\!\left(M_{k-1,k}\,Lh_{k-1} + b_{k-1,k}\right)\right), \qquad Lh_1 = \mathrm{Dropout}_p\!\left(\mathrm{Relu}\!\left(M_{i,1}\,L_i + b_{i,1}\right)\right)$$

where $\mathrm{Dropout}_p$ is a random function with $10\% \le P \le 20\%$; $M_{i,1}$ is the linear-unit link weight from the input layer to the first hidden layer and $b_{i,1}$ the link bias; $M_{k-1,k}$ is the link weight from the $(k-1)$-th to the $k$-th hidden layer and $b_{k-1,k}$ the link bias. The Relu function is a positive-value correction response unit that corrects negative values in the linear unit's output:

$$\mathrm{Relu}(Y) = \max(Y, 0)$$
In this embodiment of the present disclosure, in step 2.3, the data enters the output layer after the hidden layers; the output layer is a linear output unit whose result is corrected by the Relu function so that elements smaller than 0 are cleared:

$$L_o = \mathrm{Relu}\!\left(M_{k,o}\,Lh_k + b_{k,o}\right)$$

where $L_o$ is the output of the output layer, $M_{k,o}$ is the linear link weight from the last hidden layer $k$ to the output layer, and $b_{k,o}$ the link bias.
In an embodiment of the present disclosure, step 3 comprises iterative training with the Adam optimization method under a loss function. For all linear link weights $M$ and linear biases $b$ in the model, the mean square error (MSE) between the output of the output layer and the true result serves as the loss:

$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(X_{out,i} - X_{true,i}\right)^2$$

where $X_{out}$ is the reconstructed fluorescent light source distribution output by the output layer, $X_{true}$ is the known fluorescent light source distribution of the training sample, and $N$ is the number of grid points.
In the embodiment of the present disclosure, in step 4, real in vivo data of an organism of the same kind is imported into the multilayer perceptron, and the in vivo biological autofluorescence light source distribution of the actual living body is then reconstructed.
(III) advantageous effects
According to the technical scheme, the biological autofluorescence three-dimensional imaging method based on the multilayer perceptron has at least one of the following beneficial effects:
(1) based on the statistical machine learning theory, the reconstruction process requires neither image segmentation of biological organs nor optical parameters to describe photon propagation in the different organ regions;
(2) it avoids the problems of the traditional photon propagation model method, such as inaccurate organ distribution description (organ region segmentation) and inaccurate organ optical parameters;
(3) the reconstruction accuracy of biological autofluorescence computed tomography is improved;
(4) Monte Carlo simulation is used to construct the training samples, and the sample combination method increases their number, thereby improving the reconstruction capability of the multilayer perceptron.
Drawings
FIG. 1 is a schematic frame flow diagram of a biological autofluorescence three-dimensional imaging method based on a multilayer perceptron in an embodiment of the disclosure.
Fig. 2 is a schematic diagram of a light source distribution simulation result of a glioma according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of a distribution structure of a multi-layered perceptron according to an embodiment of the present disclosure.
FIG. 4 is a diagram of the distribution results of the biological autofluorescence light sources in mice according to the embodiment of the disclosure.
Detailed Description
The method is based on the statistical machine learning theory. The reconstruction process requires neither image segmentation of biological organs nor optical parameters describing photon propagation in the different organ regions; it only requires a trained multilayer perceptron capable of biological autofluorescence three-dimensional imaging, into which the autofluorescence distribution on the body surface is fed as the input image and which outputs the light source distribution of the biological autofluorescence inside the body. Monte Carlo simulation is regarded in the biological autofluorescence field as the simulation method closest to the real photon propagation process. Therefore, given a known biological autofluorescence light source distribution, Monte Carlo simulation generates the corresponding fluorescent spot on the body surface, providing biological autofluorescence training samples; these are further expanded to enlarge the training set and hence the training scale of the multilayer perceptron, improving its reconstruction capability.
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
In an embodiment of the present disclosure, fig. 1 is a schematic flow chart of a framework of a multilayer perceptron-based three-dimensional biological autofluorescence imaging method, and as shown in fig. 1, the multilayer perceptron-based three-dimensional biological autofluorescence imaging method includes:
step 1: generating a training sample set by using Monte Carlo simulation; the method comprises the following substeps:
step 1.1: constructing a sample model;
Image segmentation is performed on X-CT image data of organisms (3 to 5 cases) of the same type as the subject to be imaged; gridded data are constructed, the optical properties of the different segmented regions are described with the optical parameters of the corresponding organs, a simulation mesh is built, and the sample model is thus generated.
All X-CT image data used in this step are acquired with the organism fixed in the same spatial position.
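The label-to-optical-parameters step can be sketched as follows; this is a minimal NumPy sketch in which the organ labels and the (μa, μs') values are illustrative placeholders, not the patent's measured parameters:

```python
import numpy as np

# Hypothetical organ label -> (absorption mu_a, reduced scattering mu_s') in mm^-1.
# Values are placeholders for illustration only.
OPTICAL_PARAMS = {
    0: (0.0001, 0.01),   # background / air
    1: (0.02,   1.0),    # soft tissue
    2: (0.05,   2.0),    # bone
}

def assign_optical_properties(label_volume):
    """Map each voxel's segmentation label to mu_a and mu_s' maps."""
    mu_a  = np.zeros(label_volume.shape)
    mu_sp = np.zeros(label_volume.shape)
    for label, (a, s) in OPTICAL_PARAMS.items():
        mask = label_volume == label
        mu_a[mask]  = a
        mu_sp[mask] = s
    return mu_a, mu_sp
```

The resulting maps are what a Monte Carlo engine consumes when tracing photons region by region.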
Step 1.2: constructing a simulation sample;
optical simulation is carried out by using a Monte Carlo simulation (MOSE) platform 'molecular optical simulation environment', spherical light sources are set in the simulation grid constructed in the step 1.1, the radius of a single spherical light source is 0.2mm, the distance between adjacent spherical light sources is 0.5mm, biological autofluorescence simulation samples are respectively generated at different light source positions, and a single light source simulation sample is formed.
The distance between a light source position and the body surface is no more than 7 mm, and enough light sources are placed to traverse all regions meeting this imaging depth.
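The depth constraint above can be expressed as a simple filter over mesh nodes. A minimal sketch, assuming nodes and surface nodes are given as coordinate arrays (the function name and data layout are illustrative):

```python
import numpy as np

def candidate_source_centers(nodes, surface_nodes, max_depth=7.0):
    """Keep only mesh nodes whose distance to the nearest surface node is
    at most max_depth (mm), i.e. the region where the patent allows
    spherical light sources to be placed. The 0.5 mm spacing between
    adjacent sphere centers is assumed to come from the node layout."""
    # distance from every node to every surface node, then the minimum
    d = np.linalg.norm(nodes[:, None, :] - surface_nodes[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return nodes[nearest <= max_depth]
```

Traversing the returned nodes and running one Monte Carlo simulation per node yields the single-light-source sample set.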
In an embodiment of the present disclosure, fig. 2 is a schematic diagram of the simulated light source distribution for a glioma. As shown in fig. 2, the light sources are mainly distributed in the brain parenchyma enclosed by the skull, within a region no deeper than 7 mm from the surface of the top of the head (the dashed line in fig. 2 marks this range). These simulated cases are recorded as single-light-source simulation samples, which together form the single-light-source simulation sample set.
Step 1.3: expanding simulation samples for training by using a sample combination method;
on the basis of the simulation samples obtained in the step 1.2, the single light source simulation samples are combined by using a sample combination method to obtain multi-light source simulation samples, and then training samples are expanded.
The surface fluorescence distribution data of the organism is denoted $\phi$, and the in vivo biological autofluorescence light source distribution is denoted $x$. Taking the combination of a double-light-source simulation sample as a specific embodiment, the two surface fluorescence distributions are $\phi_1$ and $\phi_2$ and the corresponding in vivo light source distributions are $x_1$ and $x_2$. The specific combination steps are as follows:
step 1.3.1: and selecting one grid in the single-light-source simulation sample set as a standard grid of the whole sample set.
If the biological autofluorescence simulation samples $(x_1,\phi_1)$ and $(x_2,\phi_2)$ are not on the same simulation grid, both are mapped onto the standard grid. For each mapped point, the three standard-grid points nearest to it are selected and the mapped value is distributed evenly among them:

$$x_{sj} \leftarrow x_{sj} + \frac{x_{1i}}{3} \quad \bigl(j \in \{S_{s\leftarrow 1i}\}\bigr), \qquad \phi_{sj} \leftarrow \phi_{sj} + \frac{\phi_{1i}}{3} \quad \bigl(j \in \{G_{s\leftarrow 1i}\}\bigr)$$

where $(x_{s\leftarrow 1i},\phi_{s\leftarrow 1i})$ and $(x_{s\leftarrow 2i},\phi_{s\leftarrow 2i})$ are the samples $(x_1,\phi_1)$ and $(x_2,\phi_2)$ mapped onto the standard grid; $\{S_{s\leftarrow 1i}\}$ and $\{S_{s\leftarrow 2i}\}$ are, for point $i$ of the respective sample, the sets of the three nearest standard-grid points used for the $x_{si}$ values ($x_{si}$ being the $x_s$ value at grid point $i$ of the standard grid); $\{G_{s\leftarrow 1i}\}$ and $\{G_{s\leftarrow 2i}\}$ are the corresponding sets used for the $\phi_{si}$ values ($\phi_{si}$ being the $\phi_s$ value at grid point $i$ of the standard grid);
if the sample to be merged originally belongs to the standard grid, the step 1.3.2 is directly entered.
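The nearest-three-points mapping of step 1.3.1 can be sketched as below; the function name and array layout are illustrative assumptions:

```python
import numpy as np

def map_to_standard_grid(sample_points, sample_values, standard_points):
    """For every point of a simulation sample, find the three nearest
    standard-grid points and give each of them one third of the value,
    so the total value is conserved on the standard grid."""
    mapped = np.zeros(len(standard_points))
    for p, v in zip(sample_points, sample_values):
        d = np.linalg.norm(standard_points - p, axis=1)
        nearest3 = np.argsort(d)[:3]      # indices of the 3 closest grid points
        mapped[nearest3] += v / 3.0       # distribute the value evenly
    return mapped
```

The same routine is applied to both the light source distribution $x$ and the surface fluorescence $\phi$.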
Step 1.3.2: combining the simulation samples with single light source to obtain multiple light source simulation samples (x)s,φs) The formula is as follows:
Figure BDA0001645465380000073
wherein xsi,φsiAre respectively simulation samples (x)s,φs) X on i grid point in standard gridsAnd phisThe value is obtained.
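Assuming surface fluorescence adds linearly across independent sources (the premise behind combining single-source samples), step 1.3.2 reduces to a point-wise sum over the standard grid:

```python
import numpy as np

def combine_samples(x1, phi1, x2, phi2):
    """Combine two single-light-source samples already mapped to the
    standard grid into one multi-light-source sample by point-wise
    addition of both the source distributions and the surface spots."""
    return x1 + x2, phi1 + phi2
```

Combining k single-source samples this way multiplies the number of distinct training cases far beyond the simulated set.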
Step 2: and constructing a multilayer perceptron.
After the training sample expansion is completed, a multilayer perceptron is constructed, and the multilayer perceptron comprises: an input layer, a hidden layer and an output layer. Wherein, the number of the hidden layers is 4.
In an embodiment of the present disclosure, fig. 3 is a schematic diagram of the structure of the multilayer perceptron. As shown in fig. 3, it comprises: 1 input layer on one side, 4 hidden layers adjacent to it, and 1 output layer on the other side. Each layer has as many units as the standard grid has points. Each hidden layer is activated by a Relu function followed by a Dropout function, and the output layer by a Relu function. The construction of the multilayer perceptron is divided into the following sub-steps:
step 2.1: the biosurface fluorescence distribution data (phi) is imported as input data of the multilayer perceptron into the input layer (denoted as Li layer).
The number of units of the input layer equals the number of standard-grid points, and the input value of each unit is the surface fluorescence intensity at the corresponding grid point.
Step 2.2: input data is connected to the hidden layer Lh via an input layer1(the hidden layers are collectively called Lh layers, and each hidden layer is named Lh in sequencekAnd k is a serial number). The hidden layers are then connected layer by layer and finally to the output layer Lo
In the embodiment of the disclosure, the hidden layers have the same number of units as the input layer, and the relationship between the $k$-th hidden layer $Lh_k$ and the previous hidden layer $Lh_{k-1}$ (or the input layer $L_i$ when $k=1$) is:

$$Lh_k = \mathrm{Dropout}_p\!\left(\mathrm{Relu}\!\left(M_{k-1,k}\,Lh_{k-1} + b_{k-1,k}\right)\right), \qquad Lh_1 = \mathrm{Dropout}_p\!\left(\mathrm{Relu}\!\left(M_{i,1}\,L_i + b_{i,1}\right)\right)$$

where the $\mathrm{Dropout}_p$ function is a random function that generates a random number for each element of the input vector, i.e. each element is zeroed with probability $P$, with $10\% \le P \le 20\%$. $M_{i,1}$ is the linear-unit link weight from the input layer to the first hidden layer and $b_{i,1}$ the link bias; $M_{k-1,k}$ is the link weight from the $(k-1)$-th to the $k$-th hidden layer and $b_{k-1,k}$ the link bias. The Relu (rectified linear unit) function is a positive-value correction response unit that corrects negative values in the linear unit's output, i.e. any input $Y$ is corrected and output as the non-negative value $Y^{+}$:

$$Y^{+} = \mathrm{Relu}(Y) = \max(Y, 0)$$
step 2.3: and the data enters an output layer after passing through the hidden layer, the output layer is a linear output unit, and the output result is corrected by using a Relu function, so that elements smaller than 0 in the result are cleared. This operation satisfies a physical prior that the fluorescence photon intensity is greater than 0. The formula is as follows:
Lo=Relu(Mk,oLk+bk,o)
wherein L isoIs the output result of the output layer, Mk, o areLinear link weight of k layer of preceding hidden layer of output layer to output layer, bk,oIs the link bias.
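The layer equations of steps 2.2 and 2.3 can be sketched as a plain NumPy forward pass. The helper names and list layout of the weights are assumptions; dropout simply zeroes elements with probability p as described above (no rescaling is mentioned in the text) and is disabled at inference:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(y):
    return np.maximum(y, 0.0)            # Relu(Y) = max(Y, 0)

def dropout(y, p, training=True):
    if not training:
        return y
    keep = rng.random(y.shape) >= p      # each element zeroed with probability p
    return y * keep

def mlp_forward(phi, weights, biases, p=0.15, training=False):
    """Forward pass matching the patent's layer equations: 4 hidden layers
    with Relu + Dropout_p, then a linear output layer corrected by Relu.
    `weights`/`biases` list the 5 linear maps from input to output."""
    h = phi
    for M, b in zip(weights[:-1], biases[:-1]):
        h = dropout(relu(M @ h + b), p, training)   # Lh_k = Dropout_p(Relu(M h + b))
    return relu(weights[-1] @ h + biases[-1])       # L_o = Relu(M_{k,o} Lh_k + b_{k,o})
```

With identity weights and zero biases, negative entries of the input are clipped to zero at the first layer and everything else passes through unchanged, which makes the non-negativity prior easy to check.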
And step 3: and (5) training a model.
The multilayer perceptron and its weights constructed in step 2 are trained with the training sample set generated in step 1, specifically by iterative training with the adaptive moment estimation (Adam) optimization method under a loss function. For all linear link weights $M$ and linear biases $b$ in the model, the mean square error (MSE) between the output of the output layer and the true result serves as the loss:

$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(X_{out,i} - X_{true,i}\right)^2$$

where $X_{out}$ is the reconstructed fluorescent light source distribution output by the output layer, $X_{true}$ is the known fluorescent light source distribution of the training sample, and $N$ is the number of grid points.
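The MSE criterion can be written directly; the Adam update itself is left to an optimization library and not reproduced here:

```python
import numpy as np

def mse_loss(x_out, x_true):
    """Training criterion from step 3: mean square error between the
    reconstructed light source distribution and the known one."""
    return np.mean((x_out - x_true) ** 2)
```

During training this scalar is minimized over all link weights M and biases b.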
And 4, step 4: actual in vivo reconstruction.
The model and weights trained in step 3 are saved; using the biological autofluorescence multilayer perceptron constructed in step 2 and capable of computed tomography, real in vivo data of an organism of the same kind are imported, and the in vivo biological autofluorescence light source distribution of the actual living body is obtained by reconstruction.
In the embodiment of the present disclosure, taking obtaining an autofluorescence light source distribution image of a mouse as a specific embodiment, the method specifically includes the following sub-steps:
first, a biological autofluorescence image of the surface of the mouse was acquired.
That is, the mouse is fixed in the same spatial position as the X-CT images taken when the training set was prepared, and the bioluminescence image (BLI) and X-CT image are acquired.
The biological autofluorescence imaging BLI is then mapped from the two-dimensional image onto the standard grid used in the training set.
Finally, the mapped mouse surface biological autofluorescence distribution $\phi_{invivo}$ is input into the multilayer perceptron, and the corresponding in vivo biological autofluorescence light source distribution $x_{invivo}$ is obtained by reconstruction. FIG. 4 shows the resulting distribution of the bioluminescent light source in the mouse.
The embodiments of the present disclosure have thus been described in detail with reference to the accompanying drawings. It should be noted that implementations not shown or described in the drawings or the text are forms known to those of ordinary skill in the art and are not detailed here. Furthermore, the above definitions of the various elements and methods are not limited to the particular structures, shapes or arrangements mentioned in the examples, which may be easily modified or substituted by one of ordinary skill in the art; for example:
(1) the simulation sample can also be described as a simulation case;
(2) the perceptron may also be described as a network of perceptrons;
From the above description, those skilled in the art should have a clear understanding of the disclosed multilayer-perceptron-based biological autofluorescence three-dimensional imaging method.
In summary, the present disclosure provides a biological autofluorescence three-dimensional imaging method based on a multilayer perceptron, in which the multilayer perceptron is trained to perform biological autofluorescence three-dimensional imaging. Grounded in the machine-learning theory of statistical learning, the method requires neither image segmentation of biological organs during reconstruction nor optical parameters to describe photon propagation through the different organ regions: once the trained multilayer perceptron capable of biological autofluorescence three-dimensional imaging is obtained, the autofluorescence distribution on the surface of a living body is fed to it as the input image, and the distribution of the biological autofluorescence light source inside the body is output. Monte Carlo simulation is regarded in the field of biological autofluorescence as the simulation method closest to the real photon propagation process. Therefore, Monte Carlo simulation is used to generate, for a known biological autofluorescence light source distribution, the corresponding fluorescence spot on the biological surface, providing biological autofluorescence training samples. This expands the number of training samples, enlarges the training set and thus the training scale of the multilayer perceptron, and improves its reconstruction capability.
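The simulation-based training-set generation described above can be sketched as follows. A random linear operator stands in here for the Monte Carlo photon-transport simulation, and all sizes and names are hypothetical; the actual method runs the "molecular optical simulation environment" platform with segmented X-CT grids:

```python
import numpy as np

rng = np.random.default_rng(1)

n_vol, n_surf = 128, 64               # hypothetical volume and surface grid sizes
# A fixed non-negative linear operator stands in for the Monte Carlo
# photon-transport simulation from internal source to surface spot.
A = np.abs(rng.standard_normal((n_surf, n_vol)))

def make_sample():
    """Known internal source -> simulated surface fluorescence spot (training pair)."""
    x = np.zeros(n_vol)
    x[rng.integers(n_vol)] = 1.0      # one spherical light source at a random node
    phi = A @ x                       # simulated biological-surface fluorescence
    return x, phi

training_set = [make_sample() for _ in range(100)]
xs, phis = zip(*training_set)
assert all(p.shape == (n_surf,) for p in phis)
```

Pairs (x, φ) of this form, plus combined multi-source samples, would then serve as the training set for the perceptron.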
It should also be noted that directional terms, such as "upper", "lower", "front", "rear", "left", "right", and the like, used in the embodiments are only directions referring to the drawings, and are not intended to limit the scope of the present disclosure. Throughout the drawings, like elements are represented by like or similar reference numerals. Conventional structures or constructions will be omitted when they may obscure the understanding of the present disclosure.
And the shapes and sizes of the respective components in the drawings do not reflect actual sizes and proportions, but merely illustrate the contents of the embodiments of the present disclosure. Furthermore, in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim.
Unless otherwise indicated, the numerical parameters set forth in the specification and attached claims are approximations that can vary depending upon the desired properties sought to be obtained by the present disclosure. In particular, all numbers expressing quantities of ingredients, reaction conditions, and so forth used in the specification and claims are to be understood as being modified in all instances by the term "about". Generally, the expression is meant to encompass variations of ± 10% in some embodiments, 5% in some embodiments, 1% in some embodiments, 0.5% in some embodiments by the specified amount.
Furthermore, the word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements.
The use of ordinal numbers such as "first," "second," "third," etc., in the specification and claims to modify a corresponding element does not by itself connote any ordinal number of the element or any ordering of one element from another or the order of manufacture, and the use of the ordinal numbers is only used to distinguish one element having a certain name from another element having a same name.
In addition, unless steps are specifically described or must occur in sequence, the order of the steps is not limited to that listed above and may be changed or rearranged as desired by the desired design. The embodiments described above may be mixed and matched with each other or with other embodiments based on design and reliability considerations, i.e., technical features in different embodiments may be freely combined to form further embodiments.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, this disclosure is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the present disclosure as described herein, and any descriptions above of specific languages are provided for disclosure of enablement and best mode of the present disclosure.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Also in the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the disclosure, various features of the disclosure are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various disclosed aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that is, the claimed disclosure requires more features than are expressly recited in each claim. Rather, as the following claims reflect, disclosed aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this disclosure.
The above-mentioned embodiments are intended to illustrate the objects, aspects and advantages of the present disclosure in further detail, and it should be understood that the above-mentioned embodiments are only illustrative of the present disclosure and are not intended to limit the present disclosure, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (10)

1. A biological autofluorescence three-dimensional imaging method based on a multilayer perceptron comprises the following steps:
step 1: generating a training sample set by using Monte Carlo simulation;
step 2: constructing a multilayer perceptron, wherein the multilayer perceptron comprises an input layer, a hidden layer and an output layer;
step 3: model training, namely performing model training on the multilayer perceptron constructed in step 2 and its weights using the training sample set generated in step 1; and
step 4: actual in-vivo reconstruction, namely storing the model and weights trained in step 3, and reconstructing with the multilayer perceptron constructed in step 2 to obtain the actual in-vivo distribution of the biological autofluorescence light source.
2. The biological autofluorescence three-dimensional imaging method based on multi-layered perceptron according to claim 1, wherein the step 1 comprises:
step 1.1: constructing a sample model;
carrying out image segmentation on X-CT image data of an organism of the same type as the imaging subject to construct gridded data, describing the optical properties of the different segmented regions with the optical parameters of the corresponding organs to construct a simulation grid, and thereby generating a sample model;
step 1.2: constructing a simulation sample;
performing optical simulation using the Monte Carlo simulation platform "molecular optical simulation environment", setting a spherical light source in the simulation grid constructed in step 1.1, and generating biological autofluorescence simulation samples at different light source positions to form single-light-source simulation samples; the distance between the light source position and the surface of the organism is not more than 7 mm, and the number of light sources is determined by traversing all regions that satisfy this imaging distance; and
step 1.3: expanding simulation samples for training by using a sample combination method;
on the basis of the simulation samples obtained in step 1.2, combining single-light-source simulation samples by a sample combination method to obtain multi-light-source simulation samples, thereby expanding the training samples; step 1.3 comprises the following steps:
step 1.3.1: selecting, from the single-light-source simulation sample set, one simulation grid as the standard grid of the whole sample set, and mapping simulation samples that are not on this grid onto the standard grid; and
step 1.3.2: and combining the single light source simulation samples to obtain a multi-light source simulation sample.
3. The biological autofluorescence three-dimensional imaging method based on multi-layered perceptron according to claim 1, wherein the step 2 comprises:
step 2.1: importing biological surface fluorescence distribution data serving as input data of a multilayer perceptron into an input layer;
step 2.2: input data is connected to the hidden layer through the input layer; and
step 2.3: data enters the output layer after passing through the hidden layer; the output layer is a linear output unit, and the output result is corrected by a Relu function, so that elements smaller than 0 in the result are cleared.
4. The biological autofluorescence three-dimensional imaging method based on multi-layered perceptron according to claim 1, wherein the multilayer perceptron constructed in step 2 has 4 hidden layers.
5. The multilayer perceptron-based biological autofluorescence three-dimensional imaging method according to claim 2, wherein in step 1.3.1, when the biological autofluorescence simulation samples (x_1, φ_1) and (x_2, φ_2) are not in the same simulation grid, (x_1, φ_1) and (x_2, φ_2) are mapped onto the standard grid, the three points of the standard grid closest to each mapped point are selected, and the mapped value is distributed equally to these three points using the following formulas:
x_{s←1,i} = Σ_{x ∈ S_{s←1,i}} x/3,  x_{s←2,i} = Σ_{x ∈ S_{s←2,i}} x/3
φ_{s←1,i} = Σ_{φ ∈ G_{s←1,i}} φ/3,  φ_{s←2,i} = Σ_{φ ∈ G_{s←2,i}} φ/3
recording the fluorescence distribution on the surface of the organism as φ and the distribution of the biological autofluorescence light source inside the organism as x; the surface fluorescence distributions are φ_1 and φ_2, and the internal light source distributions are x_1 and x_2, wherein (x_{s←1,i}, φ_{s←1,i}) and (x_{s←2,i}, φ_{s←2,i}) are respectively the samples (x_1, φ_1) and (x_2, φ_2) mapped onto the standard grid; {S_{s←1,i}} and {S_{s←2,i}} are respectively the sets of x values at the three spatial points of the standard grid closest to simulation samples (x_1, φ_1) and (x_2, φ_2); and {G_{s←1,i}} and {G_{s←2,i}} are respectively the sets of φ values at those three closest spatial points.
6. The biological autofluorescence three-dimensional imaging method based on multi-layered perceptron according to claim 2, wherein in step 1.3.2 the single-light-source simulation samples are combined to obtain a multi-light-source simulation sample (x_s, φ_s) using the following formula:
x_{s,i} = x_{s←1,i} + x_{s←2,i},  φ_{s,i} = φ_{s←1,i} + φ_{s←2,i}
recording the fluorescence distribution on the surface of the organism as φ and the distribution of the biological autofluorescence light source inside the organism as x; wherein x_{s,i} and φ_{s,i} are respectively the values of x_s and φ_s at grid point i of the standard grid.
7. The biological autofluorescence three-dimensional imaging method based on multi-layered perceptron according to claim 3, wherein in step 2.2 the relationship between the k-th hidden layer L_{hk} and the previous hidden layer L_{h(k-1)}, or the input layer L_i, is given by:
L_{h1} = Dropout_p(Relu(M_{i,1} L_i + b_{i,1}))
L_{hk} = Dropout_p(Relu(M_{k-1,k} L_{h(k-1)} + b_{k-1,k})), k ≥ 2
wherein Dropout_p is a random function with 10% ≤ p ≤ 20%; M_{i,k} is the linear-unit link weight from the input layer to the current hidden layer and b_{i,k} is the link bias; M_{k-1,k} is the link weight from the (k-1)-th hidden layer to the k-th hidden layer and b_{k-1,k} is the link bias; the Relu function is a positive-value correction response unit used to correct negative values in the output of the linear unit, given by:
Relu(x) = max(x, 0)
8. The biological autofluorescence three-dimensional imaging method based on the multilayer perceptron according to claim 3, wherein in step 2.3 the data enters the output layer after passing through the hidden layers; the output layer is a linear output unit, and the Relu function corrects the output result so that elements less than 0 are cleared, according to the formula:
L_o = Relu(M_{k,o} L_k + b_{k,o})
wherein L_o is the output of the output layer, M_{k,o} is the linear link weight from the k-th hidden layer (the layer preceding the output layer) to the output layer, and b_{k,o} is the link bias.
9. The biological autofluorescence three-dimensional imaging method based on multi-layered perceptron according to claim 1, wherein step 3 comprises: performing iterative training on all linear link weights M and linear biases b in the model with the Adam optimization method, taking the mean square error MSE between the output of the output layer and the true result as the loss function:
MSE = (1/N) Σ_{i=1}^{N} (x_{out,i} − x_{true,i})²
wherein x_out is the fluorescent light source distribution reconstructed by the output layer and x_true is the known fluorescent light source distribution of the training sample.
10. The biological autofluorescence three-dimensional imaging method based on the multilayer perceptron according to claim 1, wherein in step 4 real in-vivo data of an organism of the same kind is imported into the multilayer perceptron, and the actual in-vivo distribution of the biological autofluorescence light source is obtained by reconstruction.
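A minimal numerical sketch of the network equations in claims 7-9 (toy layer sizes and randomly initialized weights, all hypothetical; this illustrates the forward pass and the MSE loss only, not the claimed Adam training procedure):

```python
import numpy as np

def relu(z):
    # claim 7: positive-value correction unit; negative linear outputs are cleared
    return np.maximum(z, 0.0)

def dropout(h, p, rng):
    # claim 7: Dropout_p random function with 10% <= p <= 20%
    # (no inverse-p rescaling is described in the claim, so none is applied here)
    return h * (rng.random(h.shape) >= p)

rng = np.random.default_rng(0)
sizes = [16, 8, 8, 8, 8, 32]           # input, 4 hidden layers (claim 4), output
Ms = [rng.standard_normal((sizes[i + 1], sizes[i])) * 0.1 for i in range(5)]
bs = [np.zeros(sizes[i + 1]) for i in range(5)]

L = rng.random(16)                     # input layer L_i: surface fluorescence phi
for M, b in zip(Ms[:-1], bs[:-1]):     # L_hk = Dropout_p(Relu(M L + b))
    L = dropout(relu(M @ L + b), 0.15, rng)
x_out = relu(Ms[-1] @ L + bs[-1])      # claim 8: L_o = Relu(M_{k,o} L_k + b_{k,o})

x_true = rng.random(32)                # known source distribution of a training sample
mse = np.mean((x_out - x_true) ** 2)   # claim 9 loss function
assert x_out.shape == (32,) and mse >= 0.0
```

In training, the Adam optimizer of claim 9 would iteratively update every M and b to reduce this MSE over the simulated sample set.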
CN201810407969.9A 2018-04-28 2018-04-28 Biological autofluorescence three-dimensional imaging method based on multilayer perceptron Active CN108451508B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810407969.9A CN108451508B (en) 2018-04-28 2018-04-28 Biological autofluorescence three-dimensional imaging method based on multilayer perceptron

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810407969.9A CN108451508B (en) 2018-04-28 2018-04-28 Biological autofluorescence three-dimensional imaging method based on multilayer perceptron

Publications (2)

Publication Number Publication Date
CN108451508A CN108451508A (en) 2018-08-28
CN108451508B true CN108451508B (en) 2020-05-05

Family

ID=63214447

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810407969.9A Active CN108451508B (en) 2018-04-28 2018-04-28 Biological autofluorescence three-dimensional imaging method based on multilayer perceptron

Country Status (1)

Country Link
CN (1) CN108451508B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109949404A (en) * 2019-01-16 2019-06-28 深圳市旭东数字医学影像技术有限公司 Based on Digital Human and CT and/or the MRI image three-dimensional rebuilding method merged and system
CN110974166B (en) * 2019-12-10 2021-03-12 中国科学院自动化研究所 Optical tomography method and system based on K-nearest neighbor local connection network
CN111795955B (en) * 2020-06-22 2023-08-04 天津大学 Fluorescence pharmacokinetics tomography method based on multilayer perception neural network
CN112137581A (en) * 2020-08-26 2020-12-29 西北大学 Cerenkov fluorescence tomography reconstruction method based on multilayer perception network
CN113409466B (en) * 2021-07-06 2023-08-25 中国科学院自动化研究所 Excitation fluorescence tomography method based on GCN residual error connection network
CN113254918B (en) * 2021-07-14 2021-10-12 杭州云信智策科技有限公司 Information processing method, electronic device, and computer-readable storage medium
CN114117875B (en) * 2021-11-05 2024-06-04 东北大学 Fast Monte Carlo simulation method for simulating photon propagation
CN118363014B (en) * 2024-06-20 2024-08-16 北京航空航天大学 Underwater non-visual field data acquisition system and underwater non-visual field object modeling method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007032940A2 (en) * 2005-09-12 2007-03-22 Rutgers, The State University Of New Jersey Office Of Corporate Liaison And Technology Transfer System and methods for generating three-dimensional images from two-dimensional bioluminescence images and visualizing tumor shapes and locations
CN101539518A (en) * 2008-03-20 2009-09-23 中国科学院自动化研究所 Finite-element reconstruction method for space weighting of auto-fluorescence imaging
CN101947103A (en) * 2010-09-20 2011-01-19 西安电子科技大学 Optical bioluminescence tomography method
CN103271723A (en) * 2013-06-26 2013-09-04 西安电子科技大学 Bioluminescence tomography reconstruction method
CN105326475A (en) * 2015-09-16 2016-02-17 西北大学 Bioluminescence tomography reconstruction method based on multi-light-source resolution
CN106097441A (en) * 2016-06-25 2016-11-09 北京工业大学 Compound regularization Bioluminescence tomography reconstruction method based on L1 norm Yu TV norm
CN107045728A (en) * 2016-12-14 2017-08-15 北京工业大学 Bioluminescence fault imaging is combined the auto-adaptive parameter system of selection that regularization is rebuild
CN107146261A (en) * 2017-03-21 2017-09-08 中国医学科学院北京协和医院 Bioluminescence fault imaging Quantitative Reconstruction method based on nuclear magnetic resonance image priori region of interest

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201341209Y (en) * 2009-02-19 2009-11-04 广州中科恺盛医疗科技有限公司 Three-dimensional imaging data acquisition device
CN101766476B (en) * 2009-07-08 2011-05-11 中国科学院自动化研究所 Auto-fluorescence molecule imaging system
CN103300829B (en) * 2013-06-25 2015-01-07 中国科学院自动化研究所 Biological autofluorescence tomography method based on iteration reweighting
CN106097437B (en) * 2016-06-14 2019-03-15 中国科学院自动化研究所 Archebiosis light three-D imaging method based on pure optical system
CN107374588B (en) * 2017-08-01 2020-05-19 西北大学 Multi-light-source fluorescent molecular tomography reconstruction method based on synchronous clustering
CN107392977B (en) * 2017-08-22 2021-04-13 西北大学 Single-view Cerenkov luminescence tomography reconstruction method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007032940A2 (en) * 2005-09-12 2007-03-22 Rutgers, The State University Of New Jersey Office Of Corporate Liaison And Technology Transfer System and methods for generating three-dimensional images from two-dimensional bioluminescence images and visualizing tumor shapes and locations
CN101539518A (en) * 2008-03-20 2009-09-23 中国科学院自动化研究所 Finite-element reconstruction method for space weighting of auto-fluorescence imaging
CN101947103A (en) * 2010-09-20 2011-01-19 西安电子科技大学 Optical bioluminescence tomography method
CN103271723A (en) * 2013-06-26 2013-09-04 西安电子科技大学 Bioluminescence tomography reconstruction method
CN105326475A (en) * 2015-09-16 2016-02-17 西北大学 Bioluminescence tomography reconstruction method based on multi-light-source resolution
CN106097441A (en) * 2016-06-25 2016-11-09 北京工业大学 Compound regularization Bioluminescence tomography reconstruction method based on L1 norm Yu TV norm
CN107045728A (en) * 2016-12-14 2017-08-15 北京工业大学 Bioluminescence fault imaging is combined the auto-adaptive parameter system of selection that regularization is rebuild
CN107146261A (en) * 2017-03-21 2017-09-08 中国医学科学院北京协和医院 Bioluminescence fault imaging Quantitative Reconstruction method based on nuclear magnetic resonance image priori region of interest

Also Published As

Publication number Publication date
CN108451508A (en) 2018-08-28

Similar Documents

Publication Publication Date Title
CN108451508B (en) Biological autofluorescence three-dimensional imaging method based on multilayer perceptron
CN109191564B (en) Depth learning-based three-dimensional reconstruction method for fluorescence tomography
Babier et al. Knowledge‐based automated planning with three‐dimensional generative adversarial networks
Zhang et al. ME‐Net: multi‐encoder net framework for brain tumor segmentation
CN110009669B (en) 3D/2D medical image registration method based on deep reinforcement learning
CN103271723B (en) Bioluminescence tomography reconstruction method
CN110047056A (en) With the cross-domain image analysis and synthesis of depth image to image network and confrontation network
CN109410273A (en) According to the locating plate prediction of surface data in medical imaging
JP6782051B2 (en) Atlas-based automatic segmentation enhanced by online learning
Li et al. DenseX-net: an end-to-end model for lymphoma segmentation in whole-body PET/CT images
CN102334979B (en) Bimodal fusion tomography method based on iterative shrinkage
US20210035340A1 (en) Ct big data from simulation, emulation and transfer learning
CN107392977A (en) Single-view Cherenkov lights tomography rebuilding method
CN111915733A (en) LeNet network-based three-dimensional cone-beam X-ray luminescence tomography method
US9295432B2 (en) Method of determining distribution of a dose in a body
CN110974166B (en) Optical tomography method and system based on K-nearest neighbor local connection network
JP7246116B1 (en) PET image reconstruction method, apparatus, device and medium based on transformer feature sharing
CN109166103B (en) Excitation fluorescence tomography method based on multilayer perception network
CN110327018A (en) Degree of rarefication adaptively organizes the excitation fluorescence cross sectional reconstruction method of orthogonal matching pursuit
JP2021087629A (en) Medical data processing device
Kläser et al. Deep boosted regression for MR to CT synthesis
Gupta et al. Study on anatomical and functional medical image registration methods
US20220308242A1 (en) X-ray photon-counting data correction through deep learning
Xiong et al. Automatic 3D surface reconstruction of the left atrium from clinically mapped point clouds using convolutional neural networks
CN113205567A (en) Method for synthesizing CT image by MRI image based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant