CN114587272B - In-vivo fluorescence imaging deblurring method based on deep learning - Google Patents
- Publication number
- CN114587272B CN114587272B CN202210182173.4A CN202210182173A CN114587272B CN 114587272 B CN114587272 B CN 114587272B CN 202210182173 A CN202210182173 A CN 202210182173A CN 114587272 B CN114587272 B CN 114587272B
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0071—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by measuring fluorescence emission
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
Abstract
The invention belongs to the technical field of fluorescence imaging and discloses an in-vivo fluorescence imaging deblurring method based on deep learning. Exploiting the high resolution of near-infrared region II (NIR-II) fluorescence images, a near-infrared region I (NIR-I) fluorescence camera and an NIR-II fluorescence camera sharing a common field of view simultaneously acquire fluorescence images of a mouse. The acquired in-vivo fluorescence images are randomly divided, in a set proportion, into a training data set, a validation data set, and a test data set. A generative adversarial network (GAN) that converts NIR-I fluorescence images into NIR-II fluorescence images is constructed using a full-gradient loss method, and the network is trained on the acquired in-vivo fluorescence images. The trained network is then applied to an NIR-I fluorescence image to obtain a deblurred fluorescence image that combines the sensitivity of the NIR-I image with the sharpness of the NIR-II image.
Description
Technical Field
The invention belongs to the technical field of fluorescence imaging and in particular relates to an in-vivo fluorescence imaging deblurring method based on deep learning.
Background
At present, near-infrared fluorescence imaging mainly relies on fluorescent-probe labeling to track and observe changes in a region of interest in biological tissues and cells, and offers speed, simplicity, freedom from radiation damage, non-invasiveness, and high sensitivity.
Near-infrared fluorescence imaging can be divided into near-infrared region I (NIR-I) imaging, at wavelengths of 700 nm to 1000 nm, and near-infrared region II (NIR-II) imaging, at 1000 nm to 1700 nm. NIR-I imaging offers low background noise and high signal intensity; however, because light scattering is strong in this band, tissue autofluorescence and background interference are large, tissue penetration is shallow, and the resulting images are blurred and of low resolution. NIR-II imaging benefits from weak scattering and absorption in biological tissue, deep penetration, low tissue autofluorescence, and a high signal-to-background ratio, yielding sharp, high-resolution images; however, because the NIR-II signal intensity is low, some details are rendered less well than in NIR-I imaging. Moreover, most current fluorescent probes emit in the NIR-I window and cannot be used for NIR-II imaging. A new in-vivo fluorescence imaging deblurring method and system are therefore needed to remedy these drawbacks of the prior art.
From the above analysis, the problems and defects of the prior art are as follows:
(1) Because light scattering is strong in the NIR-I band, NIR-I imaging suffers from large tissue autofluorescence and background interference, shallow tissue penetration, blurred images, and low resolution.
(2) Because the NIR-II signal intensity is low, some details of NIR-II imaging are rendered less well than in NIR-I imaging.
(3) Most current fluorescent probes emit in the NIR-I window and cannot be used for NIR-II imaging.
The difficulties in solving these problems are as follows:
1. Achieving simultaneous, shared-field-of-view imaging with the NIR-I and NIR-II fluorescence cameras.
2. Acquiring a suitable data set.
The significance of solving these problems is as follows:
1. NIR-II fluorescence images can be obtained without using NIR-II probes.
2. A fluorescence image combining the sensitivity of NIR-I imaging with the sharpness of NIR-II imaging can be obtained.
Disclosure of Invention
Aiming at the problems existing in the prior art, the invention provides an in-vivo fluorescence imaging deblurring method based on deep learning.
The invention is realized as follows. The in-vivo fluorescence imaging deblurring method exploits the high resolution of NIR-II fluorescence images: an NIR-I fluorescence camera and an NIR-II fluorescence camera sharing a common field of view simultaneously acquire fluorescence images of a mouse; the acquired in-vivo fluorescence images are randomly divided, in a set proportion, into a training data set, a validation data set, and a test data set; a generative adversarial network (GAN) that converts NIR-I fluorescence images into NIR-II fluorescence images is constructed using a full-gradient loss method, and the constructed network is trained on the acquired in-vivo fluorescence images; the trained network is then applied to an NIR-I fluorescence image to obtain a deblurred fluorescence image that has the sensitivity of the NIR-I image and the sharpness of the NIR-II image.
Further, the in-vivo fluorescence imaging deblurring method based on deep learning comprises the following steps:
Step one, constructing a shared-field-of-view NIR-I/NIR-II imaging system;
Step two, acquiring in-vivo fluorescence images;
Step three, constructing the data set;
Step four, constructing the generative adversarial network with the full-gradient loss method;
Step five, obtaining the trained network and the deblurred NIR-I fluorescence image.
Further, constructing the shared-field-of-view NIR-I/NIR-II imaging system in step one involves:
an NIR-I fluorescence camera for collecting NIR-I fluorescence images; an 850 nm filter for blocking optical signals below the 850 nm band; an NIR-II fluorescence camera for collecting NIR-II fluorescence images; a 1300 nm filter for blocking optical signals below the 1300 nm band; a beam-splitting cube for delivering the fluorescence image to the NIR-I and NIR-II fluorescence cameras simultaneously; two convex lenses, each with a focal length of 100 mm; a lens for enlarging the imaging field of view; a mirror for reflecting the image of the mouse into the lens at a set angle; an excitation light source for emitting excitation light; a computer for collecting the fluorescence images formed by the NIR-I and NIR-II fluorescence cameras; and a gas anesthesia device for anesthetizing the mouse and supplying oxygen.
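The components above can be summarized as a small configuration sketch. The cutoff wavelengths, focal lengths, and band limits come from the text; all field and function names below are illustrative assumptions, not part of the patent.

```python
import dataclasses

@dataclasses.dataclass
class ImagingSystemConfig:
    """Hypothetical parameter summary of the shared-field-of-view system."""
    nir1_filter_cutoff_nm: int = 850     # filter blocks light below 850 nm
    nir2_filter_cutoff_nm: int = 1300    # filter blocks light below 1300 nm
    lens_focal_lengths_mm: tuple = (100, 100)  # the two convex lenses
    nir1_band_nm: tuple = (700, 1000)    # NIR-I window
    nir2_band_nm: tuple = (1000, 1700)   # NIR-II window

    def check(self):
        """Each filter cutoff should sit inside its imaging band."""
        ok1 = self.nir1_band_nm[0] <= self.nir1_filter_cutoff_nm <= self.nir1_band_nm[1]
        ok2 = self.nir2_band_nm[0] <= self.nir2_filter_cutoff_nm <= self.nir2_band_nm[1]
        return ok1 and ok2
```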
Further, acquiring the in-vivo fluorescence images in step two involves:
switching on the excitation light source to irradiate the mouse with excitation light of a given wavelength and intensity, so that the injected indocyanine green (ICG) emits fluorescence; imaging with the constructed shared-field-of-view NIR-I/NIR-II in-vivo fluorescence imaging system; and collecting NIR-I and NIR-II fluorescence images of the mouse, acquiring the required number of fluorescence images for each body region of interest.
Constructing the data set in step three comprises:
randomly dividing the acquired NIR-I and NIR-II in-vivo fluorescence images of the same imaging target, in a set proportion, into a training data set, a validation data set, and a test data set.
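The random split into training, validation, and test sets can be sketched as follows. The 80/10/10 ratio, the fixed seed, and the function name are assumptions; the patent only specifies "a certain proportion".

```python
import random

def split_dataset(pairs, ratios=(0.8, 0.1, 0.1), seed=0):
    """Randomly split paired (NIR-I, NIR-II) images into train/val/test.

    `pairs` is a list of (nir1_image, nir2_image) tuples; `ratios` gives
    the assumed train/validation/test proportions.
    """
    assert abs(sum(ratios) - 1.0) < 1e-9
    shuffled = pairs[:]                      # leave the caller's list intact
    random.Random(seed).shuffle(shuffled)    # reproducible shuffle
    n = len(shuffled)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```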
Further, constructing the generative adversarial network with the full-gradient loss method in step four involves:
training on the data set with a generative adversarial network improved by the full-gradient loss method. The gradients between adjacent pixels of each pixel in the image are minimized: the gradients of the NIR-I and NIR-II in-vivo fluorescence images are compared, and then the gradients between the original image and the generated image are compared, establishing a loss function that provides feedback for network training.
In the generative adversarial network constructed with the full-gradient loss method, pretrained VGG and ResNet models are used to compute a perceptual loss, and the network is trained with a loss function whose hyperparameters are first set and then adjusted according to the loss curve and the performance on the validation data set; the monitored quantities include the generator loss and the peak signal-to-noise ratio (PSNR) obtained at the end of each epoch during training. The network downsamples the NIR-I image by strided convolution, extracting the main features through the learned parameters of the strided convolution kernels, and rescales the image using a ResNet backbone with sub-pixel convolution layers; finally, the validation data set is used to evaluate the network performance of each epoch.
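One plausible reading of the gradient comparison described above — the patent does not give an explicit formula, so the L1 norm and the `weight` parameter here are assumptions — is an L1 distance between the finite-difference gradient fields of the generated image and the NIR-II target:

```python
import numpy as np

def image_gradients(img):
    """Finite-difference gradients along x and y (edge row/column dropped)."""
    gx = img[:, 1:] - img[:, :-1]
    gy = img[1:, :] - img[:-1, :]
    return gx, gy

def full_gradient_loss(generated, target, weight=1.0):
    """L1 distance between gradient fields, as one sketch of a
    'full gradient' comparison between generated and target images."""
    gx_g, gy_g = image_gradients(np.asarray(generated, dtype=float))
    gx_t, gy_t = image_gradients(np.asarray(target, dtype=float))
    return weight * (np.abs(gx_g - gx_t).mean() + np.abs(gy_g - gy_t).mean())
```

An image identical to its target yields zero loss; any structural (edge) mismatch contributes positively, which is the feedback signal described for network training.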
Further, obtaining the trained network and the deblurred NIR-I fluorescence image in step five involves:
predicting from a blurred in-vivo fluorescence image with the trained generative adversarial network to generate a high-definition, deblurred in-vivo fluorescence image.
The NIR-I fluorescence image is deblurred by the trained network, yielding a deblurred fluorescence image that has the sensitivity of the NIR-I image and the sharpness of NIR-II imaging.
Another object of the present invention is to provide an in-vivo fluorescence imaging deblurring system applying the above in-vivo fluorescence imaging deblurring method based on deep learning, the system comprising:
an imaging system construction module, for constructing the shared-field-of-view NIR-I/NIR-II imaging system;
an in-vivo fluorescence image acquisition module, for acquiring NIR-I and NIR-II in-vivo fluorescence images;
a data set construction module, for randomly dividing the NIR-I and NIR-II in-vivo fluorescence images, in a set proportion, into a training data set, a validation data set, and a test data set;
a GAN construction module, for constructing the generative adversarial network with the full-gradient loss method;
a GAN training module, for training the constructed network on the constructed data set with the training method improved by the full-gradient loss, to obtain a trained network;
a network deblurring module, for deblurring the NIR-I fluorescence image with the trained network to obtain the deblurred NIR-I fluorescence image.
It is a further object of the present invention to provide a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps:
exploiting the high resolution of NIR-II fluorescence images, simultaneously acquiring fluorescence images of a mouse with an NIR-I fluorescence camera and an NIR-II fluorescence camera sharing a common field of view; randomly dividing the acquired in-vivo fluorescence images, in a set proportion, into a training data set, a validation data set, and a test data set; constructing, by deep learning, a generative adversarial network (GAN) that converts NIR-I fluorescence images into NIR-II fluorescence images using a full-gradient loss method, and training the constructed network on the acquired in-vivo fluorescence images; and applying the trained network to an NIR-I fluorescence image to obtain a deblurred fluorescence image that has the sensitivity of the NIR-I image and the sharpness of the NIR-II image.
Combining all of the above technical schemes, the advantages and positive effects of the invention are as follows: the in-vivo fluorescence imaging deblurring method based on deep learning exploits the high resolution of NIR-II fluorescence images, constructs by deep learning a generative adversarial network that converts NIR-I fluorescence images into NIR-II fluorescence images, and uses this network to predict from NIR-I in-vivo fluorescence images, achieving a deblurring effect; the deblurred fluorescence image has the sensitivity of the NIR-I image and the sharpness of the NIR-II image.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments of the present invention will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of the in-vivo fluorescence imaging deblurring method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of the in-vivo fluorescence imaging deblurring method according to an embodiment of the present invention.
Fig. 3 is a block diagram of the in-vivo fluorescence imaging deblurring system according to an embodiment of the present invention.
In the figure: 1. imaging system construction module; 2. in-vivo fluorescence image acquisition module; 3. data set construction module; 4. GAN construction module; 5. GAN training module; 6. network deblurring module.
Fig. 4 is a diagram of the shared-field-of-view NIR-I/NIR-II imaging system according to an embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the following examples in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Aiming at the problems existing in the prior art, the invention provides an in-vivo fluorescence imaging deblurring method, an in-vivo fluorescence imaging deblurring system, a computer device, and an intelligent terminal; the invention is described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the in-vivo fluorescence imaging deblurring method provided by the embodiment of the invention comprises the following steps:
S101, constructing the shared-field-of-view NIR-I/NIR-II imaging system;
S102, acquiring in-vivo fluorescence images;
S103, constructing the data set;
S104, constructing the generative adversarial network with the full-gradient loss method;
S105, obtaining the trained network and the deblurred NIR-I fluorescence image.
A schematic diagram of the in-vivo fluorescence imaging deblurring method provided by the embodiment of the invention is shown in Fig. 2.
As shown in Fig. 3, the in-vivo fluorescence imaging deblurring system provided by the embodiment of the invention includes:
the imaging system construction module 1, for constructing the shared-field-of-view NIR-I/NIR-II imaging system;
the in-vivo fluorescence image acquisition module 2, for acquiring NIR-I and NIR-II in-vivo fluorescence images;
the data set construction module 3, for randomly dividing the NIR-I and NIR-II in-vivo fluorescence images, in a set proportion, into a training data set, a validation data set, and a test data set;
the GAN construction module 4, for constructing the generative adversarial network with the full-gradient loss method;
the GAN training module 5, for training the constructed network on the constructed data set with the training method improved by the full-gradient loss, to obtain a trained network;
the network deblurring module 6, for deblurring the NIR-I fluorescence image with the trained network to obtain a deblurred NIR-I fluorescence image.
The technical scheme of the invention is further described below with reference to specific embodiments.
Example 1
The invention provides an in-vivo fluorescence imaging deblurring method based on deep learning; the deblurred fluorescence image has the sensitivity of the NIR-I fluorescence image and the sharpness of the NIR-II fluorescence image.
The in-vivo fluorescence imaging deblurring method based on deep learning provided by the invention comprises the following steps:
(1) Imaging is performed with the constructed shared-field-of-view NIR-I/NIR-II system; the imaging region and the number of images can be adjusted as required. The acquired NIR-I and NIR-II in-vivo fluorescence images are then randomly divided, in a set proportion, into a training data set, a validation data set, and a test data set.
(2) In the network, the gradients of the NIR-I and NIR-II in-vivo fluorescence images are compared, and the gradients between the original image and the generated image are compared, establishing a loss function that provides feedback for network training.
(3) The generative adversarial network constructed with the full-gradient loss method computes a perceptual loss using pretrained VGG and ResNet models; the network is trained with a loss function whose hyperparameters are first fixed and then adjusted according to the loss curve and the performance on the validation data set, including the generator loss and the peak signal-to-noise ratio (PSNR) at the end of each epoch during training; finally, the validation data set is used to evaluate the network performance of each epoch.
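The peak signal-to-noise ratio monitored at the end of each epoch can be computed in the standard way. This is a sketch: `data_range` (the maximum possible pixel value) is an assumed parameter, and the patent does not specify the exact PSNR convention used.

```python
import numpy as np

def psnr(generated, target, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a generated image and
    its NIR-II target, used to monitor generator quality per epoch."""
    g = np.asarray(generated, dtype=float)
    t = np.asarray(target, dtype=float)
    mse = np.mean((g - t) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```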
(4) The trained network is applied to the NIR-I fluorescence image to obtain a deblurred fluorescence image that has the sensitivity of the NIR-I image and the sharpness of the NIR-II image.
Example 2
The invention uses a self-built shared-field-of-view NIR-I/NIR-II imaging system to acquire and process images of mice, as follows:
Step 1: constructing the shared-field-of-view NIR-I/NIR-II imaging system (see Fig. 4)
1. An NIR-I fluorescence camera, for collecting NIR-I fluorescence images; 2. an 850 nm filter, for blocking optical signals below the 850 nm band; 3. an NIR-II fluorescence camera, for collecting NIR-II fluorescence images; 4. a 1300 nm filter, for blocking optical signals below the 1300 nm band; 5. a beam-splitting cube, which delivers the fluorescence image to the NIR-I and NIR-II fluorescence cameras simultaneously; 6. a convex lens with a focal length of 100 mm; 7. a convex lens with a focal length of 100 mm; 8. a lens to enlarge the imaging field of view; 9. a mirror, which reflects the image of the mouse into the lens at a set angle; 10. an excitation light source, which emits the excitation light; 11. a computer, for collecting the fluorescence images formed by the NIR-I and NIR-II fluorescence cameras; 12. a gas anesthesia device, for anesthetizing the mice and supplying oxygen.
Step 2: acquiring a fluorescence image of a living body
The mice were anesthetized and oxygen was provided using a gas anesthetic apparatus. Then, the probe ICG (indocyanine green) is injected into the mouse body through the tail vein, and the excitation light emitting device is started to emit excitation light with certain wavelength and intensity to irradiate the mouse, so that the ICG (indocyanine green) emits fluorescence. Then using a built near infrared first area and near infrared second area same-vision simultaneous living body fluorescence imaging system to image; the near infrared first-area fluorescence image and the near infrared second-area fluorescence image in the mouse body are collected, and the required fluorescence image quantity can be collected according to different parts.
Step 3: constructing a dataset
The collected near infrared first-region living body fluorescence image and the near infrared second-region living body fluorescence image are randomly divided into a training data set, a verification data set and a test data set according to a certain proportion.
Step 4: construction of full gradient loss method to generate countermeasure network
Training the data set by generating an countermeasure network improved by a full gradient loss method; in the algorithm network, firstly, the gradients of the near-infrared first-region living body fluorescent image and the near-infrared second-region living body fluorescent image are compared, and then the gradients between the original image and the generated image are compared, so that a loss function is established, and feedback is provided for network training.
In the network, the pre-trained VGG and ResNet models are used to calculate the perceived loss, and then the network is trained by setting a loss function of the super parameters, wherein the set super parameters are adjusted according to the loss change and the performance of the verification data set. The network can downsample the near infrared one-area image through stride convolution to reduce the size of the image and accelerate calculation, and simultaneously extract main characteristics through the parameters of the learning stride convolution kernel, and scale the image by adopting ResNet and a sub-pixel convolution layer structure to improve the performance of the training network.
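The sub-pixel convolution layer mentioned above ends in a pixel-shuffle rearrangement of channels into spatial resolution. A minimal NumPy sketch of that rearrangement, following the usual depth-to-space convention (the function name and channel layout are assumptions about a standard building block, not details given in the patent), is:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r).

    Each group of r*r channels supplies the r x r sub-pixel block of
    one output channel, upscaling spatial resolution by factor r.
    """
    c2, h, w = x.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r*r"
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

In a sub-pixel convolution layer, an ordinary convolution first produces the C*r*r channels; this rearrangement then converts them into an image r times larger in each spatial dimension.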
Step 5: obtaining a trained network
The trained generation countermeasure network can predict the blurred in-vivo fluorescence image to generate a high-definition deblurred in-vivo fluorescence image.
Step 6: obtaining a deblurred near infrared one-region fluorescence image
Deblurring the near infrared first-region fluorescence image through a trained algorithm network to obtain a deblurred fluorescence image, wherein the deblurred fluorescence image has the sensitivity of the near infrared first-region fluorescence image and the definition of near infrared second-region fluorescence imaging.
The foregoing is merely illustrative of specific embodiments of the present invention, and the scope of the invention is not limited thereto, but any modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present invention will be apparent to those skilled in the art within the scope of the present invention.
Claims (5)
1. An in-vivo fluorescence imaging deblurring method based on deep learning, characterized in that the method exploits the high resolution of near-infrared region II (NIR-II) fluorescence images: a near-infrared region I (NIR-I) fluorescence camera and a NIR-II fluorescence camera sharing a common field of view are used to collect fluorescence images of mice simultaneously; the collected in-vivo fluorescence images are randomly divided, in proportion, into a training data set, a verification data set, and a test data set; a generative adversarial network (GAN) for converting a NIR-I fluorescence image into a NIR-II fluorescence image is constructed by deep learning with a full-gradient loss method, and the constructed GAN is trained on the acquired in-vivo fluorescence images; the NIR-I fluorescence image is then processed by the trained network to obtain a deblurred fluorescence image that combines the sensitivity of the NIR-I fluorescence image with the definition of the NIR-II fluorescence image;
The in-vivo fluorescence imaging deblurring method based on deep learning comprises the following steps:
Step one: constructing a common-view near-infrared region I (NIR-I) and region II (NIR-II) imaging system;
Step two: collecting in-vivo fluorescence images;
Step three: constructing the data sets;
Step four: constructing the generative adversarial network with the full-gradient loss method;
Step five: obtaining the trained network and the deblurred NIR-I fluorescence image;
The construction of the generative adversarial network with the full-gradient loss method in step four comprises: training on the data set with a generative adversarial network improved by the full-gradient loss method; the gradient between each pixel and its adjacent pixels is minimized, the gradients of the near-infrared region I (NIR-I) and region II (NIR-II) in-vivo fluorescence images are compared, and the gradients between the original image and the generated image are then compared, so that a loss function is established that provides feedback for network training;
In the generative adversarial network constructed with the full-gradient loss method, pre-trained VGG and ResNet models are used to compute the perceptual loss, and the network is trained with a loss function whose hyperparameters are set; the set hyperparameters are adjusted according to the loss change and the performance on the verification data set, the monitored quantities comprising the generator loss and the peak signal-to-noise ratio obtained at the end of each epoch during training; the network downsamples the NIR-I image by strided convolution, extracts the main features through the learned parameters of the strided convolution kernels, rescales the image with a ResNet and sub-pixel convolution layer structure, and finally evaluates the network performance after each epoch on the verification data set.
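As an illustration outside the claim language, the strided-convolution downsampling step above can be sketched as a single-channel "valid" cross-correlation with a stride; in the real network the kernel weights are learned, while here they are placeholders:

```python
import numpy as np

def strided_downsample(img, kernel, stride=2):
    """Strided 2-D cross-correlation: shrinks the NIR-I image while the
    kernel (learned in the actual network) extracts features."""
    kh, kw = kernel.shape
    h, w = img.shape
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = img[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out
```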
2. The in-vivo fluorescence imaging deblurring method based on deep learning of claim 1, wherein constructing the common-view NIR-I and NIR-II imaging system in step one comprises: a near-infrared region I (NIR-I) fluorescence camera for collecting NIR-I fluorescence images; an optical filter for filtering out optical signals below the 850 nm band; a near-infrared region II (NIR-II) fluorescence camera for collecting NIR-II fluorescence images; an optical filter for filtering out optical signals below the 1300 nm band; a beam-splitting cube for transmitting the fluorescence image to the NIR-I and NIR-II fluorescence cameras simultaneously; a convex lens with a focal length of 100 mm; a second convex lens with a focal length of 100 mm; a lens for increasing the imaging field of view; a mirror for reflecting the image of the mouse to the lens; an excitation light source for emitting excitation light; a computer for collecting the fluorescence images formed by the NIR-I and NIR-II fluorescence cameras; and a gas anesthesia device for anesthetizing the mice and supplying oxygen.
3. The in-vivo fluorescence imaging deblurring method based on deep learning of claim 1, wherein collecting the in-vivo fluorescence images in step two comprises: turning on the excitation light source to emit excitation light of a given wavelength and intensity onto the mice, so that ICG emits fluorescence; imaging with the constructed common-view near-infrared region I (NIR-I) and region II (NIR-II) in-vivo fluorescence imaging system; and collecting NIR-I and NIR-II fluorescence images within the mouse body, acquiring the required number of fluorescence images for the different body parts;
Constructing the data sets in step three comprises: randomly dividing the acquired NIR-I and NIR-II in-vivo fluorescence images of the same imaging target, in proportion, into a training data set, a verification data set, and a test data set.
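As an illustration outside the claim language, the random proportional split just described can be sketched as follows; the 8:1:1 ratio and the fixed seed are assumptions, not values given in the patent:

```python
import random

def split_dataset(pairs, ratios=(0.8, 0.1, 0.1), seed=0):
    """Randomly split paired NIR-I/NIR-II images into training,
    verification, and test sets according to the given proportions."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    n = len(pairs)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    return pairs[:n_train], pairs[n_train:n_train + n_val], pairs[n_train + n_val:]
```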
4. The in-vivo fluorescence imaging deblurring method based on deep learning of claim 1, wherein obtaining the trained network and the deblurred NIR-I fluorescence image in step five comprises: predicting from the blurred in-vivo fluorescence image with the trained generative adversarial network to generate a high-definition, deblurred in-vivo fluorescence image;
The near-infrared region I (NIR-I) fluorescence image is deblurred by the trained network to obtain a deblurred fluorescence image that combines the sensitivity of NIR-I fluorescence imaging with the definition of near-infrared region II (NIR-II) fluorescence imaging.
5. An in-vivo fluorescence imaging deblurring system for implementing the deep-learning-based in-vivo fluorescence imaging deblurring method according to any one of claims 1 to 4, characterized in that the in-vivo fluorescence imaging deblurring system comprises:
an imaging system construction module for constructing the common-view near-infrared region I (NIR-I) and region II (NIR-II) imaging system;
an in-vivo fluorescence image acquisition module for acquiring NIR-I and NIR-II in-vivo fluorescence images;
a data set construction module for randomly dividing the NIR-I and NIR-II in-vivo fluorescence images, in proportion, into a training data set, a verification data set, and a test data set;
a generative adversarial network construction module for constructing the generative adversarial network with the full-gradient loss method;
a generative adversarial network training module for training on the constructed data sets with the generative adversarial network improved by the full-gradient loss method to obtain the trained network; and
a network deblurring module for deblurring the NIR-I fluorescence image with the trained network to obtain the deblurred NIR-I fluorescence image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210182173.4A CN114587272B (en) | 2022-02-25 | 2022-02-25 | In-vivo fluorescence imaging deblurring method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114587272A CN114587272A (en) | 2022-06-07 |
CN114587272B true CN114587272B (en) | 2024-07-02 |
Family
ID=81804317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210182173.4A Active CN114587272B (en) | 2022-02-25 | 2022-02-25 | In-vivo fluorescence imaging deblurring method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114587272B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107266929A (en) * | 2017-06-21 | 2017-10-20 | 四川大学 | One class is using Cyanine Dyes Fluorescence group as near infrared fluorescent dye of precursor skeleton structure and preparation method and application |
CN109480776A (en) * | 2018-10-30 | 2019-03-19 | 中国科学院自动化研究所 | Near-infrared fluorescent surgical imaging systems and its application method |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003164414A (en) * | 2001-11-29 | 2003-06-10 | Fuji Photo Film Co Ltd | Method and device for displaying fluoroscopic image |
US20130109963A1 (en) * | 2011-10-31 | 2013-05-02 | The University Of Connecticut | Method and apparatus for medical imaging using combined near-infrared optical tomography, fluorescent tomography and ultrasound |
US11266295B2 (en) * | 2016-08-08 | 2022-03-08 | Sony Corporation | Endoscope apparatus and control method of endoscope apparatus |
CN109776380A (en) * | 2019-03-12 | 2019-05-21 | 遵义医科大学 | It is applied in the bis- targeting near infrared fluorescent probe preparations of IR780 and tumour diagnosis and treatment |
CN110327020B (en) * | 2019-07-04 | 2021-09-28 | 中国科学院自动化研究所 | Near-infrared two-zone/one-zone bimodal fluorescence tomography system |
CN110804434B (en) * | 2019-10-17 | 2021-11-02 | 西安电子科技大学 | Rare earth probe capable of identifying squamous cell lung carcinoma in targeted manner and preparation method and application thereof |
EP4162406A1 (en) * | 2020-06-05 | 2023-04-12 | National University of Singapore | Deep fluorescence imaging by laser-scanning excitation and artificial neural network processing |
CN112274108B (en) * | 2020-08-25 | 2021-07-30 | 中国人民解放军军事科学院军事医学研究院 | CKIP-1-targeted fluorescent probe for detecting osteoporosis and application of probe in-vivo detection |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1968431B2 (en) | Combined x-ray and optical tomographic imaging system | |
Deng et al. | Deep learning in photoacoustic imaging: a review | |
US8913803B2 (en) | Method, device and system for analyzing images | |
US9407796B2 (en) | System for reconstructing optical properties in a diffusing medium, comprising a pulsed radiation source and at least two detectors of two different types, and associated reconstruction method | |
US8401618B2 (en) | Systems and methods for tomographic imaging in diffuse media using a hybrid inversion technique | |
Ermilov et al. | Three-dimensional optoacoustic and laser-induced ultrasound tomography system for preclinical research in mice: design and phantom validation | |
US20120022367A1 (en) | Chemically-selective, label free, microendoscopic system based on coherent anti-stokes raman scattering and microelectromechanical fiber optic probe | |
US20110007957A1 (en) | Imaging apparatus and control method therefor | |
CA2917308A1 (en) | Methods related to real-time cancer diagnostics at endoscopy utilizing fiber-optic raman spectroscopy | |
Cheng et al. | High-resolution photoacoustic microscopy with deep penetration through learning | |
CN105395164B (en) | The control method of image processing apparatus and image processing apparatus | |
Joshi et al. | Fully adaptive FEM based fluorescence optical tomography from time‐dependent measurements with area illumination and detection | |
He et al. | De-noising of photoacoustic microscopy images by attentive generative adversarial network | |
CN115272590B (en) | Method, apparatus, system and medium for reconstructing spatial distribution of optical transmission parameters | |
CN109924949A (en) | A kind of near infrared spectrum tomography rebuilding method based on convolutional neural networks | |
Harris et al. | A pulse coupled neural network segmentation algorithm for reflectance confocal images of epithelial tissue | |
CN105388135A (en) | Non-invasive laser scanning imaging method | |
CN103815929B (en) | Subject information acquisition device | |
Hu et al. | Deep learning-based inpainting of saturation artifacts in optical coherence tomography images. | |
CN114587272B (en) | In-vivo fluorescence imaging deblurring method based on deep learning | |
WO2021099127A1 (en) | Device, apparatus and method for imaging an object | |
Ren et al. | High-resolution tomographic reconstruction of optical absorbance through scattering media using neural fields | |
Chen et al. | Full field optical coherence tomography image denoising using deep learning with spatial compounding | |
Jahnavi et al. | Segmentation of medical images using U-Net++ | |
Jiao et al. | NeuralOCT: Airway OCT Analysis via Neural Fields |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |