CN117876377B - Microscopic imaging general nerve extraction method based on large model - Google Patents
- Publication number: CN117876377B (application CN202410282582.0A)
- Authority: CN (China)
- Legal status: Active
Abstract
The invention discloses a general nerve extraction method for microscopic imaging based on a large model, which comprises the following steps: acquiring neural imaging data of any modality; acquiring the data description corresponding to the neuroimaging data, the data description including any one or more of the following: temporal resolution, spatial resolution, imaging-system numerical aperture, and imaging-system excitation light wavelength of the imaging data; and inputting the neuroimaging data and the corresponding data description into a trained general neural analysis model to obtain analysis results of neuron spatial and temporal information. The general neural analysis model consists of a strategy generation model, an image enhancement model, a pixel alignment model, a background removal model, and a footprint segmentation model. The method can analyze and process all neural microscopic imaging modalities with strong robustness; in addition, through low-level optimization, the general neural analysis model achieves rapid neuron analysis with few computing resources, improving the user experience.
Description
Technical Field
The invention relates to the technical fields of wide-field microscopic imaging, two-photon microscopic imaging, and light-field microscopic imaging, and in particular to a general nerve extraction method for microscopic imaging based on a large model, which can extract the spatial and temporal information of neurons from neural microscopic imaging results.
Background
Information flow between cortical areas plays a key role in the neural network dynamics underlying advanced perception, cognition, and complex behavior. However, tracking this information flow volumetrically across mesoscopic fields of view (FOVs) at cellular resolution, with a temporal bandwidth sufficient to capture the dynamics of genetically encoded calcium indicators (GECIs) (10-20 Hz), has remained challenging. Although recent advances in two-photon microscopy (2PM) have greatly improved voxel acquisition speeds, enabling neural activity to be recorded at cellular resolution at multi-hertz rates, the serialized acquisition scheme is inherently limited in its scalability to mesoscale volumes: in serial scanning, the volume acquisition rate is inversely proportional to the square of the side length of the imaged volume. In this context, scan-free, parallel volume acquisition methods such as light-field microscopy (LFM) and related techniques, in which three-dimensional (3D) sample locations are mapped onto a two-dimensional (2D) sensor, provide better volume scalability at the mesoscale while still resolving individual neurons.
In light-field microscopy, the encoding of three-dimensional sample voxels onto a two-dimensional camera sensor is achieved by placing a microlens array at the imaging plane of the microscope. The sensor images are then computationally reconstructed using the point spread function of the system to recover the 3D sample information. LFM thus extends the acquisition volume in both the lateral and axial directions without sacrificing frame rate, which in principle enables rapid mesoscopic volumetric imaging. However, due to tissue scattering and the computational cost of large-scale deconvolution, the application of LFM remains limited to sub-millimeter fields of view and weakly scattering samples.
For two-photon imaging, the main challenge to neuron extraction algorithms is strong noise; for wide-field imaging, the main challenge is the strong background signal. Across imaging modalities, the analysis model must also adapt dynamically to changes in resolution.
Referring to fig. 1, the current mainstream schemes of neural microscopic imaging can be classified, by imaging principle, into single-photon (1P) imaging and two-photon (2P) imaging; by imaging dimension, they include both single-plane imaging and three-dimensional imaging. For 2P imaging, noise is the main factor limiting neural signal analysis; for 1P imaging, background is the primary factor. Beyond these, resolution alignment is a further factor that constrains the generality of a unified model.
At present, neural microscopic imaging modalities are numerous and face the problems of high two-photon imaging noise and strong single-photon imaging background, so no existing neural analysis method can solve all neural imaging analysis problems.
Disclosure of Invention
In view of the above, the invention provides a general nerve extraction method for microscopic imaging based on a large model, which can solve all the neural imaging analysis problems in the prior art. A general neural analysis model is constructed based on simulation-supervised learning and self-supervised learning, and analysis results can be obtained by inputting neural imaging data of any modality.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
The embodiment of the invention provides a microscopic imaging general nerve extraction method based on a large model, which comprises the following steps of:
S1, acquiring neural imaging data of any modality; the neuroimaging data includes any of the following: two-photon point scanning imaging data, two-photon light-field data, single-photon wide-field imaging data, and light-field single-photon data;
S2, acquiring the data description corresponding to the neuroimaging data; the data description includes one or more of the following: temporal resolution, spatial resolution, imaging-system numerical aperture, and imaging-system excitation light wavelength of the imaging data;
S3, inputting the neuroimaging data and the corresponding data description into a trained general neural analysis model to obtain analysis results of neuron spatial information and temporal information; the general neural analysis model consists of a strategy generation model, an image enhancement model, a pixel alignment model, a background removal model, and a footprint segmentation model.
Further, the step S3 includes:
automatically configuring data processing steps and parameters by a strategy generation model according to the data description corresponding to the neuroimaging data; performing neuron analysis according to the configured steps and parameters;
The neuron analysis process comprises: when the neural imaging data are two-photon point scanning imaging data, noise is removed by the image enhancement model, resolution registration is performed by the pixel alignment model, and the footprint segmentation model obtains the spatial footprint.
Further, the neuron analysis process further comprises:
When the nerve imaging data are two-photon light field imaging data, combining a point spread function of an imaging system to obtain a ray trace of the imaging system, and performing three-dimensional projection to obtain three-dimensional positioning of neurons.
Further, the neuron analysis process further comprises:
When the nerve imaging data are single photon wide field imaging data, sequentially carrying out resolution alignment through a pixel alignment model, removing the background through a background removing model, and obtaining the two-dimensional footprint of the nerve cells through a footprint segmentation model.
Further, the neuron analysis process further comprises:
When the nerve imaging data are light field single photon data, combining a point spread function of an imaging system to obtain a ray trace of the imaging system, and fusing a segmentation result of each view angle to obtain a three-dimensional positioning result.
Further, in the step S3, the image enhancement model is trained with a self-supervision strategy: through a three-dimensional convolutional U-shaped neural network, the neural imaging result is split into odd frames and even frames, with the odd frames used as input and the even frames as the learning target.
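The odd/even frame split that makes this training self-supervised can be sketched with plain array slicing. This is a NumPy illustration of the data preparation only; the three-dimensional convolutional U-shaped network itself is omitted:

```python
import numpy as np

def split_odd_even(stack: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split a (T, H, W) imaging stack into the odd-frame sub-stack (network
    input) and the even-frame sub-stack (learning target). Adjacent frames
    share signal but carry independent noise, so no clean ground truth is
    needed."""
    return stack[0::2], stack[1::2]

stack = np.random.rand(100, 64, 64)   # synthetic 100-frame recording
inputs, targets = split_odd_even(stack)
```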
Further, in the step S3, the pixel alignment model is trained with a self-supervision strategy: through an implicit neural representation network, a downsampled image of arbitrary magnification is used as input and the original image as the learning target.
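The arbitrary-magnification property comes from querying an implicit representation on a continuous coordinate grid rather than a fixed pixel lattice. A sketch of building such a query grid (the network itself, which would map each coordinate to an intensity, is omitted; the pixel-centre convention is an assumption for illustration):

```python
import numpy as np

def query_grid(h_out: int, w_out: int) -> np.ndarray:
    """Normalised (y, x) coordinates in (0, 1) at which an implicit neural
    representation f(y, x) -> intensity would be evaluated; any output size
    (h_out, w_out) yields a different magnification from the same network."""
    ys = (np.arange(h_out) + 0.5) / h_out   # pixel-centre convention
    xs = (np.arange(w_out) + 0.5) / w_out
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    return np.stack([gy, gx], axis=-1)      # shape (h_out, w_out, 2)
```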
Further, in the step S3, the background removal model trains a supervised network using simulation-generated wide-field imaging data and background-free neuroimaging data; through a simplified three-dimensional neural network, the simulation-generated wide-field imaging result is input to the model, and the simulation-generated background-free neural imaging data serves as the model's learning target.
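A minimal sketch of generating one such simulated (input, target) pair. Gaussian blobs stand in for neurons and a broad Gaussian for the wide-field background; a real simulator would use an optical forward model, so all shapes and amplitudes here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pair(h: int = 64, w: int = 64, n_cells: int = 20):
    """One training pair for the background removal network: the target is
    background-free neural imaging data, and the input adds a smooth, wide
    background mimicking wide-field contamination."""
    yy, xx = np.mgrid[0:h, 0:w]
    target = np.zeros((h, w))
    for _ in range(n_cells):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        target += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.0 ** 2))
    background = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * 30.0 ** 2))
    return target + 0.5 * background, target

wide_field, clean = simulate_pair()
```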
Further, in the step S3, the footprint segmentation model is trained using a large amount of public neural imaging data: sequence segmentation is performed through a three-dimensional neural network, with a neuron imaging sequence as input and a machine-labeled or manually labeled truth mask as the learning target.
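To make the input/output contract of footprint segmentation concrete: from a (T, H, W) sequence, the model produces a 2-D spatial footprint, from which a temporal trace follows. The sketch below uses simple thresholding of the temporal mean as a stand-in for the learned 3-D segmentation network, purely to show the data shapes involved:

```python
import numpy as np

def footprint_and_trace(seq: np.ndarray, thresh: float):
    """Derive a 2-D spatial footprint from a (T, H, W) sequence by
    thresholding the temporal mean image, then read out the temporal trace
    as the mean intensity inside that footprint for each frame."""
    mean_img = seq.mean(axis=0)
    footprint = mean_img > thresh           # boolean (H, W) spatial mask
    trace = seq[:, footprint].mean(axis=1)  # one value per frame
    return footprint, trace

seq = np.zeros((30, 16, 16))
seq[:, 5:8, 5:8] = np.linspace(1.0, 2.0, 30)[:, None, None]  # one "neuron"
mask, trace = footprint_and_trace(seq, thresh=0.5)
```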
Compared with the prior art, the invention discloses a microscopic imaging general nerve extraction method based on a large model, which has the following advantages:
(1) The invention has strong robustness and can analyze and process all neural microscopic imaging modalities.
(2) The invention fully accelerates and optimizes the bottom layer of the entire pipeline, so that users can achieve rapid neuron analysis with a small amount of computing resources.
(3) The main role of the strategy generation model is to match the corresponding hyperparameters and experimental workflow to the experimental information input by the user; controlling the software semantically greatly reduces the cost of use, facilitating popularization and application.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below are only embodiments of the present invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of the problems faced by the general neural analysis model.
Fig. 2 is a flowchart of a general microscopic imaging nerve extraction method based on a large model provided by the invention.
Fig. 3 is a schematic diagram of data processing of the general microscopic imaging nerve extraction method based on the large model.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The embodiment of the invention discloses a microscopic imaging general nerve extraction method based on a large model, which is shown by referring to FIG. 2 and comprises the following steps of:
S1, acquiring neural imaging data of any modality; the neuroimaging data includes any of the following: two-photon point scanning imaging data, two-photon light-field data, single-photon wide-field imaging data, and light-field single-photon data;
S2, acquiring the data description corresponding to the neuroimaging data; the data description includes one or more of the following: temporal resolution, spatial resolution, imaging-system numerical aperture, and imaging-system excitation light wavelength of the imaging data;
S3, inputting the neuroimaging data and the corresponding data description into a trained general neural analysis model to obtain analysis results of neuron spatial information and temporal information; the general neural analysis model consists of a strategy generation model, an image enhancement model, a pixel alignment model, a background removal model, and a footprint segmentation model.
In this embodiment, referring to fig. 3, which is a schematic diagram of the data processing of the whole extraction method, in step S1 the neural imaging data acquired by the user may be single-photon or two-photon imaging data, and may be light-field three-dimensional or two-dimensional imaging data. Two-photon point scanning imaging data are acquired by a traditional point-scanning two-photon system, usually excited by an infrared laser; the point spread function of the system is point-like, and the signal is collected by a photomultiplier tube. Two-photon light-field data are collected by a two-photon light-field system, whose point spread function is needle-shaped. Single-photon wide-field imaging data are collected by a planar array detector, and the system point spread function is ellipsoidal. Single-photon light-field data are acquired by a single-photon light-field system, in which a microlens array makes the point spread function exhibit a multi-point distribution.
In step S2, a description of the neuroimaging data is obtained, stating the necessary experimental conditions, for example: "I imaged mouse cortex for 30 minutes using a wide-field microscope, and the image pixel size is 1 micrometer." The data description includes one or more of the following: temporal resolution, spatial resolution, imaging-system numerical aperture, and imaging-system excitation light wavelength of the imaging data.
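The data description of step S2 can be pictured as a small structure with four optional fields. The field names below are hypothetical, chosen only to mirror the four descriptors listed in the patent text; this is a sketch, not part of the disclosed system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataDescription:
    """The four optional descriptors of step S2 (field names are illustrative)."""
    temporal_resolution_hz: Optional[float] = None    # frame rate of the recording
    spatial_resolution_um: Optional[float] = None     # pixel size in micrometres
    numerical_aperture: Optional[float] = None        # NA of the imaging system
    excitation_wavelength_nm: Optional[float] = None  # excitation light wavelength

# e.g. the wide-field example above: 1-micrometre pixels
desc = DataDescription(spatial_resolution_um=1.0)
```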
In step S3, the strategy generation model of the general neural analysis model automatically configures the data processing steps and corresponding parameters according to the data fingerprint (data type) and the experimental description of the imaging data. Adding a strategy generation model, which determines certain hyperparameters from experimental experience, greatly improves the generalization capability of the whole model; the hyperparameters of several subsequent models are determined by it. Once the data processing steps and parameters are determined, the neuron analysis process is started.
The analysis process is as follows:
(1) For two-photon point scanning imaging data, the data type is confirmed from the input description. First, the image enhancement model is called to enhance the original data, removing noise via a random-distribution prior on the noise; then the enhanced, cleaner neural data undergo resolution registration through the pixel alignment model, aligning pixel size and frame rate; finally, the registered data are segmented by the footprint segmentation model, which comes from simulation-supervised training, to obtain the spatial footprint.
(2) For two-photon light-field data, each view is segmented and then three-dimensionally projected to obtain the three-dimensional localization of the neurons.
(3) For single-photon wide-field imaging data, the pixel alignment model is first called to align the resolution of the single-photon data through dynamic learning; then the background removal model is called to fully expose the neurons in the acquired data; finally, the footprint segmentation model is called to obtain the two-dimensional footprint of the neurons.
(4) For light-field single-photon data, the segmentation results of each view are fused to obtain the three-dimensional localization result.
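The four branches above amount to a routing table from modality to an ordered list of sub-models. A minimal sketch of such a strategy lookup (the modality keys and step names are invented for illustration; the real strategy generation model also sets the hyperparameters of each step):

```python
# Hypothetical routing table: modality -> ordered sub-model pipeline.
PIPELINES = {
    "2p_point_scan":  ["image_enhancement", "pixel_alignment", "footprint_segmentation"],
    "2p_light_field": ["image_enhancement", "pixel_alignment", "footprint_segmentation",
                       "3d_projection"],
    "1p_wide_field":  ["pixel_alignment", "background_removal", "footprint_segmentation"],
    "1p_light_field": ["pixel_alignment", "background_removal", "footprint_segmentation",
                       "view_fusion"],
}

def plan(modality: str) -> list[str]:
    """Return the ordered processing steps for a given imaging modality."""
    return PIPELINES[modality]
```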
In implementation, a general neural analysis system can be built as a software program. Decomposing neuron segmentation into reliable steps greatly improves the robustness of the whole system, so that neural microscopic imaging data of any modality can be analyzed and processed.
The training process of the general neural analysis model comprises the following steps:
1) Image enhancement model training: two-photon image noise removal uses a self-supervised strategy. With a three-dimensional convolutional U-shaped neural network as the model backbone, the neural imaging result is split into odd and even frames; the odd frames serve as model input and the even frames as the model's learning target, so training of the image enhancement model does not depend on high signal-to-noise-ratio data.
2) Pixel alignment model training: a self-supervised strategy is used. With an implicit neural representation network as the backbone, a downsampled image of arbitrary magnification is used as model input and the original image as the learning target, giving the implicit network the capability of super-resolution at arbitrary magnification.
3) Background removal model training: a supervised network is trained using simulation-generated wide-field imaging data and background-free neuroimaging data. With a simplified three-dimensional neural network as the backbone, the simulation-generated wide-field imaging result serves as model input and the simulation-generated background-free neural imaging data as the learning target.
4) Footprint segmentation model training: the segmentation network is trained using a large amount of public neuroimaging data. A three-dimensional neural network performs sequence segmentation; the input is a neuron imaging sequence and the learning target is a machine-labeled or manually labeled truth mask.
The invention provides a microscopic imaging general nerve extraction method based on a large model, and the data processing flow is shown in figure 3. The method comprises the following specific steps:
Step 1: decide the processing steps and parameters of the subsequent models of the general neural analysis model according to the raw data and the data description. The raw data information includes data size, single-frame image size, and sequence length; the data description includes the temporal resolution, spatial resolution, imaging-system numerical aperture, excitation light wavelength, etc. of the imaging data.
Step 2: process the raw data according to the decided steps to obtain the two-dimensional footprint and temporal information of the neurons. Noise is removed from the raw data by the image enhancement model, the temporal and spatial resolution are reshaped by the pixel alignment model, the neurons are exposed from the background signal by the background removal model, and the segmentation result and temporal information of the neurons are obtained by the footprint segmentation model.
Step 3: if three-dimensional fusion is needed, perform three-dimensional localization of the neurons based on the optical physical model: combine the point spread function of the imaging system to obtain the ray trace of the imaging system, and combine it with the segmentation result to obtain the three-dimensional localization of the neurons.
The microscopic imaging general nerve extraction method based on the large model provided by the invention has the following technical effects:
The universality is strong: the method can be suitable for nerve imaging data of any mode, including two-photon point scanning imaging data, two-photon light field data, single-photon wide-field imaging data and light field single-photon data. This means that the analysis and extraction of neurons can be performed by this method, regardless of the neuroimaging technique used by the user.
Adaptive configuration: by acquiring the data description corresponding to the neuroimaging data, including parameters such as time resolution, spatial resolution, numerical aperture of the imaging system, and excitation light wavelength, the general neural analysis model can automatically configure appropriate data processing steps and parameters. Such an adaptive configuration increases the flexibility and generalization capability of the system.
Multimodal support: the method can handle different nerve imaging modes, including two-photon imaging, single-photon imaging, wide-field imaging and light field imaging. The multi-mode support enables a user to use the same set of general nerve analysis model under different experimental conditions, so that the operation flow is simplified, and the analysis consistency is improved.
Comprehensive analysis: the general neural analysis model is composed of multiple sub-models, including a strategy generation model, an image enhancement model, a pixel alignment model, a background removal model, and a footprint segmentation model. These models work cooperatively to analyze the neuroimaging data comprehensively, acquiring the spatial and temporal information of the neurons.
Quick and efficient: through the underlying optimization, the generic neural analysis model can achieve fast neuron analysis with less computational resources. This improves the user experience, making the neuron analysis process more efficient.
In general, the method has wide applicability and high efficiency in the field of nerve imaging, provides a convenient, flexible and powerful tool for researchers, and is helpful for deeper understanding of the structure and function of the nerve system.
In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (5)
1. The general microscopic imaging nerve extraction method based on the large model is characterized by comprising the following steps of:
S1, acquiring nerve imaging data of any mode; the neuroimaging data includes any of the following: two-photon point scanning imaging data, two-photon light field data, single-photon wide-field imaging data and light field single-photon data;
S2, acquiring data description corresponding to the neuroimaging data; the data description includes one or more of the following: time resolution, spatial resolution, imaging system numerical aperture and imaging system excitation light wavelength of imaging data;
S3, inputting the neuroimaging data and the corresponding data description into a trained general neural analysis model to obtain analysis results of neuron spatial information and temporal information; the general neural analysis model consists of a strategy generation model, an image enhancement model, a pixel alignment model, a background removal model and a footprint segmentation model;
wherein, the step S3 includes:
automatically configuring data processing steps and parameters by a strategy generation model according to the data description corresponding to the neuroimaging data; performing neuron analysis according to the configured steps and parameters;
The neuron analysis process comprises:
when the neural imaging data are two-photon point scanning imaging data, noise is removed by the image enhancement model, resolution registration is performed by the pixel alignment model, and the footprint segmentation model obtains the spatial footprint;
When the nerve imaging data are two-photon light field imaging data, combining a point spread function of an imaging system to obtain a ray trace of the imaging system, and performing three-dimensional projection to obtain three-dimensional positioning of neurons;
when the nerve imaging data are single photon wide field imaging data, sequentially carrying out resolution alignment through a pixel alignment model, removing the background by a background removal model, and obtaining a two-dimensional footprint of the nerve cells by a footprint segmentation model;
When the nerve imaging data are light field single photon data, combining a point spread function of an imaging system to obtain a ray trace of the imaging system, and fusing a segmentation result of each view angle to obtain a three-dimensional positioning result.
2. The method according to claim 1, wherein in the step S3, the image enhancement model training uses a self-supervision strategy: the neural imaging result is split into odd frames and even frames by a three-dimensional convolutional U-shaped neural network, with the odd frames used as input and the even frames as the learning target.
3. The method according to claim 1, wherein in the step S3, the pixel alignment model training uses a self-supervision strategy, and an arbitrary magnification downsampled image is used as an input and an original image is used as a learning target through an implicit neural expression network.
4. The method according to claim 1, wherein in the step S3, the background removal model training trains a supervised network using simulation-generated wide-field imaging data and non-background neuroimaging data; and inputting a wide-field imaging result generated by simulation into a model through a simplified three-dimensional neural network, wherein background-free neural imaging data generated by simulation is used as a model learning target.
5. The method according to claim 1, wherein in the step S3, the footprint segmentation model is trained using a large amount of public neuroimaging data: sequence segmentation is performed through a three-dimensional neural network, the input is a neuron imaging sequence, and the learning target is a machine-labeled or manually labeled truth mask.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202410282582.0A (CN117876377B) | 2024-03-13 | 2024-03-13 | Microscopic imaging general nerve extraction method based on large model
Publications (2)
Publication Number | Publication Date
---|---
CN117876377A | 2024-04-12
CN117876377B | 2024-05-28
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106846463A (en) * | 2017-01-13 | 2017-06-13 | 清华大学 | Micro-image three-dimensional rebuilding method and system based on deep learning neutral net |
CN110441271A (en) * | 2019-07-15 | 2019-11-12 | 清华大学 | Light field high-resolution deconvolution method and system based on convolutional neural networks |
CN113674168A (en) * | 2021-08-05 | 2021-11-19 | 清华大学 | Real-time fluorescence imaging intelligent enhancement method and device |
CN113946044A (en) * | 2021-09-09 | 2022-01-18 | 深圳大学 | Multi-focus multi-photon microscopic imaging system and method based on point spread function engineering |
CN114188013A (en) * | 2021-09-01 | 2022-03-15 | 北京智精灵科技有限公司 | Cognitive and brain image data integration evaluation method for Alzheimer's disease |
CN114241031A (en) * | 2021-12-22 | 2022-03-25 | 华南农业大学 | Fish body size measurement and weight prediction method and device based on dual-view fusion |
CN115220211A (en) * | 2022-07-29 | 2022-10-21 | 江南大学 | Microscopic imaging system and method based on deep learning and light field imaging |
CN116721017A (en) * | 2023-06-20 | 2023-09-08 | 中国科学院生物物理研究所 | Self-supervision microscopic image super-resolution processing method and system |
CN116957930A (en) * | 2023-06-05 | 2023-10-27 | 浙大城市学院 | Deep learning-based voltage imaging neural activity information enhancement method |
CN117557576A (en) * | 2023-10-31 | 2024-02-13 | 浙江工业大学 | Semi-supervised optic nerve segmentation method based on clinical knowledge guidance and contrastive learning |
WO2024044981A1 (en) * | 2022-08-30 | 2024-03-07 | 深圳华大智造科技股份有限公司 | Super-resolution analysis system and method, and corresponding imaging device and model training method |
Non-Patent Citations (1)
Title |
---|
Fu Ling. Advances in Optical Neuroimaging. Acta Biophysica Sinica, 2007, No. 4. *
Also Published As
Publication number | Publication date |
---|---|
CN117876377A (en) | 2024-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wu et al. | Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning | |
US11580640B2 (en) | Identifying the quality of the cell images acquired with digital holographic microscopy using convolutional neural networks | |
CN104285175B (en) | Method and apparatus for single-particle localization using wavelet analysis | |
JP2019110120A (en) | Method, device, and system for remote deep learning for microscopic image reconstruction and segmentation | |
Jin et al. | Learning to see through reflections | |
Ning et al. | Deep self-learning enables fast, high-fidelity isotropic resolution restoration for volumetric fluorescence microscopy | |
Li et al. | Fast confocal microscopy imaging based on deep learning | |
WO2018208687A1 (en) | Scanned line angular projection microscopy | |
Fazel et al. | Analysis of super-resolution single molecule localization microscopy data: A tutorial | |
Wijesinghe et al. | Experimentally unsupervised deconvolution for light-sheet microscopy with propagation-invariant beams | |
Chen et al. | Pathological image super-resolution using mix-attention generative adversarial network | |
CN113762484A (en) | Multi-focus image fusion method for deep distillation | |
CN117876377B (en) | Microscopic imaging general nerve extraction method based on large model | |
Verinaz-Jadan et al. | Shift-invariant-subspace discretization and volume reconstruction for light field microscopy | |
US11422355B2 (en) | Method and system for acquisition of fluorescence images of live-cell biological samples | |
Dai et al. | Exceeding the limit for microscopic image translation with a deep learning-based unified framework | |
Le Saux et al. | Isotropic high-resolution three-dimensional confocal micro-rotation imaging for non-adherent living cells | |
Fazli et al. | Toward simple & scalable 3D cell tracking | |
Boland et al. | Improving axial resolution in SIM using deep learning | |
Meirovitch et al. | SmartEM: machine-learning guided electron microscopy | |
Peng et al. | Depth resolution enhancement using light field light sheet fluorescence microscopy | |
Moreschini et al. | Volumetric segmentation for integral microscopy with fourier plane recording | |
US12029594B2 (en) | Systems and methods for enhanced imaging and analysis | |
Yan et al. | Segmentation of Synapses in Fluorescent Images using U-Net++ and Gabor-based Anisotropic Diffusion | |
Ghani et al. | Optimisation of MERFISH Data Analysis and Visualisation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||