CN112070887A - Deep learning-based full-slice digital imaging depth-of-field extension method - Google Patents
Info
- Publication number
- CN112070887A (application number CN202010934437.8A)
- Authority
- CN
- China
- Prior art keywords
- depth
- image
- field
- focus
- full
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- Data Mining & Analysis (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Computer Hardware Design (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Microscopes, Condensers (AREA)
Abstract
The invention belongs to the field of biomedical instruments and discloses a deep learning-based depth-of-field extension method for full-slice digital imaging, centered on a virtual depth-of-field extension network. The pathology scanner pre-scans one image block at the centre of the slide field of view, builds an axial three-dimensional image stack by moving along the optical axis, and locates an initial in-focus image with an evaluation function; at the predicted focal distance thus obtained, every sub-image is then photographed exactly once, and these single-shot tentative defocused images form the network input; the network output is the depth-of-field-extended image. Through this end-to-end, single-image network design, the method achieves deep-learning-based depth-of-field extension for full-slice digital imaging, avoids the image blur caused by the limited depth of field of the objective, and replaces dedicated full-slice digital pathology imaging hardware with software. Moreover, since neither a focus map nor focus-compensated rescanning is required, the method is fast, low-cost, and suitable for offline processing.
Description
Technical Field
The invention belongs to the fields of biomedical instruments and computational imaging and relates to a depth-of-field extension method for full-slice digital imaging with deep learning at its core; it can be widely applied to research on microscopy instruments, artificial intelligence, medical imaging, automation, and related fields.
Background
In recent years, advanced digital pathology imaging techniques have been widely studied and applied. Full-slice digital imaging (WSI, Whole Slide Imaging), also known as virtual microscopy, captures conventional microscope slides as digital images, enabling convenient computer access, easy storage, and remote exchange between researchers and physicians. The technology is of critical importance in biological imaging research, for example in cancer analysis and disease prediction. The U.S. Food and Drug Administration has approved the Philips full-slice digital imaging system for primary pathological diagnosis.
Full-slice digital imaging typically involves two steps: (1) pathological images are scanned region by region and stitched into a complete whole-field slide image; (2) purpose-built software is used to identify and analyze these digital images. The first step is crucial for the quality of the acquired image. The main current challenge in full-slice digital imaging is how to produce high-quality in-focus images quickly. Full-slice scanning requires a high-resolution objective whose depth of field is only a few microns, so the non-uniform three-dimensional distribution of the sample inevitably causes defocus during scanning. This out-of-focus phenomenon is the main cause of degraded full-slice digital imaging performance.
A widely used remedy is in-focus image matching. This method relies on a focus prior: for each position, a series of defocused images at different focal distances (a z-stack) is acquired by moving the pathological sample along the optical axis, and the in-focus image is selected by maximizing image contrast or another image-quality metric. The procedure must be repeated for every sequentially scanned sub-region, and the repeated axial measurements significantly reduce imaging speed. Alternatively, a dual-camera setup can provide autofocusing and avoid axial layer-by-layer scanning, but adding such an imaging module to a conventional microscope suffers from hardware incompatibility and high cost. Existing work on full-slice digital imaging has focused mainly on predicting the in-focus distance; no study so far has achieved depth-of-field-extended imaging of three-dimensionally distributed pathological samples.
In the conventional pathology scanning workflow, the scanner first builds an axial three-dimensional image stack for each image block, finds the in-focus image with an evaluation function such as maximized contrast, and records the in-focus position; this acquisition is repeated block by block for every position in the slide; the in-focus distances of all blocks are then assembled by position into a focus map; finally, guided by the coordinates and focal distances in the focus map, the microscope objective is moved for distance compensation and the slide is scanned and photographed to obtain the complete digital pathology image. Because a large number of axial image stacks and the associated axial mechanical displacements are required, this workflow is inevitably time-consuming. Extended-depth-of-field imaging (all-in-focus imaging) can change the scanning strategy of conventional pathology scanners and effectively removes the blur that limited depth of field causes in three-dimensional imaging. A new method for extended-depth-of-field imaging, together with a new scanner scanning strategy, therefore has substantial scientific and clinical value.
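The block-by-block focus search described above (build a z-stack, score each slice with an evaluation function such as contrast, keep the maximum) can be sketched as follows. This is a minimal illustrative reconstruction: the gradient-energy metric and all function names are assumptions, not taken from the patent.

```python
import numpy as np

def contrast_metric(img):
    """Simple sharpness score: mean squared image gradient
    (a stand-in for the 'maximized contrast' evaluation function)."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx**2 + gy**2))

def find_in_focus(z_stack):
    """Return (index, image) of the sharpest slice in an axial image stack."""
    scores = [contrast_metric(s) for s in z_stack]
    best = int(np.argmax(scores))
    return best, z_stack[best]

# Toy demonstration: the 'sharp' slice has strong edges, defocused ones are flat.
sharp = np.tile([[0.0, 1.0], [1.0, 0.0]], (16, 16))   # checkerboard: high gradient
blurred = np.full((32, 32), 0.5)                      # flat: zero gradient
idx, _ = find_in_focus([blurred, sharp, blurred])
```

The conventional workflow repeats this search for every image block; the invention runs it only once, at the slide centre.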
Disclosure of Invention
Given the limitations of the conventional approach, the invention applies an advanced machine learning algorithm to the depth-of-field extension problem of full-slice digital imaging. The invention discloses a deep learning-based depth-of-field extension method for full-slice digital imaging, whose core is a virtual depth-of-field extension network. The pathology scanner pre-scans one image block at the centre of the slide field of view, builds an axial three-dimensional image stack by axial movement, finds an initial in-focus image with an evaluation function, and thereby obtains a predicted focal distance; at this predicted focal distance, every sub-image is photographed exactly once, and the resulting single-shot tentative defocused images serve as network input; the network outputs the restored depth-of-field-extended image. Through this end-to-end design operating on a single tentative defocused image, the method achieves depth-of-field extension for full-slice digital imaging, avoids blur caused by limited depth of field, replaces the conventional autofocus procedure of compensating the distance before each shot, and virtualizes dedicated imaging hardware in software. The scanning strategy needs neither a focus map nor focus-compensated rescanning, making the method fast, low-cost, and suitable for offline processing.
The object of the invention is achieved as follows:
A deep learning-based depth-of-field extension method for full-slice digital imaging comprises the following steps:
step a: predicting an in-focus image;
step b: capturing a single-shot tentative defocused image for every sub-image;
step c: applying the virtual depth-of-field extension network;
step d: outputting the depth-of-field-extended image, i.e., obtaining in-focus pathological images at all positions directly through offline neural-network processing.
Further, in step a a sub-image position at the centre of the slide field of view is selected, a z-stack is acquired by axial scanning movement of the objective, the predicted in-focus image is found from this axial image stack with an evaluation function such as contrast, and the corresponding predicted in-focus position is recorded.
Further, the position of this first predicted in-focus image is used as the photographing position for all sub-images; the remaining sub-images are photographed at that position, yielding one single-shot tentative defocused image each.
Further, the input to the network is a single-shot tentative defocused image.
Further, the output of the network is the depth-of-field-extended image, whose ground truth is obtained by fusing the z-stack image stack with the open-source software ImageJ.
Further, after the single-shot tentative defocused image is processed by the neural network, the depth-of-field-extended image is output.
The beneficial effects are as follows:
The invention realizes a deep learning-based depth-of-field extension method for full-slice digital imaging, embodied in the following aspects:
First, the invention designs a virtual depth-of-field extension network and achieves extended-depth-of-field imaging through end-to-end network processing of a single tentative defocused image.
Second, the invention uses a deep learning network to model the imaging process, virtualizes the full-slice digital imaging system in software, and obtains the extended-depth-of-field image directly, replacing the conventional procedure of first predicting the distance and then photographing with compensation, which effectively saves instrument and experiment cost.
Third, the scanning strategy avoids the blur caused by limited depth of field and requires neither a focus map nor focus-compensated rescanning, making the method fast, low-cost, and suitable for offline processing.
Drawings
FIG. 1 is a flow chart of the deep learning-based full-slice digital imaging depth-of-field extension method of the present invention;
fig. 2 is a schematic diagram of acquiring the depth-of-field-extended image used as the network ground truth.
In the figures: 1, microscope objective; 2, z-stack image stack.
Detailed Description
The following further describes embodiments of the method of the present invention.
With reference to fig. 1 and fig. 2, the deep learning-based depth-of-field extension method for full-slice digital imaging disclosed in this embodiment comprises the following steps:
step a: predicting an in-focus image;
step b: capturing a single-shot tentative defocused image for every sub-image;
step c: applying the virtual depth-of-field extension network;
step d: outputting the depth-of-field-extended image, i.e., obtaining in-focus pathological images at all positions directly through offline neural-network processing.
Specifically, in step a a sub-image position at the centre of the slide field of view is selected, a z-stack is acquired by axial scanning movement of the objective, the predicted in-focus image is found from this axial image stack with an evaluation function such as contrast, and the corresponding predicted in-focus position is recorded.
Specifically, the position of this first predicted in-focus image is used as the photographing position for all sub-images; the remaining sub-images are photographed at that position, yielding one single-shot tentative defocused image each.
Specifically, the input to the network is a single-shot tentative defocused image.
Specifically, the output of the network is the depth-of-field-extended image, whose ground truth is obtained by fusing the z-stack image stack with the open-source software ImageJ.
Specifically, after the single-shot tentative defocused image is processed by the neural network, the depth-of-field-extended image is output.
As shown in the flow chart of fig. 1, the algorithm comprises four stages: predicting the in-focus image, capturing the single-shot tentative defocused image, the virtual depth-of-field extension network, and the depth-of-field-extended image.
(1) During training, the predicted in-focus image is obtained by autofocus: at the central field of view of the slide, a z-stack of sub-images is acquired by continuous axial movement, and the predicted in-focus image is determined by maximizing contrast or another evaluation function. (2) The single-shot tentative defocused images are obtained by photographing all image blocks once, with the position of the predicted in-focus image as reference. (3) The virtual depth-of-field extension network takes the form of a U-Net: its input is the single-shot tentative defocused image, and after the network convolutions it outputs the depth-of-field-extended image. U-Net is only an example; any network that maps the single-shot tentative defocused image to a depth-of-field-extended image may be used.
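The U-Net-shaped data flow named above (contracting path, expanding path, skip connections) can be traced with a minimal untrained sketch. The average pooling, nearest-neighbour upsampling, and averaging skip fusion below are placeholders for learned convolutions; they are assumptions for illustration, not the patent's actual network.

```python
import numpy as np

def down2(x):
    """Encoder step: 2x2 average pooling halves resolution."""
    h, w = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up2(x):
    """Decoder step: nearest-neighbour upsampling doubles resolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_like(img, depth=3):
    """Trace a U-Net-shaped pass: contract, then expand with skip connections.
    Learned conv blocks are replaced by identity/averaging placeholders."""
    skips, x = [], img
    for _ in range(depth):
        skips.append(x)          # feature map saved for the skip connection
        x = down2(x)             # contracting path
    for skip in reversed(skips):
        x = up2(x)[:skip.shape[0], :skip.shape[1]]   # expanding path
        x = 0.5 * (x + skip)     # skip fusion (placeholder for concat + conv)
    return x

out = unet_like(np.random.default_rng(1).random((64, 64)))
```

The essential property shown is that the output has the same spatial size as the input, so the network can map a defocused tile to an equally sized restored tile.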
During network training, the ground-truth depth-of-field-extended image is acquired as shown in fig. 2: a series of defocused images at the same sub-image position is obtained as a z-stack image stack 2 by axially moving the microscope objective 1; the stack is then fused with ImageJ (open-source software from the U.S. National Institutes of Health) to produce the depth-of-field-extended image used as the training ground truth.
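The z-stack fusion used for the ground truth amounts to keeping, per pixel, the value from the sharpest slice. A minimal sketch under assumptions (block-wise variance as the sharpness map, hypothetical function names; this is not ImageJ's actual fusion algorithm):

```python
import numpy as np

def local_variance(img, k=4):
    """Block-wise variance as a crude per-pixel sharpness map."""
    h, w = img.shape
    v = np.empty_like(img, dtype=float)
    for i in range(0, h, k):
        for j in range(0, w, k):
            v[i:i+k, j:j+k] = img[i:i+k, j:j+k].var()
    return v

def fuse_zstack(stack):
    """Per pixel, keep the value from the slice with the highest local sharpness."""
    stack = np.stack(stack).astype(float)               # (N, H, W)
    sharp = np.stack([local_variance(s) for s in stack])
    idx = sharp.argmax(axis=0)                          # winning slice per pixel
    return np.take_along_axis(stack, idx[None], axis=0)[0]

# Toy case: slice 0 is textured (sharp), slice 1 is flat (defocused).
textured = np.tile([[0.0, 1.0], [1.0, 0.0]], (8, 8))
flat = np.full((16, 16), 0.5)
fused = fuse_zstack([textured, flat])
```

Because the textured slice wins the sharpness comparison everywhere, the fused result reproduces it; on real data, different slices win in different regions, yielding an all-in-focus image.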
In this embodiment, the deep learning-based depth-of-field extension method for full-slice digital imaging comprises four modules: in-focus image prediction, single-shot tentative defocused image capture, the virtual depth-of-field extension network, and depth-of-field-extended image output. The pathology scanner pre-scans one image block at the centre of the slide field of view, builds an axial three-dimensional image stack by axial movement, finds an initial in-focus image with an evaluation function, and obtains the predicted focal distance; at this predicted focal distance, every sub-image is photographed once, and the resulting single-shot tentative defocused images feed the input of the back-end neural network.
During training, the virtual depth-of-field extension network performs feature extraction on the single-shot tentative defocused images, and the depth-of-field-extended image is produced at the network output; end-to-end training on pairs of a single tentative defocused image and its corresponding depth-of-field-extended image yields the trained virtual depth-of-field extension network.
The ground-truth depth-of-field-extended image is acquired as follows: a series of defocused images at the same sub-image position is obtained as a z-stack by axially moving the microscope objective; the stack is then fused with ImageJ (open-source software from the U.S. National Institutes of Health) to produce the depth-of-field-extended image used as the training ground truth.
At test time, the predicted in-focus sub-image is found at the centre of the sample, giving the predicted in-focus position; all other sub-images are then photographed at that position, and these single-shot tentative defocused images serve as test inputs to the back-end virtual depth-of-field extension network; finally, the network outputs the depth-of-field-extended images.
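The test-stage strategy (one focus prediction at the slide centre, one shot per tile at that position, network restoration offline) can be simulated end to end. Every function and data structure below is a hypothetical stand-in for the scanner and the trained network, chosen only to make the control flow concrete.

```python
import numpy as np

def sharpness(img):
    """Gradient-energy stand-in for the contrast evaluation function."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx**2 + gy**2))

def predict_focus(center_zstack):
    """One axial scan at the slide centre fixes the z position for the whole slide."""
    return int(np.argmax([sharpness(s) for s in center_zstack]))

def scan_slide(center_zstack, tiles, network):
    """Simulation: each tile is a list of candidate images, of which the camera
    records only the one at the shared z position (a single shot per tile)."""
    z = predict_focus(center_zstack)              # single focus prediction
    return [network(tile[z]) for tile in tiles]   # no per-tile z-stack needed

# Toy run: the sharp slice sits at index 1 of the centre stack.
checker = np.tile([[0.0, 1.0], [1.0, 0.0]], (4, 4))
flat = np.full((8, 8), 0.5)
center = [flat, checker, flat]
tiles = [[flat, checker + 1.0, flat], [flat, checker + 2.0, flat]]
result = scan_slide(center, tiles, network=lambda x: x)  # identity = untrained stub
```

The point of the simulation is the cost model: axial scanning happens once, at the centre, while every other tile is photographed a single time.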
Claims (6)
1. A deep learning-based depth-of-field extension method for full-slice digital imaging, characterized by comprising the following steps:
step a: predicting an in-focus image;
step b: capturing a single-shot tentative defocused image for every sub-image;
step c: applying a virtual depth-of-field extension network;
step d: outputting the depth-of-field-extended image, i.e., obtaining in-focus pathological images at all positions directly through offline neural-network processing.
2. The deep learning-based depth-of-field extension method for full-slice digital imaging according to claim 1, characterized in that in step a a sub-image position at the centre of the slide field of view is selected, a z-stack is acquired by axial scanning movement of the objective, the predicted in-focus image is found from the axial image stack with an evaluation function such as contrast, and the corresponding predicted in-focus position is recorded.
3. The deep learning-based depth-of-field extension method for full-slice digital imaging according to claim 1, characterized in that the position of the first predicted in-focus image is used as the photographing position for all sub-images; the other sub-images are photographed at that position to obtain single-shot tentative defocused images.
4. The deep learning-based depth-of-field extension method for full-slice digital imaging according to claim 1, characterized in that the input to the network is a single-shot tentative defocused image.
5. The deep learning-based depth-of-field extension method for full-slice digital imaging according to claim 1, characterized in that the output of the network is the depth-of-field-extended image, whose ground truth is obtained by fusing the z-stack image stack with the open-source software ImageJ.
6. The deep learning-based depth-of-field extension method for full-slice digital imaging according to claim 1, characterized in that the depth-of-field-extended image is output after the single-shot tentative defocused image is processed by the neural network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010934437.8A CN112070887A (en) | 2020-09-08 | 2020-09-08 | Deep learning-based full-slice digital imaging depth-of-field extension method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112070887A true CN112070887A (en) | 2020-12-11 |
Family
ID=73664227
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010934437.8A Pending CN112070887A (en) | 2020-09-08 | 2020-09-08 | Deep learning-based full-slice digital imaging depth-of-field extension method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112070887A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107103601A (en) * | 2017-04-14 | 2017-08-29 | 成都知识视觉科技有限公司 | A kind of cell mitogen detection method in breast cancer points-scoring system |
CN108830149A (en) * | 2018-05-07 | 2018-11-16 | 深圳市恒扬数据股份有限公司 | A kind of detection method and terminal device of target bacteria |
Non-Patent Citations (3)
Title |
---|
QIANG LI ET AL.: "Rapid whole slide imaging via learning-based two-shot virtual autofocusing", 《ARXIV》 * |
ZHANG, GUORONG ET AL.: "Production of histological digital slides and its significance", China Medical Education Technology * |
GE, YUNHAO: "Research on automatic focusing and globally accurate imaging of pathology microscopes based on convolutional neural networks", China Master's Theses Full-text Database, Engineering Science and Technology II * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112633248A (en) * | 2021-01-05 | 2021-04-09 | 清华大学深圳国际研究生院 | Deep learning all-in-focus microscopic image acquisition method |
WO2022148132A1 (en) * | 2021-01-05 | 2022-07-14 | 清华大学深圳国际研究生院 | Deep learning-based all-in-focus microscopic image acquiring method |
CN112633248B (en) * | 2021-01-05 | 2023-08-18 | 清华大学深圳国际研究生院 | Deep learning full-in-focus microscopic image acquisition method |
WO2024113403A1 (en) * | 2022-11-28 | 2024-06-06 | 深圳先进技术研究院 | Imaging system depth-of-field extension method and system, electronic device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20201211 |