CN112070660B - Full-slice digital imaging self-adaptive automatic focusing method based on transfer learning


Info

Publication number
CN112070660B
Authority
CN
China
Prior art keywords
image
automatic focusing
transfer learning
training
network
Legal status
Active
Application number
CN202010935487.8A
Other languages
Chinese (zh)
Other versions
CN112070660A (en)
Inventor
刘贤明
李强
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
2020-09-08
Filing date
2020-09-08
Publication date
2022-08-12
Application filed by Harbin Institute of Technology
Priority to CN202010935487.8A
Publication of CN112070660A
Application granted
Publication of CN112070660B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G06T3/04
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The invention belongs to the field of biomedical instruments and specifically relates to a full-slice digital imaging self-adaptive automatic focusing method based on transfer learning. The method provides a neural-network-based transfer learning framework whose core is an autofocus network; the designed autofocus network calculates the in-focus distance from an out-of-focus image. Data from a new data set are added gradually, in an iterative manner, for transfer-learning training. Trained in this way, the autofocus network attains the required generalization and adaptability, supports data acquisition for different biological cell samples, and virtualizes in software part of the hardware of a conventional full-slice digital pathology imaging system.

Description

Full-slice digital imaging self-adaptive automatic focusing method based on transfer learning
Technical Field
The invention belongs to the field of biomedical instruments and takes deep learning technology as its core. It relates in particular to a full-slice digital imaging self-adaptive automatic focusing method based on transfer learning, which can be widely applied to research in instrument science, artificial intelligence, medical imaging, automation, and related fields.
Background
In recent years, advanced digital pathology imaging techniques have been widely studied and applied. Full-slice digital imaging (WSI, Whole Slide Imaging), also known as virtual microscopy, captures traditional microscope slides as digitized images, enabling arbitrary computer access, easy storage, and remote exchange between researchers and doctors. Full-slice digital imaging is of critical importance in biological imaging research, for example in cancer analysis and disease prediction. The U.S. Food and Drug Administration has adopted the Philips full-slice digital imaging system as a primary means of pathological analysis.
Full-slice digital imaging is typically realized in two steps: (1) the pathological sample is scanned sub-region by sub-region, and the sub-images are stitched together into a complete, full-field pathological slide image; (2) purpose-built software is used to identify and analyze these digital images. The first step is crucial for the quality of the acquired image. At present, the main challenge in full-slice digital imaging is how to produce high-quality in-focus images quickly. Full-slice scanning generally requires a high-resolution objective lens whose depth of field is very limited compared with the millimeter-scale extent of the sample, so the unevenly distributed sub-images obtained during scanning are partly out of focus. This defocus is a major cause of degraded full-slice digital imaging performance.
At present, a widely used way to obtain high-quality full-slice digital images is in-focus image matching. This method supplies an in-focus prior: for each lateral position, the pathological sample is moved along the optical axis to acquire a series of out-of-focus images at different defocus distances (a z-stack), and the corresponding in-focus image is then selected by maximizing image contrast, or another image quality metric, over the stack (an illustrative sketch is given below). The procedure must be applied one by one to every sequentially scanned sub-region, and the repeated axial measurements reduce imaging speed significantly. Other approaches exist; for example, a dual-camera device can provide autofocus and avoid axial layer-by-layer scanning of the pathological slide, but hardware incompatibility, high cost, and similar problems make it unsuitable to add such an imaging module to a conventional microscope.
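For illustration only, the contrast-maximization step of this conventional z-stack approach can be sketched as follows; the metric (variance of the Laplacian), the libraries, and all names are generic assumptions and are not taken from the patent:

    import numpy as np
    from scipy.ndimage import laplace

    def pick_in_focus_index(z_stack):
        # z_stack: array of shape (num_planes, H, W), one grayscale image per
        # axial position. The plane with the largest Laplacian variance (a
        # common image-contrast metric) is taken as the in-focus plane.
        scores = [np.var(laplace(plane.astype(np.float64))) for plane in z_stack]
        return int(np.argmax(scores))

    # Example: 41 random planes stand in for a real z-stack of one sub-region.
    stack = np.random.rand(41, 256, 256)
    best_plane = pick_in_focus_index(stack)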
Disclosure of Invention
In view of the limitations of the conventional methods, the present invention uses machine learning to solve the autofocus problem of full-slice digital imaging. The method provides a neural-network-based transfer learning framework whose core is an autofocus network; the designed autofocus network calculates the in-focus distance from an out-of-focus image. Data from the new data set are added gradually, in an iterative manner, for small-batch transfer-learning training. Trained in this way, the autofocus network attains the required generalization and adaptability, supports data acquisition for different biological cell samples, and virtualizes in software part of the hardware of a conventional full-slice digital pathology imaging system.
The purpose of the invention is realized as follows:
a full-slice digital imaging self-adaptive automatic focusing method based on transfer learning comprises the following steps:
step a, inputting a defocused image;
step b, automatically focusing the network;
step c, predicting the quasi-focal distance;
step d, training new data,
and a transfer learning method is adopted to realize a self-adaptive automatic focusing method under different data.
Further, the input out-of-focus images come from z-stacks acquired by axial scanning at different lateral sub-image positions; each sub-image position yields 20 out-of-focus images on each side of focus (positive and negative defocus) and one in-focus image, 41 images in total.
Further, the autofocus network is pre-trained on the training images; during testing it is judged whether the test image belongs to the same category as the training images, and if not, small-scale training is performed using the newly input data.
Further, the new-data training uses data of a category different from the original training data, which are fed into the autofocus network iteratively in small batches to complete the transfer learning of the network.
Beneficial effects:
The invention realizes a full-slice digital imaging self-adaptive automatic focusing method based on transfer learning, embodied in the following aspects:
First, by adopting the autofocus network, the method predicts the in-focus distance of an out-of-focus image and thereby replaces hardware focus-compensation movement with software.
Second, the invention adopts a transfer-learning network algorithm: during transfer learning, only small batches of data from the new data set are needed to further fine-tune the pre-trained network; the iterative loop keeps the amount of new data to be learned as small as possible while maintaining autofocus accuracy; and realizing this in software virtualizes the full-slice digital imaging system and improves its generalization capability and adaptability.
Drawings
FIG. 1 is a block diagram of the full-slice digital imaging adaptive auto-focusing method based on transfer learning of the present invention;
FIG. 2 is a flow chart of the algorithm of an embodiment;
FIG. 3 is a diagram of the autofocus network architecture.
In the figures: a first convolution layer 1, a pooling layer 2, a second convolution layer 3, a third convolution layer 4, a smoothing layer 5, and a fully connected layer 6.
Detailed Description
The following further illustrates embodiments of the process of the present invention.
With reference to fig. 1 to fig. 3, the full-slice digital imaging adaptive auto-focusing method based on transfer learning disclosed in this embodiment includes the following steps:
step a, inputting an out-of-focus image;
step b, applying the autofocus network;
step c, predicting the in-focus distance;
step d, training on new data;
by means of transfer learning, an adaptive automatic focusing method under different data is realized.
Specifically, the input out-of-focus images come from z-stacks acquired by axial scanning at different lateral sub-image positions; each sub-image position yields 20 out-of-focus images on each side of focus plus one in-focus image, 41 images in total.
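For illustration, the axial labels for one such stack could be generated as below; the physical step size and the use of signed distances are assumptions made only for this sketch, as the patent does not specify them:

    import numpy as np

    def zstack_offsets(num_each_side=20, step=1.0):
        # 20 negative-defocus planes, the in-focus plane (0), and 20
        # positive-defocus planes: 41 axial positions per sub-image location.
        # The step size of 1.0 (e.g. micrometres) is an assumed placeholder.
        return np.arange(-num_each_side, num_each_side + 1) * step

    offsets = zstack_offsets()                 # shape (41,)
    # Training pairs for the autofocus network: (defocused image, distance back
    # to focus). Whether that distance is signed or absolute is an assumption.
    # pairs = [(stack[i], -offsets[i]) for i in range(len(offsets))]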
Specifically, the autofocus network is pre-trained on the training images; during testing it is judged whether the test image belongs to the same category as the training images, and if not, small-scale training is performed using the newly input data.
Specifically, the new-data training uses data of a category different from the original training data, which are fed into the autofocus network iteratively in small batches to complete the transfer learning of the network.
As shown in the figures, the full-slice digital imaging self-adaptive automatic focusing method based on transfer learning mainly comprises four parts: the input out-of-focus image, the autofocus network, the predicted in-focus distance, and the new training data set. The autofocus network mainly includes: a first convolution layer (5 × 5, stride 1) 1, a pooling layer 2, a second convolution layer (3 × 3, stride 1) 3, a third convolution layer (3 × 3, stride 2) 4, a smoothing layer 5, and a fully connected layer 6 (see the sketch below).
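A minimal PyTorch sketch of a network with the layer sequence listed above is given here. The channel counts, activations, pooling size, input resolution, and the reading of the smoothing layer as a flatten step are assumptions; the patent names only the layer types, kernel sizes, and strides:

    import torch
    import torch.nn as nn

    class AutoFocusNet(nn.Module):
        # Regresses a single in-focus distance from one defocused image patch.
        def __init__(self, in_channels=1):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=5, stride=1, padding=2),  # first convolution, 5x5, stride 1
                nn.ReLU(),
                nn.MaxPool2d(2),                                                 # pooling layer
                nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1),           # second convolution, 3x3, stride 1
                nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),           # third convolution, 3x3, stride 2
                nn.ReLU(),
                nn.Flatten(),                                                    # smoothing layer read as a flatten step (assumption)
                nn.LazyLinear(1),                                                # fully connected layer -> predicted in-focus distance
            )

        def forward(self, x):
            return self.net(x)

    net = AutoFocusNet()
    out = net(torch.randn(1, 1, 64, 64))   # one single-channel 64x64 patch -> tensor of shape (1, 1)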
(1) In the testing process, the input out-of-focus image is acquired with conventional full-slice digital imaging as the reference, i.e., by continuous axial movement while laterally scanning several uniformly selected sub-image positions, yielding 20 images on each side of focus plus one sharp image, 41 in total.
(2) It is judged whether the input out-of-focus image belongs to the same category as the training set. If so, the out-of-focus image is fed into the pre-trained autofocus network; if not, a small batch of data from the new data set is taken for fine-tuning under transfer learning, where N = N + 1 indicates that one small batch of the data set is added at a time.
(3) It is then judged whether the autofocus network needs retraining; if so, the procedure returns to the image-input step (checking whether M iterations have been performed, with the value of M chosen manually); if not, the predicted in-focus distance is output directly. Steps (1) to (3) are summarized in the sketch below.
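Steps (1) to (3) amount to the loop sketched below: predict directly when the input matches the training category, otherwise fine-tune the pre-trained network one small batch at a time (N = N + 1) for at most M manually chosen iterations. The optimizer, learning rate, loss function, and all names are assumptions made for this sketch and are not specified in the patent:

    import torch
    import torch.nn as nn

    def adaptive_predict(model, defocused, same_category, new_batches, M=5, lr=1e-4):
        # model: a regression network such as AutoFocusNet above.
        # defocused: tensor of shape (1, C, H, W); new_batches: iterable of
        # (images, distances) mini-batches drawn from the new data set, used
        # only when the input category differs from the training category.
        if not same_category:
            optimizer = torch.optim.Adam(model.parameters(), lr=lr)
            model.train()
            for n, (images, distances) in enumerate(new_batches, start=1):  # N = N + 1 per batch
                loss = nn.functional.mse_loss(model(images).squeeze(1), distances)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                if n >= M:                      # M iterations, chosen manually
                    break
        model.eval()
        with torch.no_grad():
            return model(defocused).item()      # predicted in-focus distance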
In the testing stage, the input out-of-focus images are sampled from a full-field image acquired by a conventional full-slice digital imaging system: a number of sub-regions are sampled uniformly, and again 41 images (positive defocus, negative defocus, and sharp) are extracted per sub-region; the out-of-focus image is then passed through the autofocus network to obtain the predicted in-focus distance on the test set.
In this embodiment, the full-slice digital imaging self-adaptive automatic focusing method based on transfer learning comprises a deep learning pipeline with four parts: the input out-of-focus image, the autofocus network, the new training data set, and the predicted in-focus distance. In the training process, the input out-of-focus images are obtained by axial movement during sub-image scanning, each sub-image position yielding 20 images on each side of focus plus one sharp image, 41 in total; the autofocus network parameters are trained on the out-of-focus images alone; and the predicted in-focus distance corresponding to the out-of-focus image is the final output of the network. In the testing stage, the input out-of-focus images are sampled from a full-field image acquired by a conventional full-slice digital imaging system, a number of sub-regions are sampled uniformly, and again 41 positive-defocus, negative-defocus, and sharp images are extracted; a single forward pass of the out-of-focus image through the autofocus network yields the predicted in-focus distance on the test set. In the transfer-learning process, only a small amount of data from the new data set is needed to further fine-tune the pre-trained network; the iterative loop keeps the amount of new data to be learned as small as possible while maintaining autofocus accuracy.
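A short test-time usage sketch follows: each uniformly sampled sub-region patch is passed through the trained network to obtain its predicted in-focus distance. The batched tensor layout and the function name are illustrative assumptions:

    import torch

    def predict_focus_map(model, sub_images):
        # sub_images: tensor of shape (K, C, H, W), one defocused patch per
        # uniformly sampled sub-region. Returns K predicted in-focus distances,
        # one per patch, which the scanner can use to correct focus.
        model.eval()
        with torch.no_grad():
            return model(sub_images).squeeze(1)     # shape (K,)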

Claims (1)

1. A full-slice digital imaging self-adaptive automatic focusing method based on transfer learning, characterized by comprising the following steps:
step a, inputting an out-of-focus image, wherein the input out-of-focus images come from z-stacks acquired by axial scanning at different lateral sub-image positions, each sub-image position yielding 20 out-of-focus images on each side of focus (positive and negative defocus) and one in-focus image, 41 images in total;
step b, applying the autofocus network, which is pre-trained on the training images; during testing it is judged whether the test image belongs to the same category as the training images, and if not, small-scale training is performed using newly input data;
step c, predicting the in-focus distance;
step d, training on new data, wherein data of a category different from the training data are adopted and fed into the autofocus network iteratively in small batches to complete the transfer learning of the network;
whereby an adaptive automatic focusing method under different data is realized by means of transfer learning.
CN202010935487.8A 2020-09-08 2020-09-08 Full-slice digital imaging self-adaptive automatic focusing method based on transfer learning Active CN112070660B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010935487.8A (granted as CN112070660B) | 2020-09-08 | 2020-09-08 | Full-slice digital imaging self-adaptive automatic focusing method based on transfer learning


Publications (2)

Publication Number | Publication Date
CN112070660A (en) | 2020-12-11
CN112070660B (en) | 2022-08-12

Family

ID=73664416

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010935487.8A (Active, granted as CN112070660B) | Full-slice digital imaging self-adaptive automatic focusing method based on transfer learning | 2020-09-08 | 2020-09-08

Country Status (1)

Country Link
CN (1) CN112070660B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112633248B * | 2021-01-05 | 2023-08-18 | 清华大学深圳国际研究生院 | Deep learning full-in-focus microscopic image acquisition method


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110858391A * | 2018-08-23 | 2020-03-03 | General Electric Co. (通用电气公司) | Patient-specific deep learning image denoising method and system
CN112053304A * | 2020-09-08 | 2020-12-08 | Harbin Institute of Technology (哈尔滨工业大学) | Rapid focusing restoration method for single shooting of full-slice digital imaging

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Pinkard, Henry, et al. "Deep learning for single-shot autofocus microscopy." Optica, vol. 6, no. 6, 2019-06-05, pp. 794-797. *
Jiang, S. W., et al. "Transform- and multi-domain deep learning for single-frame rapid autofocusing in whole slide imaging." Biomedical Optics Express, vol. 9, no. 4, 2018-04-20, pp. 1601-1612. *

Also Published As

Publication number | Publication date
CN112070660A (en) | 2020-12-11

Similar Documents

Publication Publication Date Title
CN111007661B (en) Microscopic image automatic focusing method and device based on deep learning
CN109873948B (en) Intelligent automatic focusing method and device for optical microscope and storage device
JP6576921B2 (en) Autofocus method and system for multispectral imaging
CN109085695B (en) Method for quickly focusing and photographing plane sample
CN111948784B (en) Iterative optimization automatic focusing method based on hill climbing method
CN111462076A (en) Method and system for detecting fuzzy area of full-slice digital pathological image
CN111161272B (en) Embryo tissue segmentation method based on generation of confrontation network
CN111462075A (en) Rapid refocusing method and system for full-slice digital pathological image fuzzy area
CN112070660B (en) Full-slice digital imaging self-adaptive automatic focusing method based on transfer learning
TWI811758B (en) Deep learning model for auto-focusing microscope systems, method of automatically focusing a microscope system, and non-transitory computer readable medium
CN116051411A (en) Microscopic image fuzzy kernel extraction and defocusing restoration method based on depth convolution network
CN116612092A (en) Microscope image definition evaluation method based on improved MobileViT network
He et al. Microscope images automatic focus algorithm based on eight-neighborhood operator and least square planar fitting
WO2022183078A1 (en) Computational refocusing-assisted deep learning
CN112070661A (en) Full-slice digital imaging rapid automatic focusing method based on deep learning
CN112069735B (en) Full-slice digital imaging high-precision automatic focusing method based on asymmetric aberration
CN113705298A (en) Image acquisition method and device, computer equipment and storage medium
CN112053304A (en) Rapid focusing restoration method for single shooting of full-slice digital imaging
CN110739051B (en) Method for establishing eosinophilic granulocyte proportion model by using nasal polyp pathological picture
CN112070887A (en) Depth learning-based full-slice digital imaging depth of field extension method
CN111505816A (en) High-flux electron microscope imaging method and system
CN112037152A (en) Full-slice digital imaging two-step quasi-focus restoration method based on deep learning
CN112037153A (en) Full-slice digital imaging quasi-focus restoration method based on quasi-focus distance prior
CN115019130A (en) Training method of virtual dyeing model and method for generating bright field virtual dyeing image
CN115428037A (en) Method and system for collecting living cell biological sample fluorescence image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant