WO2021073335A1 - A lensless holographic microscopic particle characterization method based on a convolutional neural network - Google Patents

A lensless holographic microscopic particle characterization method based on a convolutional neural network - Download PDF

Info

Publication number
WO2021073335A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
neural network
convolutional neural
particle
microscopic
Prior art date
Application number
PCT/CN2020/115352
Other languages
English (en)
French (fr)
Inventor
曹汛
黄烨
华夏
闫锋
Original Assignee
南京大学
Priority date
Filing date
Publication date
Application filed by 南京大学 (Nanjing University)
Publication of WO2021073335A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/41 Refractivity; Phase-affecting properties, e.g. optical path length
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N15/00 Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695 Preprocessing, e.g. image segmentation

Definitions

  • The invention belongs to the field of microscopic imaging, and in particular relates to a lensless holographic microscopic particle characterization method based on a convolutional neural network.
  • Lensless holographic microscopy has emerged as a new imaging technology in recent years. To obtain high resolution, traditional optical microscopes must use magnifying objectives and eyepieces to observe tiny biological structures.
  • Lensless holographic microscopy abandons optical lenses entirely and directly samples the light passing through the object. As a digital holography technique, it captures the light with the photosensitive array of a sensor, converts it photoelectrically into image information, and allows convenient subsequent image processing.
  • Lensless holographic microscopy has a compact structure and a field of view the same size as the imaging sensor, providing a possible solution for the simultaneous characterization of many particles over a large field of view in resource-limited environments.
  • Locating and characterizing colloidal particles is of great significance to research in biomedicine, fluid mechanics, and soft matter physics.
  • Such research typically uses holographic microscopy, combined with light propagation theory and light scattering theory, to extract useful information from microscopic images.
  • Previous work on characterizing and tracking colloidal particles and other soft matter is based on standard inverted optical microscopy, using scattering theory and light propagation theory to quantitatively analyze the holographic microscopic image of a single particle and obtain its precise spatial position, size, and refractive index.
  • The purpose of the present invention is to provide a lensless holographic microscopic particle characterization method based on a convolutional neural network.
  • A lensless holographic microscopic particle characterization method based on a convolutional neural network includes the following steps:
  • S1: first collect a dark-field image, then collect a bright-field image under uniform illumination from the light source;
  • S3: perform flat-field correction on all microscopic images collected in step S2;
  • S4: for each flat-field-corrected microscopic image, calculate the in-plane positions of all particles, and crop the image of each particle using its position as the center and a fixed size as the radius;
  • The particle suspension is first diluted to the point where particles almost never overlap, and the particles are randomly distributed in the suspension.
  • Each captured microscopic image contains particles of only one refractive index; several drops of the suspension of each refractive index are taken at random, and multiple sets of images are captured separately.
  • The specific method of flat-field correction is: denote the collected dark-field image as I_d, the collected bright-field image as I_0, and any hologram in the collected image sequence as I; the flat-field-corrected image b of hologram I is then expressed as b = (I - I_d) / (I_0 - I_d).
  • In step S4, the centers of all particles are calculated using the direction alignment transformation method, and crops of identical size, matched to the input size of the convolutional neural network, are taken to obtain a uniformly sized data set. The specific steps of the direction alignment transformation method are:
  • (1) The flat-field-corrected image b(r) is convolved with a Savitzky-Golay filter to obtain the gradient image ∇b(r). Each pixel is assigned a direction φ(r), the angle between the gradient at that pixel and the x-axis, and the spatial variation direction of the gradient is encoded as ψ(r) = |∇b(r)| e^{2iφ(r)}, whose phase is distributed in [0, 2π]. ψ(r) is then convolved with a symmetric conversion kernel K(r) = (1/r) e^{-2iθ}, where θ denotes the phase of K(r); the phase of K(r) and the phase of ψ(r) are complementary, i.e., at each pixel they sum to 2π. The factor 1/r is an attenuation factor that ensures each pixel has equal weight in the estimate of its center.
  • The convolution yields Ψ(r) = ∫ K(r - r′) ψ(r′) d²r′, where Ψ(r) is the complex amplitude map obtained from the direction alignment transformation.
  • (2) The particle centers of the flat-field-corrected image b(r) are converted into the brightness centers of the transformed intensity map |Ψ(r)|².
  • The brightness center of a connected region is obtained as the weighted average of the coordinates of its connected pixels.
  • The weighting coefficient is determined by the brightness of each pixel: the greater the brightness, the higher the weight.
  • Each brightness center represents the center of one particle.
  • The method of the present invention cleverly converts the problem of characterizing particle refractive index into a classification problem.
  • Its significant advantages are: (1) lensless holographic microscopy meets the need for large-field-of-view observation of biological samples, enabling characterization and identification of biological samples over a large field of view.
  • Fig. 1 is a flowchart of the method for characterizing lens-free holographic microparticles based on a convolutional neural network according to the present invention.
  • Fig. 2 is a schematic diagram of the structure of the device used in an embodiment of the present invention, in which 1 is the coherent light source, 2 the sample, and 3 the sensor.
  • Fig. 3 is a large field of view hologram taken in an embodiment of the present invention.
  • Fig. 4 is a schematic diagram of the result of using direction alignment transformation to mark the center of the particle and perform cutting.
  • S1: turn off the light source and use image sensor 3 to capture dark-field images under darkroom conditions (no ambient stray light).
  • The lensless holographic microscopic device used to capture images is shown in Fig. 2 and includes a coherent light source 1 and an image sensor 3. Turn on the light source and collect the bright-field image under uniform illumination, again under darkroom conditions (no ambient stray light).
  • S2: place the sample 2 (a suspension sample) above the sensor 3.
  • The distance between sample 2 and sensor 3 is much smaller than the distance between sample 2 and the coherent light source 1.
  • The incident wave propagating from sample 2 to the plane of sensor 3 can therefore be regarded as a plane wave.
  • This also guarantees unit magnification of the lensless holographic microscopy device (i.e., sample 2 is essentially not magnified).
  • A large field of view (FOV) the same size as the chip is obtained without any other optical components.
  • Light source 1 is turned on, the linearly polarized laser beam is incident on the plane of sample 2, and sample 2 scatters the incident light.
  • The incident light and the scattered light interfere on the plane of sensor 3, and sensor 3 records the interference pattern, i.e., the hologram.
  • Each suspension sample contains only particles of the same refractive index; their sizes may differ.
  • For each shot, select a suspension sample and dilute it to the point where particles almost never overlap; the particles are randomly distributed in the suspension.
  • Solution is drawn from the suspension several times and dropped onto sensor 3; after standing briefly, a holographic image is taken and annotated with the refractive index of the particles in the image.
  • Step S3: perform flat-field correction on all holographic microscopic images from step S2.
  • Flat-field correcting the holographic image yields a relative-value image; this does not adversely affect image processing, eliminates the non-uniform response of individual pixels, and alleviates non-uniform image values caused by uneven illumination.
  • S4: for each flat-field-corrected microscopic image, calculate the in-plane positions of all particles, and crop the image of each particle using its position as the center and a fixed size as the radius.
  • A continuous transformation based on local directions, called the direction alignment transformation, is used to effectively detect particle centers.
  • According to in-line holography theory, the captured particle image is the superposition of incident and scattered light, which appears as alternating bright and dark concentric rings. Detecting the center of each particle therefore means detecting the center of each set of concentric rings.
  • The intensity gradient at pixels on the ring edges always points toward or away from the ring center.
  • Particle localization based on the direction alignment transformation mainly includes the following steps:
  • (1) The flat-field-corrected microscopic image b(r) is convolved with a Savitzky-Golay filter to obtain the gradient image ∇b(r).
  • Each pixel in the gradient image is assigned a direction: φ(r), the angle between the gradient and the x-axis.
  • The spatial variation direction of the gradient is then encoded as ψ(r) = |∇b(r)| e^{2iφ(r)}.
  • The factor 2 in the exponent reflects the nature of direction information obtained from a gradient, so the phase of ψ(r) is distributed in [0, 2π]. Weighting by the gradient magnitude |∇b(r)| weights each pixel according to its gradient value and emphasizes contributions from high-gradient regions. Given the symmetry of the direction field, ψ(r) is convolved with a symmetric conversion kernel of the form K(r) = (1/r) e^{-2iθ}.
  • Larger distances from a ring center correspond to more pixels in b(r) and would otherwise have a greater influence on the estimated center position of b(r).
  • The factor 1/r ensures that all fringes in the scattering image have, as far as possible, equal weight in the estimate of their center.
  • The convolution yields Ψ(r) = ∫ K(r - r′) ψ(r′) d²r′. The transformation in essence votes for candidate ring centers (i.e., particle centers): the larger a pixel's value, the more likely it is to be a particle center.
  • The direction alignment transformation can be computed conveniently via the Fourier convolution theorem: Ψ(r) = F⁻¹{ K̃(k) ψ̃(k) }, where ψ̃(k) is the Fourier transform of ψ(r) and K̃(k) is the Fourier transform of K(r).
  • The direction alignment transformation is performed by taking a fast Fourier transform of ψ(r), multiplying it in the frequency domain by the pre-computed convolution kernel K̃(k), and then performing one inverse Fourier transform.
  • (2) Local brightness maxima in the image are selected by a suitable threshold and the image is binarized.
  • The sub-pixel localization algorithm proceeds as follows: find the connected regions of the binary image using eight-connectivity, label each connected region 1, 2, 3, ..., n, and count the regions; this count is the number of localized ring centers. Scan each connected region to find its connected pixels, i.e., the candidate brightness center points, and store their coordinates and direction-alignment-transformed pixel values.
  • The position of the ring center within each region is then calculated.
  • The ring center position is obtained as the weighted average of the coordinates of the connected pixels.
  • Taking each detected ring center as a particle center, the original image is cropped into fixed-size patches, each carrying annotation information, namely the refractive index of the particle.
  • A schematic of the result of detecting ring centers with the direction alignment transformation and cropping is shown in Fig. 4: the crosshairs mark the ring centers and the rectangular boxes mark the cropped regions.
  • S5: after cleaning, all cropped images are randomly divided into a training set, a validation set, and a test set. The training set is used as the input of the convolutional neural network to train the classification network; performance and training parameters are validated on the validation set; finally the classification performance is tested on the test set.
  • The classification label of a particle is the refractive index characterization result of that particle.
  • The convolutional neural network used is a deep residual network (ResNet) with 50 layers.
  • The advantage of this network is that accuracy keeps rising as layers are added, without gradient explosion or gradient vanishing.
  • Using residuals as the network output speeds up convergence and makes the network more sensitive to the small differences between scattering images caused by different particle refractive indices.
  • Image cleaning removes invalid and dirty images, such as images with overlapping particles, images without particles, and contaminated images, keeping only images in which the particle is centered, the pattern is clear, and the brightness is moderate. All images are then randomly divided into training, validation, and test sets in a 7:2:1 ratio. The training data are fed into the deep residual classification convolutional neural network to train the network parameters; performance is validated on the validation set at regular intervals and hyperparameters are adjusted; the final performance is tested on the test set.
  • Because the refractive index information is discretized, the refractive index label corresponding to the classification result is the refractive index characterization result of the particle, characterizing its composition.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Biochemistry (AREA)
  • Pathology (AREA)
  • Analytical Chemistry (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Dispersion Chemistry (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Holography (AREA)

Abstract

A lensless holographic microscopic particle characterization method based on a convolutional neural network, comprising the steps of: S1, first collecting a dark-field image, then collecting a bright-field image under uniform illumination from the light source (1); S2, placing a sample (2) above the sensor (3), collecting microscopic images of samples (2) with different refractive indices, and annotating the refractive index corresponding to each image; S3, performing flat-field correction on all holographic microscopic images; S4, calculating the centers of all particles in each image and cropping the image of each particle; S5, cleaning all cropped images and randomly dividing them into a training set, a validation set, and a test set; using the training set as the input of the convolutional neural network to train the classification network, validating the performance and training parameters on the validation set, and finally testing the classification performance on the test set; the classification label of a particle is the refractive index characterization result of that particle. The method thereby enables fast, convenient, and accurate characterization of biological samples over a large field of view.

Description

A lensless holographic microscopic particle characterization method based on a convolutional neural network

Technical Field

The invention belongs to the field of microscopic imaging, and in particular relates to a lensless holographic microscopic particle characterization method based on a convolutional neural network.

Background Art

Lensless holographic microscopy has emerged as a new imaging technology in recent years. To obtain high resolution, traditional optical microscopes must use magnifying objectives and eyepieces to observe tiny biological structures. Lensless holographic microscopy, by contrast, abandons optical lenses entirely and directly samples the light passing through the object. As a digital holography technique, it captures light with the photosensitive array of a sensor and recovers image information through photoelectric conversion, making subsequent image processing convenient. Moreover, lensless holographic microscopy has a compact structure and a field of view the same size as the imaging sensor, providing a possible solution for the simultaneous characterization of many particles over a large field of view in resource-limited environments.

Locating and characterizing colloidal particles is of great significance to research in biomedicine, fluid mechanics, and soft matter physics. Such research typically uses holographic microscopy, combined with light propagation theory and light scattering theory, to extract useful information from microscopic images. Previous work on characterizing and tracking colloidal particles and other soft matter is based on standard inverted optical microscopy, using scattering theory and light propagation theory to quantitatively analyze the holographic microscopic image of a single particle and obtain its precise spatial position, size, and refractive index. This approach has been extended to the characterization of colloidal aggregates, protein aggregates, and various particles or their aggregates in water, yielding their effective refractive indices and thereby distinguishing them from other suspended matter. To better fit holograms to the corresponding scattering theory, these techniques all employ heuristic algorithms, such as least-squares iterative fitting, which are typically time-consuming and computationally expensive.

In recent years, with the popularization and development of machine learning and deep learning, both have gradually been applied to lensless microscopy and the study of colloidal particles. Thanks to the speed and efficiency of convolutional neural networks, autofocus algorithms in lensless microscopy can be rapidly and equivalently implemented by deep convolutional neural networks, and colloidal particle tracking can likewise be performed by them; yet existing particle characterization still relies on the heuristic algorithms above, with complex and time-consuming iterations that fall far short of current needs.
Summary of the Invention

In view of the above shortcomings of the prior art, the purpose of the present invention is to provide a lensless holographic microscopic particle characterization method based on a convolutional neural network.

The technical solution adopted by the present invention is as follows.

A lensless holographic microscopic particle characterization method based on a convolutional neural network includes the following steps:

S1: first collect a dark-field image, then collect a bright-field image under uniform illumination from the light source;

S2: place the particle suspension above the sensor, ensuring that the distance from the suspension to the sensor is much smaller than the distance from the suspension to the light source; turn on the light source, collect microscopic images of particle suspensions with different refractive indices, and annotate the refractive index corresponding to each image;

S3: perform flat-field correction on all microscopic images collected in step S2;

S4: for each flat-field-corrected microscopic image, calculate the in-plane positions of all particles, and crop the image of each particle using its position as the center and a fixed size as the radius;

S5: after cleaning, randomly divide all cropped images into a training set, a validation set, and a test set; use the training set as the input of the convolutional neural network to train the classification network, validate the performance and training parameters on the validation set, and finally test the classification performance on the test set; the classification label of a particle is the refractive index characterization result of that particle.

Further, in step S2, the particle suspension is first diluted to the point where particles almost never overlap, and the particles are randomly distributed in the suspension.

Further, in step S2, each captured microscopic image contains particles of only one refractive index; several drops of the suspension of each refractive index are taken at random, and multiple sets of images are captured separately.

Further, in step S3, the specific method of flat-field correction is: denote the collected dark-field image as I_d, the collected bright-field image as I_0, and any hologram in the collected image sequence as I; the flat-field-corrected image b of hologram I is then expressed as

b = (I - I_d) / (I_0 - I_d)
Further, in step S4, the direction alignment transformation method is used to calculate the centers of all particles, and crops of identical size, matched to the input size of the convolutional neural network, are taken to obtain a data set of uniform size. The specific steps of the direction alignment transformation method are:

(1) Perform the direction alignment transformation

Convolve the flat-field-corrected image b(r) with a Savitzky-Golay filter to obtain the gradient image ∇b(r), where r is the coordinate vector. The spatial variation direction ψ(r) of the gradient in b(r) is

ψ(r) = |∇b(r)| e^{2iφ(r)}

where |∇b(r)| is the gradient magnitude and φ(r) is the angle between the direction assigned to each pixel of the gradient image ∇b(r) and the x-axis; the phase of ψ(r) is distributed in [0, 2π]. Convolve ψ(r) with a symmetric conversion kernel

K(r) = (1/r) e^{-2iθ}

where θ is the phase of K(r); the phase of K(r) and the phase of ψ(r) are complementary, i.e., at each pixel they sum to 2π. The factor 1/r is an attenuation factor that ensures each pixel has equal weight in the estimate of its center.

The convolution yields

Ψ(r) = ∫ K(r - r′) ψ(r′) d²r′

where Ψ(r) is the complex amplitude map obtained from the direction alignment transformation.
(2) Sub-pixel localization

Take the squared modulus |Ψ(r)|² of the complex amplitude map Ψ(r) obtained in step (1); this is the intensity map after the direction alignment transformation. The particle centers of the flat-field-corrected image b(r) are thus converted into the brightness centers of |Ψ(r)|². Select the local brightness maxima of the image, i.e., the foreground, by setting a suitable threshold, and binarize the image; then label the eight-connected regions of the binary image, assigning each connected region a class label. Scan each connected region according to the labeling, storing the coordinates and transformed intensity values of every pixel in the region. Finally, compute the brightness center of each connected region as the weighted average of the coordinates of its connected pixels, with the weighting coefficient determined by pixel brightness: the greater the brightness, the higher the weight. Each brightness center represents the center of one particle.
The method of the present invention cleverly converts the problem of characterizing particle refractive index into a classification problem. Compared with the prior art, its significant advantages are: (1) lensless holographic microscopy meets the need for large-field-of-view observation of biological samples, enabling characterization and identification of biological samples over a large field of view; (2) compared with traditional heuristic algorithms that characterize particles using complex theory and nonlinear fitting at high computational cost and long run times, the convolutional neural network-based particle characterization method of the present invention, once the network is trained, characterizes particle refractive index conveniently and quickly; it can complete the refractive index characterization of all particles in one microscopic image almost in real time, and thereby obtain the statistical distribution of particles in the suspension sample.
Brief Description of the Drawings

Fig. 1 is a flowchart of the lensless holographic microscopic particle characterization method based on a convolutional neural network of the present invention.

Fig. 2 is a schematic diagram of the structure of the device used in an embodiment of the present invention, in which 1 is the coherent light source, 2 the sample, and 3 the sensor.

Fig. 3 is a large-field-of-view hologram captured in an embodiment of the present invention.

Fig. 4 is a schematic diagram of the result of marking particle centers with the direction alignment transformation and cropping.
Detailed Description of Embodiments

Referring to Fig. 1, the lensless holographic microscopic particle characterization method based on a convolutional neural network of the present invention proceeds as follows.

S1: Turn off the light source and capture dark-field images with image sensor 3 under darkroom conditions (no ambient stray light). The lensless holographic microscopic device used to capture images is shown in Fig. 2 and includes a coherent light source 1 and an image sensor 3. Turn on the light source and collect the bright-field image under uniform illumination, again under darkroom conditions (no ambient stray light).

S2: Place the sample 2 (a suspension sample) above the sensor 3. The distance from sample 2 to sensor 3 is much smaller than the distance from sample 2 to the coherent light source 1. On the one hand, this allows the incident wave propagating from sample 2 to the plane of sensor 3 to be regarded as a plane wave; on the other hand, it guarantees unit magnification of the lensless holographic microscopy device (i.e., sample 2 is essentially not magnified), providing a large field of view (FOV) the same size as the chip without requiring any other optical components.

Turn on light source 1; the linearly polarized laser beam is incident on the plane of sample 2, and sample 2 scatters the incident light. The incident light and the scattered light interfere on the plane of sensor 3, and sensor 3 records the interference pattern, i.e., the hologram. Each suspension sample contains only particles of a single refractive index; their sizes may differ. For each shot, select one suspension sample and dilute it to the point where particles almost never overlap, the particles being randomly distributed in the suspension. Draw solution from the suspension several times, drop it above sensor 3, let it stand briefly, capture the holographic image, and annotate the refractive index of the particles in the image. After wiping clean, drop solution above sensor 3 again and capture and annotate images in the same way. After randomly collecting multiple sets of data for each suspension sample (corresponding to one refractive index), switch to the next suspension sample, capture microscopic images of particles with a different refractive index, and repeat the process. During capture, keep the image brightness moderate; after the first shot, do not manually adjust the camera parameters again. A captured hologram is shown in Fig. 3.
S3: Perform flat-field correction on all holographic microscopic images from step S2.

In this step, for any hologram in the hologram sequence, denote the dark-field image collected as described in step S1 as I_d, the bright-field image collected as described in step S2 as I_0, and the hologram to be processed in the collected sequence as I; the flat-field-corrected image b of I is then expressed as

b = (I - I_d) / (I_0 - I_d)

Flat-field correcting the holographic image yields a relative-value image; this does not adversely affect image processing, eliminates the non-uniform response of individual pixels, and alleviates non-uniform image values caused by uneven illumination.
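To make this correction concrete, the following is a minimal sketch in Python with NumPy (the function and variable names are illustrative assumptions, not part of the patent text):

```python
import numpy as np

def flat_field_correct(I, I_d, I_0, eps=1e-6):
    """Flat-field correction b = (I - I_d) / (I_0 - I_d).

    I   : raw hologram
    I_d : dark-field image
    I_0 : bright-field image
    All three are 2-D arrays of identical shape; eps guards against
    division by zero at dead pixels where I_0 == I_d.
    """
    I, I_d, I_0 = (np.asarray(a, dtype=np.float64) for a in (I, I_d, I_0))
    denom = I_0 - I_d
    denom = np.where(np.abs(denom) < eps, eps, denom)  # avoid zero division
    return (I - I_d) / denom
```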
S4: For each flat-field-corrected microscopic image, calculate the in-plane positions of all particles, and crop the image of each particle using its position as the center and a fixed size as the radius.

In this step, a continuous transformation based on local directions, called the direction alignment transformation, is used to effectively detect particle centers. According to in-line holography theory, the captured particle image is the superposition of incident and scattered light, which appears as alternating bright and dark concentric rings; detecting each particle center therefore means detecting the center of each set of concentric rings, and the intensity gradient at pixels on the ring edges always points toward or away from the ring center. Particle localization based on the direction alignment transformation mainly includes the following steps:
(1) Direction alignment transformation

To reduce sensitivity to image noise, convolve the flat-field-corrected microscopic image b(r) with a Savitzky-Golay filter to obtain the gradient image ∇b(r). Every pixel of this image is assigned a direction: φ(r), the angle with the x-axis. Define a new parameter to represent the spatial variation direction of the gradient in b(r):

ψ(r) = |∇b(r)| e^{2iφ(r)}

The factor 2 in the exponent reflects the nature of direction information obtained from a gradient, so the phase of ψ(r) is distributed in [0, 2π]. The gradient magnitude |∇b(r)| weights this parameter, weighting each pixel according to its gradient value and emphasizing contributions from high-gradient regions. Given the symmetry of this direction field, convolve ψ(r) with a symmetric conversion kernel of the form

K(r) = (1/r) e^{-2iθ}

The convolution gives

Ψ(r) = ∫ K(r - r′) ψ(r′) d²r′

The phase of K(r) and the phase of ψ(r) are complementary: at each pixel they sum to 2π. It follows that when integrating along a line r - r′ in the direction θ = φ(r′), the integrand is a non-negative real number, whereas integrands along other directions are complex-valued. Integration along the gradient directions of b(r) therefore accumulates positive values at the symmetry center of the image, while the complex-valued contributions generally cancel one another. Larger distances from a ring center correspond to more pixels in b(r) and would otherwise exert a greater influence on the estimated center position; the factor 1/r ensures, as far as possible, that all fringes in the scattering image have equal weight in the estimate of their center. The transformation in essence votes for candidate ring centers (i.e., particle centers): the larger a pixel's value, the more likely it is to be a particle center.
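To make the cancellation argument explicit, write θ for the polar angle of r - r′ and combine the kernel and field defined above:

K(r - r′) ψ(r′) = (|∇b(r′)| / |r - r′|) e^{i[2φ(r′) - 2θ]}

This product is real and non-negative precisely when θ = φ(r′) (mod π), i.e., when the line of integration runs along or against the gradient at r′; in every other direction the residual phase factor remains, and those complex contributions tend to cancel in the integral.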
The direction alignment transformation can be computed conveniently via the Fourier convolution theorem:

Ψ(r) = F⁻¹{ K̃(k) ψ̃(k) }

where ψ̃(k) is the Fourier transform of ψ(r) and K̃(k) is the Fourier transform of K(r). The direction alignment transformation is thus performed by taking a fast Fourier transform of ψ(r), multiplying it in the frequency domain by the pre-computed convolution kernel K̃(k), and then performing one inverse Fourier transform.
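The following Python sketch illustrates one possible implementation of this FFT-based computation (the Savitzky-Golay window and polynomial order, and the clipping of 1/r at the origin, are assumptions not specified in the patent):

```python
import numpy as np
from scipy.signal import savgol_filter

def direction_alignment_transform(b, window=13, polyorder=3):
    """Return B(r) = |Psi(r)|^2 for a flat-field-corrected image b(r)."""
    b = np.asarray(b, dtype=np.float64)
    # Savitzky-Golay smoothed first derivatives along y and x
    gy = savgol_filter(b, window, polyorder, deriv=1, axis=0)
    gx = savgol_filter(b, window, polyorder, deriv=1, axis=1)
    phi = np.arctan2(gy, gx)                   # gradient angle with the x-axis
    psi = np.hypot(gx, gy) * np.exp(2j * phi)  # psi(r) = |grad b| e^{2i phi}

    # Symmetric conversion kernel K(r) = (1/r) e^{-2i theta},
    # sampled on the image grid with the origin at the center
    ny, nx = b.shape
    Y, X = np.mgrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
    r = np.hypot(X, Y)
    K = np.exp(-2j * np.arctan2(Y, X)) / np.maximum(r, 1.0)  # clip 1/r at r = 0

    # Convolution via the Fourier convolution theorem: one FFT of psi,
    # a frequency-domain product with the pre-computed kernel,
    # and one inverse FFT
    K_tilde = np.fft.fft2(np.fft.ifftshift(K))
    Psi = np.fft.ifft2(np.fft.fft2(psi) * K_tilde)
    return np.abs(Psi) ** 2
```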
(2) Sub-pixel localization

After the direction alignment transformation above, the ring centers of b(r) are converted into the brightness centers of B(r) = |Ψ(r)|², so the peak centers must be identified and localized. Local brightness maxima in the image are selected by setting a suitable threshold, and the image is binarized. The sub-pixel localization algorithm proceeds as follows: find the connected regions of the binary image using eight-connectivity and label each connected region 1, 2, 3, ..., n; the number of connected regions is the number of localized ring centers. Scan each connected region to find its connected pixels, i.e., the candidate brightness center points, and store their coordinates and direction-alignment-transformed pixel values. Finally, compute the ring center position within each region as the weighted average of the coordinates of the connected pixels, with the weighting coefficient determined by pixel brightness: the larger B(r) = |Ψ(r)|², the higher the weight.
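A minimal SciPy sketch of this localization step follows (the global threshold heuristic is an assumption; the patent only calls for a suitable threshold):

```python
import numpy as np
from scipy import ndimage

def locate_centers(B, threshold=None):
    """Sub-pixel particle centers from the transformed intensity map B(r)."""
    if threshold is None:
        threshold = B.mean() + 3.0 * B.std()   # assumed heuristic
    mask = B > threshold                       # binarize the foreground
    eight = np.ones((3, 3), dtype=bool)        # eight-connectivity
    labels, n = ndimage.label(mask, structure=eight)
    # Brightness-weighted average of pixel coordinates per region:
    # brighter pixels (larger B) carry higher weight
    centers = ndimage.center_of_mass(B, labels, index=range(1, n + 1))
    return centers                             # list of (row, col), sub-pixel
```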
Finally, taking each detected ring center as the particle center, the original image is cropped into fixed-size patches, each carrying annotation information, namely the refractive index of the particle. A schematic of the result of detecting ring centers with the direction alignment transformation and cropping is shown in Fig. 4: the crosshairs mark the ring centers and the rectangular boxes mark the cropped regions.
S5: After cleaning, all cropped images are randomly divided into a training set, a validation set, and a test set. The training set is used as the input of the convolutional neural network to train the classification network; performance and training parameters are validated on the validation set; finally the classification performance is tested on the test set. The classification label of a particle is the refractive index characterization result of that particle.

In this step, the convolutional neural network used is a deep residual network (ResNet) with 50 layers. The advantage of this network is that accuracy keeps rising as layers are added, without gradient explosion or gradient vanishing. Using residuals as the network output speeds up convergence and makes the network more sensitive to the small differences between scattering images caused by different particle refractive indices.

Image cleaning removes invalid and dirty images, such as images with overlapping particles, images without particles, and contaminated images, keeping only images in which the particle is centered, the pattern is clear, and the brightness is moderate. All images are then randomly divided into training, validation, and test sets in a 7:2:1 ratio. The training data are fed into the deep residual classification convolutional neural network to train the network parameters; performance is validated on the validation set at regular intervals and hyperparameters are adjusted; the final performance is tested on the test set. Because this method discretizes the refractive index information, the refractive index label corresponding to the classification result is the refractive index characterization result of the particle, characterizing its composition.
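As an illustration of this final step, a compact PyTorch sketch follows (tensor names, batch size, learning rate, and epoch count are assumptions; only the 50-layer ResNet and the 7:2:1 split come from the text, and the single-channel first layer assumes grayscale holographic crops):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split
from torchvision.models import resnet50

def train_classifier(patches, labels, num_classes, epochs=30, lr=1e-3):
    """patches: (N, 1, H, W) float tensor of cropped particle images;
    labels: (N,) long tensor of discretized refractive-index classes."""
    dataset = TensorDataset(patches, labels)
    n = len(dataset)
    n_train, n_val = int(0.7 * n), int(0.2 * n)
    train_set, val_set, test_set = random_split(
        dataset, [n_train, n_val, n - n_train - n_val])  # 7:2:1 split

    model = resnet50(num_classes=num_classes)  # 50-layer deep residual network
    # adapt the stem to single-channel input (assumed grayscale crops)
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    loader = DataLoader(train_set, batch_size=64, shuffle=True)

    for _ in range(epochs):
        model.train()
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        # validate on val_set at regular intervals and tune hyperparameters,
        # then evaluate once on test_set after training (omitted for brevity)
    return model
```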

Claims (7)

  1. A lensless holographic microscopic particle characterization method based on a convolutional neural network, characterized by comprising the following steps:
    S1: first collect a dark-field image, then collect a bright-field image under uniform illumination from the light source;
    S2: place the particle suspension above the sensor, ensuring that the distance from the suspension to the sensor is much smaller than the distance from the suspension to the light source; turn on the light source, collect microscopic images of particle suspensions with different refractive indices, and annotate the refractive index corresponding to each image;
    S3: perform flat-field correction on all microscopic images collected in step S2;
    S4: for each flat-field-corrected microscopic image, calculate the in-plane positions of all particles, and crop the image of each particle using its position as the center and a fixed size as the radius;
    S5: after cleaning, randomly divide all cropped images into a training set, a validation set, and a test set; use the training set as the input of the convolutional neural network to train the classification network, validate the performance and training parameters on the validation set, and finally test the classification performance on the test set; the classification label of a particle is the refractive index characterization result of that particle.
  2. The lensless holographic microscopic particle characterization method based on a convolutional neural network according to claim 1, characterized in that in step S2, the particle suspension is first diluted to the point where particles almost never overlap, and the particles are randomly distributed in the suspension.
  3. The lensless holographic microscopic particle characterization method based on a convolutional neural network according to claim 1, characterized in that in step S2, each captured microscopic image contains particles of only one refractive index; several drops of the suspension of each refractive index are taken at random, and multiple sets of images are captured separately.
  4. The lensless holographic microscopic particle characterization method based on a convolutional neural network according to claim 1, characterized in that in step S3, the specific method of flat-field correction is: denote the collected dark-field image as I_d, the collected bright-field image as I_0, and any hologram in the collected image sequence as I; the flat-field-corrected image b of hologram I is then expressed as
    b = (I - I_d) / (I_0 - I_d)
  5. The lensless holographic microscopic particle characterization method based on a convolutional neural network according to claim 1, characterized in that in step S4, the direction alignment transformation method is used to calculate the centers of all particles, and crops of identical size, matched to the input size of the convolutional neural network, are taken to obtain a data set of uniform size; the specific steps of the direction alignment transformation method are:
    (1) Perform the direction alignment transformation
    Convolve the flat-field-corrected image b(r) with a Savitzky-Golay filter to obtain the gradient image ∇b(r), where r is the coordinate vector. The spatial variation direction ψ(r) of the gradient in b(r) is
    ψ(r) = |∇b(r)| e^{2iφ(r)}
    where |∇b(r)| is the gradient magnitude and φ(r) is the angle between the direction assigned to each pixel of the gradient image ∇b(r) and the x-axis; the phase of ψ(r) is distributed in [0, 2π]. Convolve ψ(r) with a symmetric conversion kernel
    K(r) = (1/r) e^{-2iθ}
    where θ is the phase of K(r), the phase of K(r) and the phase of ψ(r) being complementary, i.e., at each pixel they sum to 2π; the factor 1/r is an attenuation factor that ensures each pixel has equal weight in the estimate of its center.
    The convolution yields
    Ψ(r) = ∫ K(r - r′) ψ(r′) d²r′
    where Ψ(r) is the complex amplitude map obtained from the direction alignment transformation.
    (2) Sub-pixel localization
    Take the squared modulus |Ψ(r)|² of the complex amplitude map Ψ(r) obtained in step (1); this is the intensity map after the direction alignment transformation. The particle centers of the flat-field-corrected image b(r) are thus converted into the brightness centers of |Ψ(r)|². Select the local brightness maxima of the image, i.e., the foreground, by setting a suitable threshold, and binarize the image; then label the eight-connected regions of the binary image, assigning each connected region a class label; scan each connected region according to the labeling, storing the coordinates and transformed intensity values of every pixel in the region; finally, compute the brightness center of each connected region as the weighted average of the coordinates of its connected pixels, with the weighting coefficient determined by pixel brightness (the greater the brightness, the higher the weight); each brightness center represents the center of one particle.
  6. The lensless holographic microscopic particle characterization method based on a convolutional neural network according to claim 1, characterized in that in step S5, the convolutional neural network used is a deep residual network.
  7. The lensless holographic microscopic particle characterization method based on a convolutional neural network according to claim 1, characterized in that in step S5, the cleaned images are randomly divided into a training set, a validation set, and a test set in a 7:2:1 ratio.
PCT/CN2020/115352 2019-10-18 2020-09-15 A lensless holographic microscopic particle characterization method based on a convolutional neural network WO2021073335A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910996372.7A CN110836867A (zh) 2019-10-18 2019-10-18 A lensless holographic microscopic particle characterization method based on a convolutional neural network
CN201910996372.7 2019-10-18

Publications (1)

Publication Number Publication Date
WO2021073335A1 2021-04-22

Family

ID=69575389

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/115352 WO2021073335A1 (zh) 2019-10-18 2020-09-15 A lensless holographic microscopic particle characterization method based on a convolutional neural network

Country Status (2)

Country Link
CN (1) CN110836867A (zh)
WO (1) WO2021073335A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782288A (zh) * 2022-06-22 2022-07-22 深圳市润之汇实业有限公司 Image-based lens production process supervision method, apparatus, device, and medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110836867A (zh) * 2019-10-18 2020-02-25 南京大学 A lensless holographic microscopic particle characterization method based on a convolutional neural network
CN111595737B (zh) * 2020-05-15 2021-03-23 厦门大学 Optical holographic particle field particle point detection method based on a three-dimensional branch network
CN111723848A (zh) * 2020-05-26 2020-09-29 浙江工业大学 Automatic marine plankton classification method based on a convolutional neural network and digital holography
CN113740214B (zh) * 2021-11-08 2022-01-25 深圳大学 Intelligent analysis method and device based on holographic evanescent-wave optical tweezers
CN114967397B (zh) * 2022-04-25 2023-04-25 上海交通大学 Lensless holographic three-dimensional imaging construction method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108254295A (zh) * 2018-01-15 2018-07-06 南京大学 Method and device for locating and characterizing spherical particles
CN108562541A (zh) * 2018-04-23 2018-09-21 南京大学 Matrix-factorization-based speckle noise removal method and device for lensless holographic microscopy
US20180292784A1 (en) * 2017-04-07 2018-10-11 Thanh Nguyen APPARATUS, OPTICAL SYSTEM, AND METHOD FOR DIGITAL Holographic microscopy
CN109270670A (zh) * 2018-10-31 2019-01-25 上海理鑫光学科技有限公司 LED array light source, lensless microscope, and image processing method
CN109389557A (zh) * 2018-10-20 2019-02-26 南京大学 Cell image super-resolution method and device based on an image prior
WO2019117453A1 (ko) * 2017-12-15 2019-06-20 주식회사 내일해 Method for generating three-dimensional shape information of an object to be measured, defect detection method, and defect detection device
WO2019171453A1 (ja) * 2018-03-06 2019-09-12 株式会社島津製作所 Cell image analysis method, cell image analysis device, and learning model creation method
CN110308547A (zh) * 2019-08-12 2019-10-08 青岛联合创智科技有限公司 Deep-learning-based lensless microscopic imaging device and method for dense samples
CN110836867A (zh) * 2019-10-18 2020-02-25 南京大学 A lensless holographic microscopic particle characterization method based on a convolutional neural network

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2238942B1 (zh) * 1973-07-27 1976-06-18 Thomson Csf
FR3009084B1 (fr) * 2013-07-23 2015-08-07 Commissariat Energie Atomique Method for sorting cells and associated device
ES2537784B1 (es) * 2013-08-02 2016-04-12 Universitat De Valéncia Holographic reconstruction method based on multi-wavelength lensless in-line microscopy, multi-wavelength lensless in-line holographic microscope, and computer program
KR102425768B1 (ko) * 2014-02-12 2022-07-26 뉴욕 유니버시티 Fast feature identification for holographic tracking and characterization of colloidal particles
US10054777B2 (en) * 2014-11-11 2018-08-21 California Institute Of Technology Common-mode digital holographic microscope
FR3046238B1 (fr) * 2015-12-24 2018-01-26 Commissariat A L'energie Atomique Et Aux Energies Alternatives Method for observing a sample by lensless imaging
EP3535622A4 (en) * 2016-11-04 2020-05-13 miDiagnostics NV SYSTEM AND METHOD FOR OBJECT DETECTION IN HOLOGRAPHIC LENS-FREE IMAGING BY LEARNING AND CONVOLUTIONAL DICTIONARY CODING
CN109447119A (zh) * 2018-09-26 2019-03-08 电子科技大学 Method for identifying casts in urine sediment combining morphological segmentation and SVM
CN110246115A (zh) * 2019-04-23 2019-09-17 西安理工大学 Detection method for far-field laser spot images

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180292784A1 (en) * 2017-04-07 2018-10-11 Thanh Nguyen APPARATUS, OPTICAL SYSTEM, AND METHOD FOR DIGITAL Holographic microscopy
WO2019117453A1 (ko) * 2017-12-15 2019-06-20 주식회사 내일해 Method for generating three-dimensional shape information of an object to be measured, defect detection method, and defect detection device
CN108254295A (zh) * 2018-01-15 2018-07-06 南京大学 Method and device for locating and characterizing spherical particles
WO2019171453A1 (ja) * 2018-03-06 2019-09-12 株式会社島津製作所 Cell image analysis method, cell image analysis device, and learning model creation method
CN108562541A (zh) * 2018-04-23 2018-09-21 南京大学 Matrix-factorization-based speckle noise removal method and device for lensless holographic microscopy
CN109389557A (zh) * 2018-10-20 2019-02-26 南京大学 Cell image super-resolution method and device based on an image prior
CN109270670A (zh) * 2018-10-31 2019-01-25 上海理鑫光学科技有限公司 LED array light source, lensless microscope, and image processing method
CN110308547A (zh) * 2019-08-12 2019-10-08 青岛联合创智科技有限公司 Deep-learning-based lensless microscopic imaging device and method for dense samples
CN110836867A (zh) * 2019-10-18 2020-02-25 南京大学 A lensless holographic microscopic particle characterization method based on a convolutional neural network

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782288A (zh) * 2022-06-22 2022-07-22 深圳市润之汇实业有限公司 Image-based lens production process supervision method, apparatus, device, and medium

Also Published As

Publication number Publication date
CN110836867A (zh) 2020-02-25

Similar Documents

Publication Publication Date Title
WO2021073335A1 (zh) A lensless holographic microscopic particle characterization method based on a convolutional neural network
Beyerer et al. Machine vision: Automated visual inspection: Theory, practice and applications
JP3822242B2 (ja) Method and apparatus for assessing slide and sample preparation quality
Horstmeyer et al. Convolutional neural networks that teach microscopes how to image
Sarder et al. Deconvolution methods for 3-D fluorescence microscopy images
CN113960908B (zh) Holographic method for characterizing particles in a sample
Biggs 3D deconvolution microscopy
CN108254295B (zh) Method and device for locating and characterizing spherical particles
US20180080760A1 (en) Method for analysing particles
JP6594294B2 (ja) Image quality evaluation of microscope images
US20210279858A1 (en) Material testing of optical test pieces
GB2569751A (en) Static infrared thermal image processing-based underground pipe leakage detection method
Li et al. Automated discrimination between digs and dust particles on optical surfaces with dark-field scattering microscopy
CN113252568A Machine-vision-based lens surface defect detection method, system, product, and terminal
AU2020207942B2 (en) Printed coverslip and slide for identifying reference focal plane for light microscopy
CN108562541A (zh) Matrix-factorization-based speckle noise removal method and device for lensless holographic microscopy
CN109596530A (zh) 光学表面麻点和灰尘缺陷分类的暗场偏振成像装置和方法
Jemec et al. 2D sub-pixel point spread function measurement using a virtual point-like source
WO2019043458A2 (en) SUPER-RESOLUTION METROLOGY METHODS BASED ON SINGULAR DISTRIBUTIONS AND DEEP LEARNING
CN117422699A Road detection method, device, computer equipment, and storage medium
Yang et al. Surface defects evaluation system based on electromagnetic model simulation and inverse-recognition calibration method
Oswald-Tranta et al. Thermographic crack detection and failure classification
CN112798504A Large-field-of-view high-throughput flow cytometry analysis system and method
CN110044932B Method for detecting surface and internal defects of curved glass
Han Crack detection of UAV concrete surface images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20877044

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20877044

Country of ref document: EP

Kind code of ref document: A1