CN113438413B - Automatic focusing method of visible component analyzer - Google Patents

Automatic focusing method of visible component analyzer

Info

Publication number
CN113438413B
CN113438413B (application CN202110583364.7A / CN202110583364A)
Authority
CN
China
Prior art keywords
focusing
neural network
deep neural
standard particle
flow cell
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110583364.7A
Other languages
Chinese (zh)
Other versions
CN113438413A (en)
Inventor
王巧龙
赵文军
赵学魁
陈海龙
姜云龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Medicside Medical Technology Co ltd
Original Assignee
Changchun Medicside Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Medicside Medical Technology Co ltd filed Critical Changchun Medicside Medical Technology Co ltd
Priority to CN202110583364.7A priority Critical patent/CN113438413B/en
Publication of CN113438413A publication Critical patent/CN113438413A/en
Application granted granted Critical
Publication of CN113438413B publication Critical patent/CN113438413B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Abstract

The invention discloses an automatic focusing method of a visible component analyzer. Step 1: a focusing motor of the automatic focusing device drives the planar flow cell, and the focusing liquid flows through the flow cell. Step 2: with the focusing liquid flowing through the flow cell of step 1, the flow cell is moved forward from the initial position in equal steps through n positions, and m images are captured at each position. Step 3: each of the m × n images of step 2 is segmented to obtain standard particle images. Step 4: a deep neural network model is constructed based on the standard particle images of step 3. Step 5: the deep neural network model of step 4 is trained to obtain the network parameters. Step 6: linear regression is performed on the outputs of the trained deep neural network model of step 5, and the intersection of the regression line with the horizontal axis is the focus position. The method solves the problem of inaccurate focus position in the traditional evaluation-function method.

Description

Automatic focusing method of visible component analyzer
Technical Field
The invention belongs to the field of automatic focusing, and in particular relates to an automatic focusing method of a visible component analyzer.
Background
Traditional image-based automatic focusing generally adopts a focusing evaluation function method: images are captured at different positions, the focusing evaluation function value of the image at each position is calculated, and the position at which the evaluation function reaches its maximum is taken as the focus position. Because the focusing evaluation function may have multiple peaks, the obtained focus position is easily inaccurate.
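For context, a minimal sketch of such an evaluation-function search is given below in Python (the patent does not prescribe any language); the variance-of-gradient score and the helper names are illustrative assumptions, not part of the patent.

```python
import numpy as np

def focus_score(image: np.ndarray) -> float:
    """A common focusing evaluation function: mean squared gradient.
    Larger values indicate a sharper image (illustrative choice only)."""
    gy, gx = np.gradient(image.astype(np.float64))
    return float(np.mean(gx ** 2 + gy ** 2))

def classic_autofocus(images_by_position: dict) -> int:
    """Traditional peak search: return the motor position whose image
    maximizes the evaluation function. If the score curve has several
    peaks, this can return the wrong position, which is the weakness
    the patent addresses."""
    return max(images_by_position, key=lambda p: focus_score(images_by_position[p]))
```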
Disclosure of Invention
The invention provides an automatic focusing method of a visible component analyzer, which is used for solving the problem of inaccurate focusing position in the traditional evaluation function method.
The invention is realized by the following technical scheme:
an autofocus method for a tangible ingredient analyzer, the autofocus method comprising the steps of:
Step 1: a focusing drive motor 5 of the automatic focusing device drives the planar flow cell 2, and the focusing liquid flows through the planar flow cell 2;
Step 2: with the focusing liquid flowing through the planar flow cell 2 of step 1, the flow cell is moved forward from the initial position in equal steps through n positions, and m images are captured at each position;
Step 3: each of the m × n images of step 2 is segmented to obtain standard particle images;
Step 4: a deep neural network model is constructed based on the standard particle images of step 3;
Step 5: the deep neural network model of step 4 is trained to obtain the network parameters;
Step 6: linear regression is performed on the outputs of the trained deep neural network model of step 5, and the intersection of the regression line with the horizontal axis is the focus position.
Further, the automatic focusing device comprises a light source 1, a planar flow cell 2, an objective lens 3, a camera 4 and a focusing drive motor 5; the objective lens 3 is mounted on the lens of the camera 4, the lens of the camera 4 and the light source 1 are arranged on the two sides of the planar flow cell 2, the lens of the camera 4 is aimed at the planar flow cell 2 for imaging, and the focusing drive motor 5 drives the planar flow cell 2 to move.
Further, the focusing liquid of step 1 contains standard particles at a concentration of 1000 ± 50 per microliter.
Further, the starting position of step 2 is the position of the planar flow cell 2 at which the focusing drive motor 5 starts driving.
Further, step 3 specifically consists of cropping the captured images so as to retain the standard particles, thereby obtaining the standard particle images.
Further, the training of the deep neural network model in step 5 specifically comprises the following steps:
Step 5.1: a numerical value in the range [-q, q] is assigned to the standard particles at each position as their label, thereby constructing a training set;
Step 5.2: each sample of the training set is input into the deep neural network model and propagated from the bottom up to the output of the fully-connected layer, thereby completing the training of the deep neural network and obtaining the parameters of the deep neural network.
Further, the deep neural network model of step 5.2 consists, from bottom to top, of an input layer, a plurality of convolutional layers, pooling layers and a fully-connected layer.
Further, the linear regression of step 6 specifically comprises the following steps:
Step 6.1: the images captured at each position are segmented to obtain m_i standard particle images;
Step 6.2: the trained deep neural network model is applied to the standard particles at each position to obtain the actual output value of each particle;
Step 6.3: a rectangular coordinate system is constructed with the position of the standard particle as the abscissa x and the actual output value of the standard particle as the ordinate y, the initial position of the standard particle being the coordinate origin, to obtain a data point set (x_i, y_i);
Step 6.4: linear regression is performed on the data point set to obtain a regression line;
Step 6.5: the position of the intersection of the regression line with the abscissa axis x is the focal position of the autofocus.
Further, the linear regression formula of step 6.4 is y = b_0 + b_1·x,
wherein
b_1 = Σ_{i=1..n} (x_i − x̄)(y_i − ȳ) / Σ_{i=1..n} (x_i − x̄)²
b_0 = ȳ − b_1·x̄
x̄ = (1/n) Σ_{i=1..n} x_i
ȳ = (1/n) Σ_{i=1..n} y_i
where b_0 represents the intercept of the regression line, b_1 represents the slope of the regression line, x_i is the abscissa of the i-th standard particle, y_i is the ordinate of the i-th standard particle, n is the number of standard particles, x̄ is the mean of the abscissas of the standard particles, and ȳ is the mean of the ordinates of the standard particles.
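As a hedged illustration only, the closed-form expressions above translate directly into the following Python/NumPy sketch (the patent specifies no implementation language); the function names are assumptions.

```python
import numpy as np

def fit_line(x: np.ndarray, y: np.ndarray) -> tuple:
    """Least-squares fit of y = b0 + b1*x using the closed-form formulas
    above: b1 from the centered cross/auto sums, b0 from the means."""
    x_bar, y_bar = x.mean(), y.mean()
    b1 = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
    b0 = y_bar - b1 * x_bar
    return b0, b1

def focus_from_line(b0: float, b1: float) -> float:
    """Focus position = intersection of the regression line with the
    horizontal axis, i.e. the x for which b0 + b1*x = 0."""
    return -b0 / b1
```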
The beneficial effects of the invention are as follows:
The invention improves the focusing precision.
The calculation of the invention is simple and easy to implement.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 shows standard particle images at different positions according to the invention, reflecting the difference in morphology of the standard particles at different positions during focusing.
FIG. 3 is an exemplary graph of a fitted line of the present invention.
Fig. 4 is a schematic view of an autofocus apparatus of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
An automatic focusing method for a visible component analyzer, comprising the following steps:
Step 1: a focusing drive motor 5 of the automatic focusing device drives the planar flow cell 2, and the focusing liquid flows through the planar flow cell 2;
Step 2: with the focusing liquid flowing through the planar flow cell 2 of step 1, the flow cell is moved forward from the initial position in equal steps through n positions, and m images are captured at each position;
Step 3: each of the m × n images of step 2 is segmented to obtain standard particle images;
Step 4: a deep neural network model is constructed based on the standard particle images of step 3;
Step 5: the deep neural network model of step 4 is trained to obtain the network parameters;
Step 6: linear regression is performed on the outputs of the trained deep neural network model of step 5, and the intersection of the regression line with the horizontal axis is the focus position.
Further, as shown in FIG. 4, the automatic focusing device comprises a light source 1, a planar flow cell 2, an objective lens 3, a camera 4 and a focusing drive motor 5; the objective lens 3 is mounted on the lens of the camera 4, the lens of the camera 4 and the light source 1 are arranged on the two sides of the planar flow cell 2, the lens of the camera 4 is aimed at the planar flow cell 2 for imaging, and the focusing drive motor 5 drives the planar flow cell 2 to move.
Further, the focusing liquid of step 1 contains standard particles at a concentration of 1000 ± 50 per microliter.
Further, the starting position of step 2 is the position of the planar flow cell 2 at which the focusing drive motor 5 starts driving.
Further, step 3 specifically consists of cropping the captured images so as to retain the standard particles, thereby obtaining the standard particle images.
Further, the training of the deep neural network model in step 5 specifically comprises the following steps:
Step 5.1: a numerical value in the range [-q, q] is assigned to the standard particles at each position as their label, thereby constructing a training set; the label corresponding to a standard particle at the focus is 0, and the deep neural network is trained with the training set to obtain the parameters of the deep neural network;
Step 5.2: each sample of the training set is input into the deep neural network model and propagated from the bottom up to the output of the fully-connected layer, thereby completing the training of the deep neural network and obtaining the parameters of the deep neural network.
Starting from the initial position, an image is captured at each position with a high-speed camera, and standard particle images are obtained by image segmentation. A rectangular coordinate system is constructed with the position of a standard particle as the abscissa x and its label as the ordinate y, the initial position being the origin of the coordinate system. The trained deep neural network is applied to the standard particle images to obtain a data point set (x_i, y_i), and linear regression is performed on this data point set.
Further, the deep neural network model of step 5.2 consists, from bottom to top, of an input layer, a plurality of convolutional layers, pooling layers and a fully-connected layer.
Further, the linear regression of step 6 specifically comprises the following steps:
Step 6.1: the images captured at each position are segmented to obtain m_i standard particle images;
Step 6.2: the trained deep neural network model is applied to the standard particles at each position to obtain the actual output value of each particle;
Step 6.3: a rectangular coordinate system is constructed with the position of the standard particle as the abscissa x and the actual output value of the standard particle as the ordinate y, the initial position of the standard particle being the coordinate origin, to obtain a data point set (x_i, y_i);
Step 6.4: linear regression is performed on the data point set to obtain a regression line;
Step 6.5: the position of the intersection of the regression line with the abscissa axis x is the focal position of the autofocus.
Further, the linear regression formula of step 6.4 is y = b_0 + b_1·x,
wherein
b_1 = Σ_{i=1..n} (x_i − x̄)(y_i − ȳ) / Σ_{i=1..n} (x_i − x̄)²
b_0 = ȳ − b_1·x̄
x̄ = (1/n) Σ_{i=1..n} x_i
ȳ = (1/n) Σ_{i=1..n} y_i
where b_0 represents the intercept of the regression line, b_1 represents the slope of the regression line, x_i is the abscissa of the i-th standard particle, y_i is the ordinate of the i-th standard particle, n is the number of standard particles, x̄ is the mean of the abscissas of the standard particles, and ȳ is the mean of the ordinates of the standard particles.
Example 2
Step S1: starting from the starting point, advancing by 500 steps with the step length of 1 micron, and shooting m-2 images at each position;
Step S2: a fixed threshold T is taken, and the images are segmented by threshold segmentation to obtain standard particle images whose length and width are 40 pixels; m_i standard particle images are obtained at each position, with m_i ≥ 0;
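A minimal sketch of this threshold segmentation is given below, assuming OpenCV, bright particles on a dark background and a centroid-centered 40×40 crop; the threshold value and the helper names are assumptions, not taken from the patent.

```python
import cv2
import numpy as np

T = 128            # fixed threshold; the patent leaves its value open
CROP = 40          # side length of a standard particle image, in pixels

def segment_particles(gray: np.ndarray) -> list:
    """Threshold one captured frame and cut a 40x40 patch around the
    centroid of every connected component; returns m_i >= 0 patches."""
    _, binary = cv2.threshold(gray, T, 255, cv2.THRESH_BINARY)
    count, _, _, centroids = cv2.connectedComponentsWithStats(binary)
    half = CROP // 2
    patches = []
    for i in range(1, count):                 # label 0 is the background
        cx, cy = centroids[i].astype(int)
        y0, x0 = cy - half, cx - half
        if 0 <= y0 and 0 <= x0 and y0 + CROP <= gray.shape[0] and x0 + CROP <= gray.shape[1]:
            patches.append(gray[y0:y0 + CROP, x0:x0 + CROP])
    return patches
```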
Step S3: labels in the range [-1.0, 1.0] are assigned to the standard particle images: the label value of a standard particle at the focus position is 0, the label values of standard particles to the left of the focus decrease in turn and those to the right of the focus increase in turn, and the increasing or decreasing trend is linear; a training set is thereby constructed;
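A sketch of this linear label assignment follows; the reference focus position used to centre the labels must be known for the training run (for example determined manually), and the `half_range` normalization constant is an assumption not given in the patent.

```python
def particle_label(position_steps: int, focus_steps: int, half_range: float = 250.0) -> float:
    """Map a motor position to a label in [-1.0, 1.0]: 0 at the focus,
    decreasing linearly to the left and increasing linearly to the right,
    clipped at the interval ends."""
    value = (position_steps - focus_steps) / half_range
    return max(-1.0, min(1.0, value))
```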
Step S4: the deep neural network consists of a plurality of convolutional layers, pooling layers and fully-connected layers, the last fully-connected layer has only one output node, and the input of the neural network is a 40×40 standard particle image, as shown in FIG. 2;
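One way to realize such a network is sketched below with PyTorch; the number of layers, channel counts and activation functions are assumptions, since the patent only fixes the 40×40 input and the single output node.

```python
import torch
import torch.nn as nn

class FocusNet(nn.Module):
    """Convolutional layers + pooling layers + fully-connected layers,
    ending in a single output node, for 40x40 grayscale particle images."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 40 -> 20
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 20 -> 10
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 10 -> 5
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 5 * 5, 128), nn.ReLU(),
            nn.Linear(128, 1),                       # single output node
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 40, 40) particle patches
        return self.head(self.features(x)).squeeze(1)
```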
Step S5: the error function of the deep neural network is the mean square error function; training is carried out by gradient descent, and the parameters of the deep neural network are obtained;
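The training of this step could then look like the following sketch, reusing the hypothetical FocusNet above (mean-squared-error loss, plain mini-batch gradient descent); batch size, learning rate and epoch count are assumptions.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train_focus_net(patches: torch.Tensor, labels: torch.Tensor,
                    epochs: int = 50, lr: float = 1e-3) -> "FocusNet":
    """patches: (N, 1, 40, 40) float tensor, labels: (N,) float tensor
    in [-1, 1]. Returns the trained network."""
    model = FocusNet()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    loader = DataLoader(TensorDataset(patches, labels), batch_size=64, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```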
Step S6: the trained deep neural network is used for focusing of the visible component analyzer. Images are captured from the starting position, the standard particle images at each position are fed into the trained deep neural network to obtain actual output values, and each particle corresponds to a point in the rectangular coordinate system, with the position of the standard particle as the abscissa and the output value of the deep neural network as the ordinate. All standard particles captured during focusing form a set (x_i, y_i); linear regression is performed on this set to obtain a regression line, and the intersection of the regression line with the horizontal axis is the focal position of the current focusing. As shown in FIG. 3, the fitted-line formula of the current focusing is:
y=0.0037x-1.096
Setting y = 0 gives x = 1.096/0.0037 ≈ 296, so the focal position of this focusing run is 296.
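Step S6 can be sketched end to end as below, again reusing the hypothetical FocusNet and segment_particles helpers from the earlier sketches; with the fitted line y = 0.0037x − 1.096 it returns roughly 296.

```python
import numpy as np
import torch

@torch.no_grad()
def estimate_focus(model, frames_by_position: dict) -> float:
    """frames_by_position maps a motor position (in steps) to the list of
    grayscale frames captured there. Every segmented particle contributes
    one (position, network output) point; the focal position is the
    x-intercept of the fitted regression line."""
    xs, ys = [], []
    for pos, frames in frames_by_position.items():
        for frame in frames:
            for patch in segment_particles(frame):
                t = torch.from_numpy(patch).float().div(255.0).view(1, 1, 40, 40)
                xs.append(float(pos))
                ys.append(model(t).item())
    x, y = np.asarray(xs), np.asarray(ys)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    return -b0 / b1          # e.g. 1.096 / 0.0037 ≈ 296
```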

Claims (8)

1. An automatic focusing method of a visible component analyzer, the automatic focusing method comprising the following steps:
Step 1: a focusing drive motor (5) of the automatic focusing device drives the planar flow cell (2), and the focusing liquid flows through the planar flow cell (2);
Step 2: with the focusing liquid flowing through the planar flow cell (2) of step 1, the flow cell is moved forward from the initial position in equal steps through k positions, and m images are captured at each position;
Step 3: each of the m × k images of step 2 is segmented to obtain standard particle images;
Step 4: a deep neural network model is constructed based on the standard particle images of step 3;
Step 5: the deep neural network model of step 4 is trained to obtain the network parameters;
Step 6: linear regression is performed on the outputs of the trained deep neural network model of step 5, and the intersection of the regression line with the horizontal axis is the focus position;
the linear regression of step 6 specifically comprises the following steps:
Step 6.1: the images captured at each position are segmented to obtain a_j standard particle images;
Step 6.2: the trained deep neural network model is applied to the standard particles at each position to obtain the actual output value of each particle;
Step 6.3: a rectangular coordinate system is constructed with the position of the standard particle as the abscissa x and the actual output value of the standard particle as the ordinate y, the initial position of the standard particle being the coordinate origin, to obtain a data point set (x_i, y_i);
Step 6.4: linear regression is performed on the data point set to obtain a regression line;
Step 6.5: the position of the intersection of the regression line with the abscissa axis x is the focal position of the autofocus.
2. The automatic focusing method of a visible component analyzer according to claim 1, wherein the automatic focusing device in step 1 comprises a light source (1), a planar flow cell (2), an objective lens (3), a camera (4) and a focusing drive motor (5); the objective lens (3) is mounted on the lens of the camera (4), the lens of the camera (4) and the light source (1) are arranged on the two sides of the planar flow cell (2), the lens of the camera (4) is aimed at the planar flow cell (2) for imaging, and the focusing drive motor (5) drives the planar flow cell (2) to move.
3. The automatic focusing method of a visible component analyzer according to claim 1, wherein the focusing liquid of step 1 contains standard particles at a concentration of 1000 ± 50 per microliter.
4. The automatic focusing method of a visible component analyzer according to claim 1, wherein the starting position of step 2 is the position of the planar flow cell (2) at which the focusing drive motor (5) starts driving.
5. The automatic focusing method of a visible component analyzer according to claim 1, wherein step 3 specifically consists of cropping the captured images so as to retain the standard particles, thereby obtaining the standard particle images.
6. The automatic focusing method of a visible component analyzer according to claim 1, wherein the training of the deep neural network model of step 5 comprises the following steps:
Step 5.1: a numerical value in the range [-q, q] is assigned to the standard particles at each position as their label, thereby constructing a training set;
Step 5.2: each sample of the training set is input into the deep neural network model and propagated from the bottom up to the output of the fully-connected layer, thereby completing the training of the deep neural network and obtaining the parameters of the deep neural network.
7. The automatic focusing method of a visible component analyzer according to claim 6, wherein the deep neural network model of step 5.2 consists, from bottom to top, of an input layer, a plurality of convolutional layers, pooling layers and a fully-connected layer.
8. The automatic focusing method of a visible component analyzer according to claim 1, wherein the linear regression formula of step 6.4 is y = b_0 + b_1·x,
wherein
b_1 = Σ_{i=1..n} (x_i − x̄)(y_i − ȳ) / Σ_{i=1..n} (x_i − x̄)²
b_0 = ȳ − b_1·x̄
x̄ = (1/n) Σ_{i=1..n} x_i
ȳ = (1/n) Σ_{i=1..n} y_i
where b_0 represents the intercept of the regression line, b_1 represents the slope of the regression line, x_i is the abscissa of the i-th standard particle, y_i is the ordinate of the i-th standard particle, n is the number of standard particles, x̄ is the mean of the abscissas of the standard particles, and ȳ is the mean of the ordinates of the standard particles.
CN202110583364.7A 2021-05-27 2021-05-27 Automatic focusing method of visible component analyzer Active CN113438413B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110583364.7A CN113438413B (en) 2021-05-27 2021-05-27 Automatic focusing method of visible component analyzer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110583364.7A CN113438413B (en) 2021-05-27 2021-05-27 Automatic focusing method of visible component analyzer

Publications (2)

Publication Number Publication Date
CN113438413A CN113438413A (en) 2021-09-24
CN113438413B true CN113438413B (en) 2022-04-12

Family

ID=77802956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110583364.7A Active CN113438413B (en) 2021-05-27 2021-05-27 Automatic focusing method of visible component analyzer

Country Status (1)

Country Link
CN (1) CN113438413B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102062929A (en) * 2010-11-27 2011-05-18 长春迪瑞医疗科技股份有限公司 Automatic focusing method and device for microscope system
CN105652429A (en) * 2016-03-22 2016-06-08 哈尔滨理工大学 Automatic focusing method for microscope cell glass slide scanning based on machine learning
US10341551B1 (en) * 2018-05-21 2019-07-02 Grundium Oy Method, an apparatus and a computer program product for focusing
CN111551117A (en) * 2020-04-29 2020-08-18 湖南国科智瞳科技有限公司 Method and system for measuring focus drift distance of microscopic image and computer equipment
CN112135048A (en) * 2020-09-23 2020-12-25 创新奇智(西安)科技有限公司 Automatic focusing method and device for target object

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101702053B (en) * 2009-11-13 2012-01-25 长春迪瑞实业有限公司 Method for automatically focusing microscope system in urinary sediment examination equipment
JP2014216213A (en) * 2013-04-26 2014-11-17 株式会社日立ハイテクノロジーズ Charged particle microscope device and method for acquiring image by charged particle microscope device
DE102018219867B4 (en) * 2018-11-20 2020-10-29 Leica Microsystems Cms Gmbh Learning autofocus
WO2020188584A1 (en) * 2019-03-21 2020-09-24 Sigtuple Technologies Private Limited Method and system for auto focusing a microscopic imaging system
JPWO2021090574A1 (en) * 2019-11-06 2021-05-14


Also Published As

Publication number Publication date
CN113438413A (en) 2021-09-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant