CN114697548B - Microscopic image shooting focusing method and device - Google Patents


Publication number
CN114697548B
Authority
CN
China
Prior art keywords
image
feature
classification
category
characteristic
Prior art date
Legal status
Active
Application number
CN202210279773.2A
Other languages
Chinese (zh)
Other versions
CN114697548A (en)
Inventor
谢浩
张雅俊
熊飞
强军奇
王雅洁
Current Assignee
Maccura Medical Electronics Co Ltd
Original Assignee
Maccura Medical Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Maccura Medical Electronics Co Ltd filed Critical Maccura Medical Electronics Co Ltd
Priority claimed from application CN202210279773.2A
Publication of CN114697548A
Application granted
Publication of CN114697548B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/675 Focus control based on electronic image sensor signals comprising setting of focusing regions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques

Abstract

The application relates to a microscopic image shooting focusing method and device. A shooting target is shot at least two imaging positions to obtain an original image corresponding to each imaging position; a characteristic image is then segmented from the original image corresponding to each imaging position; each characteristic image is classified to obtain classification data corresponding to each characteristic image; and a focusing definition value corresponding to each imaging position is obtained according to the preset weight value corresponding to each classification category and the classification data corresponding to each characteristic image, the imaging position with the largest focusing definition value being determined as the target focusing position. Compared with the prior art, the microscopic image shooting focusing method adopted by the application is simple and flexible, can rapidly and accurately determine the optimal focusing position, and avoids inaccurate judgment of the optimal focusing position caused by fitting a multimodal definition curve.

Description

Microscopic image shooting focusing method and device
Technical Field
The application relates to the field of image processing, in particular to a microscopic image shooting focusing method and device.
Background
When a microscopic image of a flowing liquid containing particles is taken, focusing is usually performed by continuously photographing the laminar-flow plane of the liquid while a focusing motor steps, calculating the image definition of each image, and then stepping the focusing motor to the position with the best definition. In the prior art, image definition is generally calculated with first-order and second-order gradient operators, which makes it difficult to obtain a stable definition value for the same image; the fitted definition curve fluctuates under the influence of particle quantity and particle morphology and is generally multimodal, so the best focusing position is difficult to determine from the curve.
Disclosure of Invention
In view of the above, one of the technical problems to be solved by the embodiments of the present application is to provide a microscopic image photographing focusing method and device, which are used for solving the problem that it is difficult to determine the focusing position with the best definition in the prior art.
The first aspect of the embodiment of the application discloses a microscopic image shooting focusing method, which comprises the following steps: shooting a shooting target at least two imaging positions to obtain an original image corresponding to each imaging position, wherein the shooting target is flowing liquid comprising particles;
segmenting a characteristic image from the original image corresponding to each imaging position, wherein the characteristic image comprises an imaging area of particles in the flowing liquid;
classifying each characteristic image to obtain classification data corresponding to each characteristic image;
and obtaining a focusing definition value corresponding to each imaging position according to a preset weight value corresponding to each classification category and classification data corresponding to each characteristic image, and determining the imaging position with the largest focusing definition value as a target focusing position.
The second aspect of the embodiment of the application discloses a microscopic image shooting focusing device, which comprises: an acquisition module, configured to shoot a shooting target at least two imaging positions to obtain an original image corresponding to each imaging position, wherein the shooting target is flowing liquid comprising particles;
A segmentation module, configured to segment a feature image from the original image corresponding to each imaging position, where the feature image includes an imaging region of particles in the flowing liquid;
the classification module is used for classifying each characteristic image to obtain classification data corresponding to each characteristic image;
the determining module is used for obtaining a focusing definition value corresponding to each imaging position according to a preset weight value corresponding to each classification category and classification data corresponding to each characteristic image, and determining the imaging position with the largest focusing definition value as a target focusing position.
Compared with the prior art, the embodiment of the application firstly shoots a shooting target at least two imaging positions to obtain an original image corresponding to each imaging position; then, segmenting a characteristic image from the original image corresponding to each imaging position; classifying each characteristic image to obtain classification data corresponding to each characteristic image; and obtaining a focusing definition value corresponding to each imaging position according to the preset weight value corresponding to each classification category and the classification data corresponding to each characteristic image, and determining the imaging position with the largest focusing definition value as a target focusing position. The microscopic image shooting focusing method adopted by the embodiment of the application is simple and flexible, can rapidly and accurately determine the optimal focusing position, and avoids the condition that the optimal focusing position judgment is inaccurate due to fitting of a multimodal definition curve.
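The four steps summarized above can be sketched end to end as follows. This is only an illustrative outline of the disclosed flow; the function name `target_focus_position` and the `capture`, `segment`, and `classify` callables are hypothetical stand-ins, not interfaces from the patent, and a hard category label with a weight lookup stands in for the more general classification data described later.

```python
def target_focus_position(positions, capture, segment, classify, weights):
    """Sketch of the claimed pipeline: shoot originals at each imaging
    position, segment feature images, classify each one, score the
    position with the preset per-category weights, and return the
    position with the largest focusing definition value.

    Assumes every position yields at least one feature image.
    """
    best_pos, best_score = None, float("-inf")
    for pos in positions:
        originals = capture(pos)                              # step 1
        feats = [f for img in originals for f in segment(img)]  # step 2
        scores = [weights[classify(f)] for f in feats]          # step 3
        score = sum(scores) / len(scores)                       # step 4
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos
```

With stub callables this reduces to picking the position whose feature images fall in the highest-weighted categories.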
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a microscopic image photographing focusing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an original image acquisition method according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a microscopic image photographing focusing method disclosed in the second embodiment of the application;
FIG. 4 is a schematic diagram of a second embodiment of the present application for constructing a preset classification model;
FIG. 5 is a schematic diagram of determining a preset weight according to a Gaussian curve according to a second embodiment of the present application;
FIG. 6 is a schematic diagram of determining a preset weight according to a Gaussian curve according to a second embodiment of the present application;
fig. 7 is a schematic flow chart of a microscopic image photographing focusing method according to the third embodiment of the present application;
fig. 8 is a schematic flow chart of a microscopic image photographing focusing method according to a fourth embodiment of the present application;
FIG. 9 is a schematic diagram of a fitted focus sharpness curve disclosed in embodiment IV of the application;
fig. 10 is a schematic structural diagram of a microscopic image photographing focusing device according to a fifth embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that the terms "first," "second," "third," and "fourth," etc. in the description and claims of the present application are used for distinguishing between different objects and not for describing a particular sequential order. The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Referring to fig. 1, fig. 1 is a schematic flowchart of a microscopic image photographing focusing method according to an embodiment of the present application, where the method includes:
and step S101, shooting a shooting target at least two imaging positions to obtain an original image corresponding to each imaging position, wherein the shooting target is flowing liquid comprising particles.
In this embodiment, at least two imaging positions can be determined by adjusting the relative distance between the photographing apparatus and the photographing target, the relative distance being different at each imaging position. The manner of movement when adjusting this distance is not limited: the photographing apparatus may move toward the photographing target while the target is stationary; the target may move toward the apparatus while the apparatus is stationary; or both may move simultaneously.
In this embodiment, the particles are formed components that are carried by the liquid and flow with it, and may be cells or other solid sediment. For example, when microscopic images of particles in urine are taken, the particles may be the sediment formed after centrifuging the urine, and the flowing liquid may be the sheath liquid surrounding the sediment.
Alternatively, as shown in fig. 2, the photographing target may be located in an instrument flow cell employing the sheath flow technique: each particle is surrounded at its periphery by sheath liquid, and the particle stream may pass through the photographing region of the photographing apparatus in single-particle arrangement.
In this embodiment, the number of original images corresponding to each imaging position is not limited, and may be one or more, that is, the photographing apparatus may photograph and obtain one or more original images at each imaging position.
Step S102, segmenting a characteristic image from the original image corresponding to each imaging position, wherein the characteristic image comprises an imaging area of particles in flowing liquid.
In this embodiment, an image segmentation algorithm may be used to segment an original image corresponding to each imaging position, and segment the original image along the edges of the imaging region of the particle as much as possible, so as to obtain one or more feature images corresponding to each imaging position.
The type and number of the preset image segmentation algorithms are not limited, and may be one algorithm or a combination of algorithms. For example, it may be a conventional edge extraction algorithm and/or a deep learning algorithm.
If the particles in the flowing liquid are dispersed, it is preferable to segment the imaging region of each individual particle into its own characteristic image, i.e., to cut along the edge of each particle's imaging region as far as possible.
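The patent does not fix a segmentation algorithm, mentioning edge extraction and deep learning as options. As a minimal stand-in, the sketch below segments one bounding box per particle by thresholding a grayscale image and grouping foreground pixels into connected components; the function name, the threshold value, and the box representation are all illustrative assumptions.

```python
from collections import deque

def segment_feature_images(image, threshold=50):
    """Return one bounding box (top, left, bottom, right) per connected
    bright region in a grayscale image given as a list of pixel rows.

    A toy stand-in for step S102: each box would be cropped from the
    original image to form one feature image per particle.
    """
    h, w = len(image), len(image[0])
    mask = [[image[r][c] > threshold for c in range(w)] for r in range(h)]
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                # Breadth-first search over 4-connected foreground pixels
                q = deque([(r, c)])
                seen[r][c] = True
                top, left, bottom, right = r, c, r, c
                while q:
                    y, x = q.popleft()
                    top, bottom = min(top, y), max(bottom, y)
                    left, right = min(left, x), max(right, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes
```

A production system would more likely use library routines such as OpenCV contour extraction, but the grouping idea is the same.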
Step S103, classifying each characteristic image to obtain classification data corresponding to each characteristic image.
In this embodiment, all feature images may be classified according to one or more relevant attributes of each feature image. For example, an attribute for characterizing the size of an image, an attribute for characterizing the sharpness of an image, an attribute for characterizing the shape of an image, and the like are possible, and the present embodiment is not limited thereto.
Alternatively, since it is generally necessary to determine a position satisfying a certain sharpness condition as a target focusing position in the focusing process, each feature image may be classified at least according to an image sharpness value of each feature image, and classification data corresponding to each feature image may be obtained. Wherein the image sharpness value is used to characterize the sharpness of the image.
In this embodiment, the classification mode used for classifying the feature images is not limited, and the number of classified categories is not limited, and may be set according to the actual application requirements. For example, classification may be performed manually, classification may be performed by using a computer algorithm model, or classification may be performed by using a combination of manual and computer algorithm models.
In this embodiment, the classification data corresponding to each feature image is at least used to identify the classification result of each feature image, and the specific identification mode is not limited. For example, a specific classification category corresponding to each feature image may be identified by a single number or letter, or a confidence level that each feature image is classified to each classification category may be identified by vector data.
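The two identification modes described above can be shown concretely. The category labels and the five-category confidence vector below are invented example values, not data from the patent.

```python
# Two ways step S103's "classification data" can identify a result:
# a single category id, or a confidence vector over all categories.
single_label = "C3"                           # hard assignment to one category
confidence = [0.05, 0.10, 0.70, 0.10, 0.05]   # soft assignment over 5 categories

# A hard label is recoverable from the soft form as the arg-max category.
hard_from_soft = "C" + str(confidence.index(max(confidence)) + 1)
```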
Step S104, obtaining a focusing definition value corresponding to each imaging position according to a preset weight value corresponding to each classification category and classification data corresponding to each characteristic image, and determining the imaging position with the largest focusing definition value as a target focusing position.
In this embodiment, since the classification of the feature image is based on at least one type of attribute of the feature image, the correspondence between each classification category and the preset weight value can be determined according to the degree of influence of the type of attribute on the determination of the best focus position. The specific setting mode and the value taking mode of the preset weight value are not limited, and the preset weight value can be set according to actual application requirements.
For example, if each feature image is classified only according to its image definition value, feature images in different classification categories fall within different image definition threshold ranges. If the image captured at the target focusing position is required to have the highest definition, the preset weight value for each classification category can be determined in turn from its definition threshold range: the category containing the feature images with the highest image definition values is given the highest preset weight value, and the category containing the feature images with the lowest image definition values the lowest.
In this embodiment, since the classification data corresponding to each feature image and the preset weight value corresponding to each classification category are obtained, and each feature image has a corresponding imaging position, the focus definition value corresponding to each imaging position can be obtained by calculation according to the preset weight value corresponding to each classification category and the classification data corresponding to each feature image, so that the imaging position with the largest focus definition value is determined as the target focus position, and then the shooting device can be driven by the motor to step to the target focus position to perform microscopic image shooting on the shooting target.
Alternatively, since not only the number of the original images captured at each imaging position may be different, but also the number of the feature images segmented from each original image may be different, if the selected method for calculating the focus sharpness value is not suitable, the calculation accuracy of the focus sharpness values corresponding to different imaging positions may be different. Therefore, in order to improve the calculation accuracy of the focusing definition value corresponding to each imaging position, the focusing definition value corresponding to each imaging position can be obtained by using a preset statistical algorithm according to the preset weight value corresponding to each classification category and the classification data corresponding to each characteristic image.
For example, when calculating the focusing definition value corresponding to one imaging position, a definition intermediate value corresponding to each feature image can be calculated and obtained according to the classification data of each feature image corresponding to the imaging position and the preset weight value corresponding to each classification category; and then determining a value with statistical significance according to the intermediate value of the definition of all the characteristic images corresponding to the imaging position, and taking the value as a focusing definition value corresponding to the imaging position. The value with statistical significance can be one of average value, median value and quantile of the intermediate values of the definition of all the feature images corresponding to the imaging position.
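The aggregation just described can be sketched as follows. The function name and the use of a confidence vector per feature image are illustrative assumptions; per the text, the statistic may equally be a mean, median, or quantile.

```python
from statistics import median

def focus_definition_value(classifications, weights, stat="median"):
    """Score one imaging position (step S104).

    `classifications` holds one confidence vector per feature image
    segmented at this position; `weights` holds the preset weight per
    classification category. Each image's definition intermediate value
    is its weight-weighted confidence sum, and the position's focusing
    definition value is a statistic over those intermediate values.
    """
    intermediates = [
        sum(c * w for c, w in zip(conf, weights)) for conf in classifications
    ]
    if stat == "mean":
        return sum(intermediates) / len(intermediates)
    return median(intermediates)
```

The target focusing position is then simply the imaging position whose score is largest.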
As can be seen from the above embodiments of the present invention, in this embodiment, a shooting target is first shot at least two imaging positions, and an original image corresponding to each imaging position is obtained; then, segmenting a characteristic image from the original image corresponding to each imaging position; classifying each characteristic image to obtain classification data corresponding to each characteristic image; and obtaining a focusing definition value corresponding to each imaging position according to the preset weight value corresponding to each classification category and the classification data corresponding to each characteristic image, and determining the imaging position with the largest focusing definition value as a target focusing position. The microscopic image shooting focusing method adopted by the embodiment is simple and flexible, the optimal focusing position can be rapidly and accurately determined, and the condition that the optimal focusing position judgment is inaccurate due to fitting of a multimodal definition curve is avoided.
Example two
As shown in fig. 3, fig. 3 is a schematic flowchart of a microscopic image photographing focusing method according to a second embodiment of the present application, where the method includes:
in step S201, a shooting target is shot at least two imaging positions, and an original image corresponding to each imaging position is obtained, where the shooting target is a flowing liquid including particles.
In this embodiment, step S201 is substantially the same as or similar to step S101 in the first embodiment, and will not be described herein.
Step S202, segmenting a characteristic image from the original image corresponding to each imaging position, wherein the characteristic image comprises an imaging area of particles in flowing liquid.
In this embodiment, when the shooting target is located in an apparatus adopting the sheath flow technique, some particles may turn over while the liquid flows. In practical applications, focusing shooting is generally required only for particles that do not turn over, so to reduce the data processed in the subsequent classification step, only the imaging areas of normal particles may be segmented from the original image corresponding to each imaging position, regardless of whether abnormal particles are present in the flowing liquid, to obtain the feature image in each original image. Abnormal particles are those that turn over during the flow of the liquid; normal particles are those that do not.
Specifically, it may be preferable that step S202 includes: and dividing the imaging area of the normal particles from the original image corresponding to each imaging position to obtain a characteristic image in each original image.
Step S203, classifying each feature image according to the feature data of each feature image to obtain classification data corresponding to each feature image, wherein the feature data of each feature image is at least used for representing the image definition value and the image size value of each feature image.
In this embodiment, according to the image capturing principle, when the distance between the particle and the capturing device is gradually changed from small to large, in the captured image, the image sharpness value of the imaging area of the particle tends to change from small to large and then from large to small. Since the depth positions of the plurality of particles in the flowing liquid may be different, when the photographing apparatus photographs the photographing object, the relative distances between the different particles and the photographing apparatus may be different, so that the image sharpness values of the imaging regions of the different particles in the original image obtained by photographing may be also different.
Moreover, two particles at different distances from the photographing apparatus may nevertheless produce imaging areas with the same image definition value in the same original image, i.e., two feature images segmented from that image may share one definition value. If all feature images were classified by image definition value alone, the target focusing position finally determined might therefore not be the true best focusing position. To improve the accuracy of judging the best focusing position, each feature image may instead be classified according to both its image definition value and the distance between the corresponding particle and the photographing apparatus, yielding the classification data corresponding to each feature image.
According to the image capturing principle, for particles of the same size and shape viewed from the same relative direction, the particle closer to the photographing apparatus produces the larger imaging area in the captured image. The distance between each particle and the photographing apparatus can therefore be characterized by the image size value of each feature image, i.e., the size of the particle's imaging region in the original image.
Optionally, in order to improve the data processing speed and accuracy of classifying the feature images, a preset classification model may be used to classify each feature image, so as to obtain classification data corresponding to each feature image. Correspondingly, step S200 is further included before step S203: and constructing a preset classification model.
As shown in fig. 4, step S200 may specifically include sub-step S200 a-sub-step S200c:
sub-step S200a, a plurality of feature images are segmented from the sample image.
The sample image is an image obtained by photographing a sample target in advance by a photographing device, and may be one or more. But in order to increase the accuracy of the classification as many sample images as possible can be obtained. The sample target is a flowing liquid including particles, and may be the same as or different from the photographic target.
The method for segmenting the feature image from the sample image is substantially the same as or similar to the method for segmenting the feature image from the original image in step S202, and will not be described here.
Sub-step S200b, determining a classification category and feature data corresponding to each feature image in the sample image, and obtaining a sample dataset.
The determination method of the classification category and the feature data corresponding to each feature image divided in the sample image is not limited, and for example, a manual determination method can be adopted, classification can be performed by using a computer algorithm model, or a combination method of manual and computer algorithm models can be adopted.
The method of identification for each classification category in the sample data set is not limited, and letters and/or numbers may be used. For example, if the number of classification categories is n, the categories may be identified as C1, C2, …, Cn.
Sub-step S200c, constructing a preset classification model from the sample dataset.
The preset classification model is a machine learning model, and the characteristic data corresponding to all or part of the characteristic images in the sample data set can be used as input data, and the classification category corresponding to all or part of the characteristic images can be used as output data, so that the preset classification model is obtained through training.
The algorithm adopted by the machine learning model is not limited, and for example, the algorithm can be at least one of a decision tree algorithm, a support vector machine algorithm, a multi-layer perceptron algorithm, a convolutional neural network algorithm and the like.
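The training of sub-step S200c can be illustrated with a deliberately simple model. The nearest-centroid rule below is a stand-in chosen for brevity, not one of the algorithms named above (decision tree, SVM, MLP, CNN); the two-element (definition, size) feature tuple follows the feature data described in step S203, and all sample values are invented.

```python
def train_nearest_centroid(samples):
    """Train a toy classifier on a sample data set.

    `samples` maps a category id to a list of (definition, size) feature
    tuples. Returns a predict() function that assigns a new feature tuple
    to the category with the nearest class centroid.
    """
    centroids = {}
    for cat, feats in samples.items():
        n = len(feats)
        # Component-wise mean of the category's feature tuples
        centroids[cat] = tuple(sum(f[i] for f in feats) / n for i in range(2))

    def predict(feat):
        # Squared Euclidean distance to each centroid; pick the closest
        return min(
            centroids,
            key=lambda c: sum((feat[i] - centroids[c][i]) ** 2 for i in range(2)),
        )

    return predict
```

In practice the training/test/validation split described below would be applied to `samples` before fitting, and a stronger model would replace the centroid rule.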
Alternatively, in order to improve the accuracy of classifying the feature images by using the preset classification model, a training data set and a test data set may be extracted from the sample data set, and then the training data set is used to train to obtain the preset classification model, and the test data set is used to test and evaluate the accuracy of classification.
Optionally, in order to further improve the accuracy of classifying the feature images by using the preset classification model, a training data set, a test data set and a verification data set may be extracted from the sample data set first, then the training data set is used for training to obtain the preset classification model, and the test data set and the verification data set are used for evaluating and verifying the accuracy of classification.
Step S204, according to the preset weight value corresponding to each classification category and the classification data corresponding to each characteristic image, obtaining a focusing definition value corresponding to each imaging position, and determining the imaging position with the largest focusing definition value as a target focusing position.
In this embodiment, as described above, when the distance between a particle and the photographing apparatus changes gradually from small to large, the image definition value of the particle's imaging area in the captured image tends first to increase and then to decrease. Therefore, where both the relative distance between particle and apparatus and the image definition value of the imaging area are considered, the accuracy of determining the best focusing position can be improved by performing the following before step S204:
and determining a preset weight value corresponding to each classification category by utilizing a preset axisymmetric line according to the definition threshold range and the size threshold range corresponding to each classification category. On a two-dimensional coordinate plane, presetting a value increasing or decreasing direction of an axisymmetric line along one coordinate axis, wherein the value of the other coordinate axis meets the change trend from low to high to low; the preset weight value corresponding to the classification category with the highest definition threshold range is the highest.
The method comprises the steps of firstly sorting all classification categories according to the height of a size threshold range corresponding to each classification category, and sequentially arranging the classification categories along the value increasing or decreasing direction of one coordinate axis on a two-dimensional coordinate plane. The interval between the values of the adjacent classification categories on the coordinate axis is not limited, and may be the same, or may be set at intervals according to the width of the size threshold range.
And then setting the classification category with the highest definition threshold range on the vertex of the preset axisymmetric line, namely setting the highest value point of the other coordinate axis of the preset axisymmetric line on the two-dimensional coordinate plane. The preset weight value corresponding to the classification category with the highest definition threshold range is set to be the highest, and the highest image definition value of the characteristic images classified into the classification category is considered, so that the shooting effect is the best.
Alternatively, considering that, as the distance between the particle and the photographing device changes stepwise from small to large, the variation of the image sharpness value of the particle's imaging region closely resembles a Gaussian curve, the preset axisymmetric line may preferably be a preset Gaussian curve.
Specifically, the coordinate value of each classification category along the X axis of the two-dimensional coordinate plane containing the preset Gaussian curve may be determined according to the size threshold range corresponding to each category. Then, according to the sharpness threshold ranges, the category with the highest sharpness threshold range is placed on the vertex of the preset Gaussian curve and the positions of the remaining categories on the curve are determined, so that the preset weight value of each category is given by its corresponding Y-axis coordinate value on the curve.
For example, as shown in fig. 5, classification categories 1 to 11 may first be sorted according to the height of their size threshold ranges and arranged in order along the increasing direction of the X axis. Category 6, which has the highest sharpness threshold range, is then placed on the vertex of the preset Gaussian curve, where the Y-axis coordinate value is 1.0, i.e. its preset weight value is 1.0. The positions of the other categories on the curve follow from their X-axis coordinates, and their preset weight values are the corresponding Y-axis coordinate values. Thus, the weight values of the 11 classification categories are, in order, [0.4559, 0.6049, 0.7537, 0.8819, 0.9691, 1.0000, 0.9691, 0.8819, 0.7537, 0.6049, 0.4559].
In addition, since for different types of particles the same classification method may yield a different category with the highest sharpness threshold range, in practical applications that category may be determined according to the particle type, improving the applicability of the method.
For example, as shown in fig. 6, if the classification category with the highest sharpness threshold range is the 10th category, the 10th category may be placed on the vertex of the preset Gaussian curve, yielding the weight values [0.0785, 0.1339, 0.2145, 0.3227, 0.4559, 0.6049, 0.7537, 0.8819, 0.9691, 1.0000, 0.9691] for the 11 classification categories in order.
The preset Gaussian curve with standard deviation σ is given by the Gaussian density:

f(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²))

Optionally, for ease of calculation, when presetting the standard deviation σ of the Gaussian curve, the Gaussian mean may be taken as μ = 0 and the value range of the preset weight values fixed to [0, 1]. Since the vertex then satisfies f(0) = 1, the above formula gives 1/(σ√(2π)) = 1, so the standard deviation is σ = 1/√(2π) ≈ 0.3989.
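As an illustrative sketch (not code from the patent), the weight lists of Figs. 5 and 6 can be reproduced under two assumptions consistent with the listed values: the preset Gaussian curve is the Gaussian density with μ = 0 and f(0) = 1 (hence σ = 1/√(2π)), and adjacent classification categories are spaced 0.1 apart on the X axis.

```python
import math

SIGMA = 1.0 / math.sqrt(2.0 * math.pi)  # from f(0) = 1 with mu = 0

def gaussian_weight(x, mu=0.0, sigma=SIGMA):
    # Gaussian density f(x) = 1/(sigma*sqrt(2*pi)) * exp(-(x - mu)^2 / (2*sigma^2));
    # with sigma = 1/sqrt(2*pi) the prefactor equals 1, so f(0) = 1.
    prefactor = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return prefactor * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def category_weights(n_categories, vertex_category, spacing=0.1):
    # Category i sits at x = (i - vertex_category) * spacing, so the vertex
    # category lands on the curve's peak and receives weight 1.0.
    return [round(gaussian_weight((i - vertex_category) * spacing), 4)
            for i in range(1, n_categories + 1)]

print(category_weights(11, 6))   # Fig. 5: vertex at category 6
print(category_weights(11, 10))  # Fig. 6: vertex at category 10
```

With these assumptions the first call yields [0.4559, 0.6049, 0.7537, 0.8819, 0.9691, 1.0, 0.9691, 0.8819, 0.7537, 0.6049, 0.4559], matching Fig. 5; the spacing of 0.1 is an inference from the listed numbers, not a value stated in the text.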
Alternatively, since the vertex of the preset axisymmetric line is unique and, during focus photographing, only one sharpness-optimal focus position actually exists, the number of classification categories may be set to an odd number. For ease of computation and a good focusing effect, the number of classification categories may preferably be 15.
As can be seen from the above embodiments of the present invention: to reduce the amount of data processed in the subsequent classification step, only the imaging regions of normal particles may be segmented from the original image corresponding to each imaging position to obtain the feature images; to improve the accuracy of determining the sharpness-optimal focus position, each feature image is classified according to its image sharpness value and the distance between the particle and the photographing device to obtain its classification data; to improve the speed and accuracy of feature image classification, a preset machine-learning classification model may be used to classify the feature images; and to improve the accuracy of determining the sharpness-optimal focus position while remaining applicable to many different types of particles, the preset weight value corresponding to each classification category may be determined by means of a preset axisymmetric line according to the sharpness threshold range and size threshold range corresponding to each category.
Example III
As shown in fig. 7, fig. 7 is a schematic flowchart of a microscopic image photographing focusing method according to a third embodiment of the present application, the method includes:
In step S301, a shooting target is photographed at at least two imaging positions to obtain an original image corresponding to each imaging position, where the shooting target is a flowing liquid containing particles.
In this embodiment, step S301 is substantially the same as or similar to step S101 in the first embodiment, and will not be described herein.
Step S302, segmenting a feature image from the original image corresponding to each imaging position, wherein the feature image includes an imaging region of particles in the flowing liquid.
In this embodiment, the step S302 is substantially the same as or similar to the step S102 in the first embodiment or the step S202 in the second embodiment, and will not be described herein.
Step S303, classifying each feature image according to the feature data of each feature image to obtain confidence vector data corresponding to each feature image, wherein the feature data of each feature image is at least used for representing an image definition value and an image size value of each feature image, and the confidence vector data corresponding to each feature image is used for representing the confidence level of each feature image classified to each classification category.
In this embodiment, if each feature image is assigned to exactly one classification category, two feature images with very similar feature data may nevertheless be classified into different categories; when the target focus position is then determined in the subsequent step from the preset weight values of the categories, the determined position may not be the sharpness-optimal focus position. Therefore, to improve the accuracy of determining the sharpness-optimal focus position, confidence vector data may be used when classifying each feature image to characterize the confidence level with which the image belongs to each classification category, and the target focus position may then be determined from this confidence vector data.
Alternatively, for ease of calculation, the confidence vector data corresponding to each feature image may be normalized vector data, i.e. it may be represented as [p₁, p₂, …, pₙ] with p₁ + p₂ + … + pₙ = 1.
for example, if there are 11 classification categories in total, the confidence probabilities that a feature image is classified into each classification category are 1%, 4%, 5%, 50%, 20%, 5%, 3%, 2%, 3%, and 5%, respectively, the confidence vector data corresponding to the feature image may be represented as [0.01,0.04,0.05,0.5,0.2,0.05,0.03,0.02,0.02,0.03,0.05].
Step S304, according to the preset weight value corresponding to each classification category and the confidence vector data corresponding to each characteristic image, obtaining a focusing definition value corresponding to each imaging position, and determining the imaging position with the largest focusing definition value as a target focusing position.
In this embodiment, the setting method of the preset weight value corresponding to each classification category is substantially the same as or similar to that in step S104 in the first embodiment or step S204 in the second embodiment, and will not be described herein.
In this embodiment, when calculating the focus sharpness value corresponding to one imaging position, the preset weight values of the classification categories may be expressed as a vector [w₁, w₂, …, wₙ] and the confidence vector data corresponding to each feature image as [p₁, p₂, …, pₙ]. The sharpness intermediate value S corresponding to each feature image of the imaging position can then be calculated from its confidence vector data and the preset weight values by the following formula:

S = w₁p₁ + w₂p₂ + … + wₙpₙ
for example, if the weight vector corresponding to 11 classification categories is represented as [0.0785,0.1339,0.2145,0.3227,0.4559,0.6049,0.7537,0.8819,0.9691,1.0000,0.9691], and the confidence vector data corresponding to one feature image of one imaging position is represented as [0.01,0.04,0.05,0.5,0.2,0.05,0.03,0.02,0.02,0.03,0.05], the sharpness intermediate value corresponding to the feature image can be calculated according to the two vector data as 0.437727.
Further, a statistically representative value of the sharpness intermediate values of all feature images corresponding to the imaging position may be taken as the focus sharpness value of that imaging position; this value may be, for example, the mean, the median, or a quantile of those intermediate values.
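Putting the two steps together, the sharpness intermediate value is a dot product and the focus sharpness value is a statistic over those intermediate values. A minimal sketch, with the weight and confidence vectors taken from the example above (the median is one of the statistics the text permits; the function names are illustrative):

```python
import statistics

weights = [0.0785, 0.1339, 0.2145, 0.3227, 0.4559, 0.6049,
           0.7537, 0.8819, 0.9691, 1.0000, 0.9691]

def sharpness_intermediate(confidence):
    # S = w1*p1 + w2*p2 + ... + wn*pn: dot product of the category
    # weight vector with one feature image's confidence vector.
    return sum(w * p for w, p in zip(weights, confidence))

def focus_sharpness(confidence_vectors):
    # Focus sharpness value of one imaging position: a statistic (here
    # the median) over the intermediate values of all its feature images.
    return statistics.median(sharpness_intermediate(c) for c in confidence_vectors)

s = sharpness_intermediate([0.01, 0.04, 0.05, 0.5, 0.2, 0.05,
                            0.03, 0.02, 0.02, 0.03, 0.05])
print(round(s, 6))  # 0.437727, matching the worked example
```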
As can be seen from the above embodiments of the present application, confidence vector data is used to characterize the confidence level with which each feature image is classified into each classification category, and the preset weight values of the classification categories are likewise represented as a vector, so that the finally calculated target focus position is closer to the actual sharpness-optimal focus position during focusing, further improving the capturing effect of the microscopic image.
Example IV
As shown in fig. 8, fig. 8 is a schematic flowchart of a microscopic image photographing focusing method according to a fourth embodiment of the present application, the method includes:
In step S401, a shooting target is photographed at at least two imaging positions to obtain an original image corresponding to each imaging position, where the shooting target is a flowing liquid containing particles.
In this embodiment, the step S401 is substantially the same as or similar to the step S101 in the first embodiment, and will not be described herein.
Step S402, segmenting a feature image from the original image corresponding to each imaging position, where the feature image includes an imaging region of particles in the flowing liquid.
In this embodiment, the step S402 is substantially the same as or similar to the step S102 in the first embodiment or the step S202 in the second embodiment, and will not be described herein.
Step S403, classifying each feature image according to the feature data of each feature image to obtain feature vector data corresponding to each feature image, where the feature data of each feature image is at least used to represent an image sharpness value, an image size value, and a shape feature value of each feature image, and the feature vector data corresponding to each feature image is used to represent a confidence level that each feature image is classified into a category of each abnormal particle image and a category of each normal particle image.
In this embodiment, on the one hand, when the shooting target is located in an apparatus adopting the sheath-flow technique, an abnormal running state of the apparatus may cause some particles to flip over while the liquid flows, making the detection result of the apparatus inaccurate. On the other hand, a complete imaging region may not be segmentable for an individual particle because only part of the particle was captured in the original image (i.e. its imaging region lies at the edge of the original image), or because of image brightness or similar factors. If imaging regions from such abnormal cases were segmented and used as feature images for determining the target focus position in subsequent steps, the determination of the sharpness-optimal focus position would very likely be inaccurate. Therefore, in step S403, normal particle images must be distinguished from abnormal particle images.
Specifically, the feature images include both abnormal particle images and normal particle images. A normal particle image contains the complete imaging region of a particle that did not flip over during the flow of the liquid; an abnormal particle image contains either the imaging region of a particle that flipped over during the flow, or an incomplete imaging region of a particle that did not flip over.
Specifically, since the shapes of normal particle images are similar while the shapes of abnormal particle images can vary widely, when classifying each feature image to distinguish normal from abnormal particle images, the classification is performed not only according to the image sharpness value and image size value of each feature image but also according to its shape feature value, which characterizes the shape of the particle's imaging region.
Specifically, since in subsequent steps the target focus position is determined mainly from the feature vector data corresponding to the normal particle images, at least two categories of normal particle images and at least one category of abnormal particle images need to be preset; the specific numbers are not limited in this embodiment.
In this embodiment, each feature image may be classified according to its image sharpness value, image size value, and shape feature value; the confidence level with which it falls into each category of abnormal particle images and each category of normal particle images may be determined; and the result may be characterized by the feature vector data.
If the confidence level of a feature image for one classification category is higher, the feature image is more likely to belong to that category. Therefore, from the feature vector data corresponding to each feature image, it can be determined whether the image is a normal or an abnormal particle image, and the image can further be assigned to one of the classification categories.
For example, suppose there are 11 classification categories, of which the 11th is the category of abnormal particle images, and the feature vector data corresponding to one feature image is [0.01, 0.04, 0.05, 0.05, 0.2, 0.05, 0.03, 0.02, 0.02, 0.03, 0.5]. Since the value 0.5 of the 11th category is the largest, the feature image is most likely to belong to the abnormal-particle-image category, and it can be determined to be an abnormal particle image.
For another example, with the same 11 categories, if the feature vector data corresponding to one feature image is [0.01, 0.04, 0.5, 0.05, 0.2, 0.05, 0.03, 0.02, 0.02, 0.03, 0.05], the value 0.5 of the 3rd category is the largest, indicating that the feature image is most likely to belong to the 3rd category of normal particle images; it can therefore be determined to be a normal particle image and classified into that category.
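The decision in these two examples is an argmax over the feature vector. A minimal sketch, assuming (as in the examples, and purely for illustration) that the normal-particle categories come first and the abnormal-particle category is last:

```python
def classify_feature_vector(vec, n_normal=10):
    # vec: confidences for the n_normal normal-particle categories,
    # followed by the abnormal-particle categories.
    best = max(range(len(vec)), key=lambda i: vec[i])  # index of largest confidence
    is_abnormal = best >= n_normal
    return best + 1, is_abnormal  # 1-based category number, abnormal flag

print(classify_feature_vector(
    [0.01, 0.04, 0.05, 0.05, 0.2, 0.05, 0.03, 0.02, 0.02, 0.03, 0.5]))  # (11, True)
print(classify_feature_vector(
    [0.01, 0.04, 0.5, 0.05, 0.2, 0.05, 0.03, 0.02, 0.02, 0.03, 0.05]))  # (3, False)
```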
Alternatively, for ease of calculation, the feature vector data corresponding to each feature image may be normalized vector data.
For example, if there are n+1 classification categories in total, the 1st to nth being categories of normal particle images and the (n+1)th the category of abnormal particle images, the feature vector data corresponding to each feature image may be expressed as [p₁, p₂, …, pₙ, pₙ₊₁] with p₁ + p₂ + … + pₙ₊₁ = 1.
In this embodiment, when the shooting target is located in an apparatus adopting the sheath-flow technique, an abnormal running state of the apparatus may cause a certain number of particles to flip over while the liquid flows, and/or cause a noticeable depth gap between some particles and the majority of particles in the liquid, making the detection result of the apparatus inaccurate. Whether the apparatus is in a normal running state can therefore be judged from the numbers of flipped particle images and cross-gradient particle images; if the apparatus is in an abnormal state, an early-warning prompt is issued as a reminder to overhaul the instrument or adjust the relevant parameters of the flowing liquid. Here, a flipped particle image is the imaging region of a particle that flipped over during the flow of the liquid, and a cross-gradient particle image is a normal particle image that is not classified into the normal-particle-image category containing the largest number of images.
Specifically, the present embodiment may further include at least one of the following steps:
Step A: when the proportion of flipped particle images among the feature images segmented from at least one original image exceeds a first proportion threshold, issuing an early-warning prompt;
Step B: when the proportion of cross-gradient particle images among the feature images segmented from at least one original image exceeds a second proportion threshold, issuing an early-warning prompt.
If the proportion of flipped particle images exceeds the first proportion threshold, the thickness setting of the flow cell's inner cavity may be problematic, causing more particles to flip over during the flow of the liquid; the inner cavity thickness then needs to be adjusted.
If the proportion of cross-gradient particle images exceeds the second proportion threshold, the state of the sheath fluid may be problematic, causing a noticeable depth gap between some particles and the majority of particles in the flowing liquid; the relevant parameters of the sheath fluid then need to be adjusted.
The first and second proportion thresholds can be set according to actual application requirements.
Optionally, in practical applications the depth positions of some particles in the flowing liquid may differ from those of the majority of particles by only a small amount; this situation has little influence on the application and requires no early-warning prompt. Therefore, to avoid overly frequent warnings, step B may further include:
Sub-step B1: determining the level value corresponding to the category of each normal particle image according to the size threshold range corresponding to that category.
As described above, in an image captured by the photographing device, the closer a particle is to the device, the larger its imaging region, i.e. the larger the image size value of the feature image. Correspondingly, a higher size threshold range for a normal-particle category indicates that the particles of that category are closer to the photographing device, and a lower range indicates that they are farther away. Therefore, the categories of normal particle images can be sorted by their corresponding size threshold ranges, and a level value can be assigned to each category in that order.
The way the level values are identified is not limited; they may be numbers or letters. For example, the level values of the categories of 10 normal particle images may be labeled in order with the 10 integers 1 to 10, or with C₁ to C₁₀.
Sub-step B2: determining the level value corresponding to the classification category containing the largest number of normal particle images in each original image as the reference level value of that original image, and determining the level value corresponding to the classification category of each cross-gradient particle image in the original image as a cross-gradient level value of that original image.
Sub-step B3: when the sum of the absolute differences between the reference level value of at least one original image and each of its cross-gradient level values exceeds a preset cross-gradient threshold, determining the number of cross-gradient particle images corresponding to that original image.
The way the preset cross-gradient threshold is set is not limited and can be chosen flexibly according to actual application requirements.
For example, the level values of the categories of 10 normal particle images may be labeled in order with the 10 integers 1 to 10, and the preset cross-gradient threshold may be set to 5. If the largest share of the feature images segmented from an original image falls into the 6th category of normal particle images, the reference level value of that original image is determined to be 6.
If a small number of feature images segmented from the same original image are classified into the 1st and 5th categories of normal particle images, then, following sub-step B3, the sum of the absolute differences between the reference level value and each cross-gradient level value is |6 - 1| + |6 - 5| = 6. This exceeds the cross-gradient threshold of 5, so the total number of feature images classified into the 1st and 5th categories is determined as the number of cross-gradient particle images corresponding to the original image.
If instead a small number of feature images are classified into the 4th and 5th categories of normal particle images, the sum computed in sub-step B3 is |6 - 4| + |6 - 5| = 3, which does not exceed the cross-gradient threshold of 5. In that case the number of cross-gradient particle images for the original image need not be determined (or is determined to be 0), and no early-warning prompt is issued.
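Sub-steps B1 to B3 and the worked example can be sketched as follows; the majority-vote choice of the reference level and the default threshold of 5 follow the example, while the function and variable names are illustrative assumptions:

```python
from collections import Counter

def cross_gradient_count(category_levels, threshold=5):
    # category_levels: the level value (e.g. 1..10) of the normal-particle
    # category assigned to each feature image of one original image.
    counts = Counter(category_levels)
    reference = counts.most_common(1)[0][0]  # level of the majority category
    minority_levels = [lv for lv in counts if lv != reference]
    # Sub-step B3: sum |reference - level| over the minority categories.
    deviation = sum(abs(reference - lv) for lv in minority_levels)
    if deviation > threshold:
        # Every feature image in a minority category counts as cross-gradient.
        return sum(counts[lv] for lv in minority_levels)
    return 0  # below threshold: nothing counted, no warning

print(cross_gradient_count([6] * 20 + [1] * 2 + [5] * 3))  # |6-1|+|6-5| = 6 > 5, prints 5
print(cross_gradient_count([6] * 20 + [4] * 2 + [5] * 3))  # |6-4|+|6-5| = 3 <= 5, prints 0
```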
Step S404, obtaining confidence vector data corresponding to each normal particle image according to the feature vector data corresponding to each normal particle image, wherein the confidence vector data corresponding to each normal particle image is used for representing the confidence level of each normal particle image classified into the category of each normal particle image.
In this embodiment, the feature vector data corresponding to each normal particle image also characterizes its confidence levels for the abnormal-particle-image categories, whereas the subsequent step of determining the target focus position only needs the confidence levels with which each normal particle image is classified into the categories of normal particle images. The feature vector data corresponding to each normal particle image therefore needs to be processed to obtain its confidence vector data.
For example, if there are n+1 classification categories in total, the 1st to nth being categories of normal particle images and the (n+1)th the category of abnormal particle images, and the feature vector data corresponding to a feature image is expressed as [p₁, p₂, …, pₙ, pₙ₊₁], then pₙ₊₁ may be removed from the vector data, and [p₁, p₂, …, pₙ] used as the confidence vector data corresponding to that feature image.
Optionally, for ease of calculation, the confidence vector data corresponding to each feature image may be further processed into normalized vector data, i.e. represented as [p₁, p₂, …, pₙ] with p₁ + p₂ + … + pₙ = 1.
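A minimal sketch of this conversion, assuming (for illustration) a single abnormal-particle category occupying the last component of the feature vector; the renormalization implements the optional step above:

```python
def to_confidence_vector(feature_vec):
    # Drop the trailing abnormal-category confidence p_{n+1} ...
    normal = feature_vec[:-1]
    total = sum(normal)
    # ... then renormalize so the remaining components sum to 1 again.
    return [p / total for p in normal]

conf = to_confidence_vector(
    [0.01, 0.04, 0.5, 0.05, 0.2, 0.05, 0.03, 0.02, 0.02, 0.03, 0.05])
print(round(sum(conf), 6))  # 1.0
```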
step S405, obtaining a focusing definition value corresponding to each imaging position according to a preset weight value corresponding to each normal particle image category and confidence vector data corresponding to each normal particle image, and determining the imaging position with the largest focusing definition value as a target focusing position.
In this embodiment, the method for setting the preset weight value corresponding to each type of the normal particle image is substantially the same as or similar to that in step S104 in the first embodiment or step S204 in the second embodiment, and will not be described herein.
In this embodiment, the method for obtaining the focus definition value corresponding to each imaging position according to the preset weight value corresponding to the category of each normal particle image and the confidence vector data corresponding to each normal particle image is basically the same as or similar to that in step S304 in the third embodiment, and will not be described herein.
In this embodiment, if a focus sharpness curve is fitted in a two-dimensional coordinate system from the focus sharpness values and imaging-distance identification values corresponding to all imaging positions, the fitted curve may exhibit some fluctuation. The curve may therefore be filtered, and the imaging position with the largest focus sharpness value determined as the target focus position according to the filtered curve.
For example, using the focus sharpness values corresponding to the imaging positions and the stepper-motor step counts calculated in the preceding steps, the focus sharpness curve obtained after fitting and filtering is shown in fig. 9. After the step count corresponding to the point with the largest Y-axis coordinate on the curve is determined as the target focus position, the stepper motor can move the photographing device to the position corresponding to that step count to photograph the shooting target.
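The text does not specify the filter; as one hedged possibility, a simple moving-average filter followed by picking the position of the largest filtered value could look like this (the positions stand for stepper-motor step counts, and all names and the example data are illustrative):

```python
def target_focus_position(positions, sharpness, window=3):
    # Smooth the focus-sharpness curve with a moving average to damp
    # fluctuation, then return the position with the largest filtered value.
    half = window // 2
    filtered = []
    for i in range(len(sharpness)):
        lo, hi = max(0, i - half), min(len(sharpness), i + half + 1)
        filtered.append(sum(sharpness[lo:hi]) / (hi - lo))  # window shrinks at the edges
    best = max(range(len(filtered)), key=lambda i: filtered[i])
    return positions[best]

print(target_focus_position([100, 200, 300, 400, 500],
                            [0.20, 0.50, 0.90, 0.60, 0.30]))  # 300
```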
As can be seen, the embodiments of the present invention can distinguish abnormal particle images according to the image sharpness value, image size value, and shape feature value of each feature image, improving the accuracy of determining the sharpness-optimal focus position. Besides determining the target focus position, the numbers of flipped particle images and cross-gradient particle images can also be determined to judge whether the instrument is in a normal running state, and an early-warning prompt can be issued if it is not.
Example five
Fig. 10 is a schematic structural diagram of a microscopic image photographing focusing device according to a fifth embodiment of the present application, where the device includes:
an obtaining module 501, configured to photograph a shooting target at at least two imaging positions and obtain an original image corresponding to each imaging position, where the shooting target is a flowing liquid containing particles;
a segmentation module 502, configured to segment a feature image from an original image corresponding to each imaging location, where the feature image includes an imaging region of particles in a flowing liquid;
a classification module 503, configured to classify each feature image to obtain classification data corresponding to each feature image;
the determining module 504 is configured to obtain a focus definition value corresponding to each imaging position according to a preset weight value corresponding to each classification category and classification data corresponding to each feature image, and determine an imaging position with a maximum focus definition value as a target focus position.
In this embodiment, the classification module 503 includes:
the classification unit is used for classifying each characteristic image according to the characteristic data of each characteristic image to obtain classification data corresponding to each characteristic image;
Wherein the feature data of each feature image is used to characterize at least an image sharpness value and an image size value of each feature image.
In this embodiment, the microscopic image photographing focusing device further includes:
the preset weight value module is used for determining a preset weight value corresponding to each classification category by utilizing a preset axisymmetric line according to the definition threshold range and the size threshold range corresponding to each classification category;
on a two-dimensional coordinate plane, the preset axisymmetric line extends along the increasing or decreasing direction of one coordinate axis, while its value on the other coordinate axis follows a low-high-low trend; the classification category with the highest sharpness threshold range has the highest preset weight value.
In this embodiment, the preset axisymmetric line is a Gaussian curve.
In this embodiment, the classification data includes confidence vector data, where the confidence vector data corresponding to each feature image is used to characterize a confidence level of each feature image classified into each classification category.
In this embodiment, the determining module 504 includes:
and the determining unit is used for obtaining a focusing definition value corresponding to each imaging position according to the preset weight value corresponding to each classification category and the confidence vector data corresponding to each characteristic image.
In this embodiment, the feature image is an abnormal particle image or a normal particle image, where the normal particle image includes a complete imaging region of particles that do not flip during the flowing of the flowing liquid; the abnormal particle image comprises an imaging area of particles which are turned over during flowing of flowing liquid and an incomplete imaging area of particles which are not turned over during flowing of the flowing liquid; the classification category comprises a category of at least one abnormal particle image and a category of at least two normal particle images; the feature data of each feature image is also used for representing the shape feature value of each feature image;
correspondingly, the classification data comprises feature vector data, wherein the feature vector data corresponding to each feature image is used for representing the confidence level that each feature image is classified into the category of each abnormal particle image and the category of each normal particle image.
In this embodiment, the determining module 504 includes:
the confidence vector data obtaining unit is used for obtaining the confidence vector data corresponding to each normal particle image according to the feature vector data corresponding to each normal particle image, wherein the confidence vector data corresponding to each normal particle image is used for representing the confidence level of each normal particle image classified to the category of each normal particle image;
And the focusing definition value obtaining unit is used for obtaining the focusing definition value corresponding to each imaging position according to the preset weight value corresponding to the category of each normal particle image and the confidence vector data corresponding to each normal particle image.
In this embodiment, the confidence vector data is normalized vector data.
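One common way to obtain normalized confidence vectors — shown here purely as an assumption about what "normalized" means, since the patent does not specify the scheme — is L1 normalization so the entries sum to 1:

```python
def normalize(vec):
    """L1-normalize a confidence vector so its entries sum to 1.
    An all-zero vector is returned unchanged to avoid division by zero."""
    total = sum(vec)
    return [v / total for v in vec] if total else list(vec)

normalized = normalize([2.0, 1.0, 1.0])
```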
In this embodiment, the classification module 503 includes:
the classification unit is used for classifying each characteristic image by utilizing a preset classification model to obtain classification data corresponding to each characteristic image;
the microscopic image shooting focusing device further comprises:
a segmentation unit for segmenting a plurality of characteristic images from the sample image;
the sample data set obtaining unit is used for determining the classification category and the characteristic data corresponding to each characteristic image in the sample image and obtaining a sample data set;
the construction unit is used for constructing a preset classification model according to the sample data set.
In this embodiment, the microscopic image photographing focusing device further includes:
the early warning prompt module is used for giving an early warning prompt when the proportion of flipped particle images among the feature images segmented from at least one original image exceeds a first proportion threshold; wherein a flipped particle image is an imaging region of a particle that flips during the flowing of the flowing liquid;
The early warning prompt module is also used for giving an early warning prompt when the proportion of cross-gradient particle images among the feature images segmented from at least one original image exceeds a second proportion threshold; wherein a cross-gradient particle image is a normal particle image that is not classified into the category containing the largest number of normal particle images.
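A hedged sketch of the proportion check driving the early warning prompt; the label names and the threshold values are hypothetical stand-ins for the patent's first and second proportion thresholds:

```python
def should_warn(feature_labels, watched_label, threshold):
    """Warn when the proportion of feature images carrying `watched_label`
    (e.g. flipped or cross-gradient) among all feature images segmented
    from the original images exceeds `threshold`."""
    if not feature_labels:
        return False
    count = sum(1 for lbl in feature_labels if lbl == watched_label)
    return count / len(feature_labels) > threshold

labels = ["normal", "flipped", "flipped", "normal", "flipped"]
warn_flipped = should_warn(labels, "flipped", 0.5)  # 3/5 = 0.6 > 0.5
```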
In this embodiment, the early warning prompt module includes:
the level value determining unit is used for determining a level value corresponding to the category of each normal particle image according to the size threshold range corresponding to the category of each normal particle image;
the cross-gradient level value determining unit is used for determining a level value corresponding to a classification category with the largest number of normal particle images in each original image as a reference level value corresponding to each original image; determining a grade value corresponding to the classification category of at least one cross-gradient particle image in each original image as a cross-gradient grade value corresponding to each original image;
and the cross-gradient particle image quantity determining unit is used for determining the number of cross-gradient particle images corresponding to the at least one original image when the sum of the absolute differences between the reference level value corresponding to the at least one original image and each cross-gradient level value exceeds a preset cross-gradient threshold.
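The level-value comparison could plausibly be implemented as below; the translated wording is ambiguous, so this sketch assumes the quantity compared with the preset cross-gradient threshold is the sum of absolute differences between the reference level value and each cross-gradient level value:

```python
def cross_gradient_count(reference_level, cross_levels, threshold):
    """Return the number of cross-gradient particle images for one original
    image: counted only when the summed absolute differences between the
    reference level value and each cross-gradient level value exceed the
    preset cross-gradient threshold, otherwise zero."""
    total = sum(abs(reference_level - lvl) for lvl in cross_levels)
    return len(cross_levels) if total > threshold else 0

# Reference level 3; two cross-gradient images at levels 1 and 5.
count = cross_gradient_count(3, [1, 5], threshold=3)  # |3-1| + |3-5| = 4 > 3
```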
In this embodiment, the flowing liquid includes abnormal particles and/or normal particles therein; the segmentation module 502 includes:
and the original image segmentation unit is used for segmenting an imaging region of the normal particles from the original image corresponding to each imaging position so as to obtain a characteristic image in each original image.
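Putting the modules together, the overall device flow (capture, segment, classify, score, select) might look like the following sketch, where `capture`, `segment`, and `classify` stand in for the acquisition, segmentation, and classification modules and are assumptions, not the patent's implementation:

```python
def find_target_focus(positions, capture, segment, classify, weights):
    """Sketch: capture an original image at each imaging position, segment
    feature images from it, classify each into a per-category confidence
    vector, score the position as the weighted sum, and return the
    position with the largest focusing definition value."""
    best_pos, best_score = None, float("-inf")
    for pos in positions:
        original = capture(pos)                 # acquisition module (stub)
        score = sum(
            sum(w * c for w, c in zip(weights, classify(img)))
            for img in segment(original)        # segmentation + classification
        )
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos

# Toy stand-ins: position 2 yields a feature image whose confidence mass
# falls on the heavily weighted category, so it should be selected.
weights = [0.2, 1.0]
toy_images = {1: [[0.9, 0.1]], 2: [[0.1, 0.9]]}
target_position = find_target_focus(
    [1, 2], lambda p: p, lambda o: toy_images[o], lambda img: img, weights)
```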
The microscopic image shooting focusing device of this embodiment can implement the corresponding microscopic image shooting focusing method in the method embodiments and has the beneficial effects of the corresponding method embodiments, which are not described herein again.
In this specification, each embodiment is described in a progressive manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, reference may be made to the description of the method embodiments.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (14)

1. A microscopic image photographing focusing method, the method comprising:
shooting a shooting target at at least two imaging positions to obtain an original image corresponding to each imaging position, wherein the shooting target is a flowing liquid comprising particles;
segmenting a characteristic image from the original image corresponding to each imaging position, wherein the characteristic image comprises an imaging area of particles in the flowing liquid;
classifying each characteristic image according to the characteristic data of each characteristic image to obtain classification data corresponding to each characteristic image; the feature data of each feature image is at least used for representing an image definition value of each feature image, and the classification data corresponding to each feature image is at least used for marking a classification result of each feature image;
and obtaining a focusing definition value corresponding to each imaging position according to a preset weight value corresponding to each classification category and classification data corresponding to each characteristic image, and determining the imaging position with the largest focusing definition value as a target focusing position.
2. The method of claim 1, wherein
the feature data of each of the feature images is used to characterize at least an image sharpness value and an image size value of each of the feature images.
3. The method according to claim 2, wherein the method further comprises:
determining the preset weight value corresponding to each classification category by utilizing a preset axisymmetric line according to the definition threshold range and the size threshold range corresponding to each classification category;
the preset axisymmetric line extends on a two-dimensional coordinate plane along the direction in which the value of one coordinate axis increases or decreases, while the value on the other coordinate axis follows a low-to-high-to-low trend; the classification category with the highest definition threshold range corresponds to the highest preset weight value.
4. The method according to claim 3, wherein the preset axisymmetric line is a Gaussian curve.
5. The method of claim 2, wherein the classification data includes confidence vector data, wherein the confidence vector data for each of the feature images is used to characterize a confidence level that each of the feature images is classified into each of the classification categories.
6. The method according to claim 5, wherein the obtaining the focus sharpness value corresponding to each imaging position according to the preset weight value corresponding to each classification category and the classification data corresponding to each feature image includes:
and obtaining a focusing definition value corresponding to each imaging position according to the preset weight value corresponding to each classification category and the confidence vector data corresponding to each characteristic image.
7. The method of claim 2, wherein the feature image is an abnormal particle image or a normal particle image, wherein the normal particle image includes a complete imaging region of particles that are not flipped during the flowing of the flowing liquid; the abnormal particle image comprises an imaging area of particles which are turned over during the flowing process of the flowing liquid and an incomplete imaging area of particles which are not turned over during the flowing process of the flowing liquid; the classification category comprises at least one category of the abnormal particle image and at least two categories of the normal particle image; the feature data of each feature image is also used for representing a shape feature value of each feature image;
Correspondingly, the classification data comprises feature vector data, wherein the feature vector data corresponding to each feature image is used for representing the confidence level of each feature image classified into the category of each abnormal particle image and the category of each normal particle image.
8. The method of claim 7, wherein the obtaining the focus sharpness value corresponding to each imaging position according to the preset weight value corresponding to each classification category and the classification data corresponding to each feature image comprises:
obtaining confidence vector data corresponding to each normal particle image according to the feature vector data corresponding to each normal particle image, wherein the confidence vector data corresponding to each normal particle image is used for representing the confidence level of each normal particle image classified into the category of each normal particle image;
and obtaining a focusing definition value corresponding to each imaging position according to the preset weight value corresponding to each category of the normal particle image and the confidence vector data corresponding to each normal particle image.
9. The method of claim 5 or 8, wherein the confidence vector data is normalized vector data.
10. The method of claim 7, further comprising at least one of the following steps:
when the proportion of flipped particle images among the feature images segmented from at least one of the original images exceeds a first proportion threshold, giving an early warning prompt; wherein a flipped particle image is an imaging region of a particle that flips during the flowing of the flowing liquid;
when the proportion of cross-gradient particle images among the feature images segmented from at least one of the original images exceeds a second proportion threshold, giving an early warning prompt; wherein a cross-gradient particle image is a normal particle image that is not classified into the category containing the largest number of the normal particle images.
11. The method of claim 10, wherein giving an early warning prompt when the proportion of cross-gradient particle images among the feature images segmented from at least one of the original images exceeds the second proportion threshold comprises:
Determining a level value corresponding to each category of the normal particle image according to a size threshold range corresponding to each category of the normal particle image;
determining the level value corresponding to the classification category with the largest number of the normal particle images in each original image as a reference level value corresponding to each original image; determining the grade value corresponding to the classification category of at least one cross-gradient particle image in each original image as a cross-gradient grade value corresponding to each original image;
and when the sum of the absolute differences between the reference level value corresponding to at least one of the original images and each cross-gradient level value exceeds a preset cross-gradient threshold, determining the number of the cross-gradient particle images corresponding to the at least one original image.
12. The method of claim 1, wherein classifying each of the feature images to obtain classification data corresponding to each of the feature images comprises: classifying each characteristic image by using a preset classification model to obtain classification data corresponding to each characteristic image;
Correspondingly, the method further comprises the steps of:
segmenting a plurality of characteristic images from a sample image;
determining the classification category and the characteristic data corresponding to each characteristic image in the sample image to obtain a sample data set;
and constructing the preset classification model according to the sample data set.
13. The method according to claim 1, wherein the flowing liquid comprises abnormal particles and/or normal particles therein; the segmenting the characteristic image from the original image corresponding to each imaging position comprises the following steps:
and dividing an imaging region of the normal particles from the original image corresponding to each imaging position to obtain the characteristic image in each original image.
14. A microscopic image photographing focusing device, comprising:
an acquisition module, used for shooting a shooting target at at least two imaging positions to obtain an original image corresponding to each imaging position, wherein the shooting target is a flowing liquid comprising particles;
a segmentation module, configured to segment a feature image from the original image corresponding to each imaging position, where the feature image includes an imaging region of particles in the flowing liquid;
The classification module is used for classifying each characteristic image according to the characteristic data of each characteristic image to obtain classification data corresponding to each characteristic image; the feature data of each feature image is at least used for representing an image definition value of each feature image, and the classification data corresponding to each feature image is at least used for marking a classification result of each feature image;
the determining module is used for obtaining a focusing definition value corresponding to each imaging position according to a preset weight value corresponding to each classification category and classification data corresponding to each characteristic image, and determining the imaging position with the largest focusing definition value as a target focusing position.
CN202210279773.2A 2022-03-21 2022-03-21 Microscopic image shooting focusing method and device Active CN114697548B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210279773.2A CN114697548B (en) 2022-03-21 2022-03-21 Microscopic image shooting focusing method and device


Publications (2)

Publication Number Publication Date
CN114697548A CN114697548A (en) 2022-07-01
CN114697548B true CN114697548B (en) 2023-09-29

Family

ID=82139628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210279773.2A Active CN114697548B (en) 2022-03-21 2022-03-21 Microscopic image shooting focusing method and device

Country Status (1)

Country Link
CN (1) CN114697548B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101702053A (en) * 2009-11-13 2010-05-05 长春迪瑞实业有限公司 Method for automatically focusing microscope system in urinary sediment examination equipment
CN111798414A (en) * 2020-06-12 2020-10-20 北京阅视智能技术有限责任公司 Method, device and equipment for determining definition of microscopic image and storage medium
CN112135048A (en) * 2020-09-23 2020-12-25 创新奇智(西安)科技有限公司 Automatic focusing method and device for target object
WO2020259179A1 (en) * 2019-06-28 2020-12-30 Oppo广东移动通信有限公司 Focusing method, electronic device, and computer readable storage medium
CN112911133A (en) * 2019-12-03 2021-06-04 精微视达医疗科技(武汉)有限公司 Endoscope focusing method and device
CN112907500A (en) * 2019-12-03 2021-06-04 精微视达医疗科技(武汉)有限公司 Endoscope focusing method and device
WO2021134179A1 (en) * 2019-12-30 2021-07-08 深圳市大疆创新科技有限公司 Focusing method and apparatus, photographing device, movable platform and storage medium
CN113837079A (en) * 2021-09-24 2021-12-24 苏州贝康智能制造有限公司 Automatic focusing method and device for microscope, computer equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI324015B (en) * 2006-12-22 2010-04-21 Ind Tech Res Inst Autofocus searching method
WO2012073729A1 (en) * 2010-11-30 2012-06-07 富士フイルム株式会社 Imaging device and focal position detection method
CN109688351B (en) * 2017-10-13 2020-12-15 华为技术有限公司 Image signal processing method, device and equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wu Wenjie; Xiao Zhaolin; Wang Qing; Yang Heng. Depth estimation method based on confocal image sequences. Computer Applications and Software, 2012, No. 12, full text. *

Also Published As

Publication number Publication date
CN114697548A (en) 2022-07-01

Similar Documents

Publication Publication Date Title
CN106960195B (en) Crowd counting method and device based on deep learning
CN115082683B (en) Injection molding defect detection method based on image processing
WO2020239015A1 (en) Image recognition method and apparatus, image classification method and apparatus, electronic device, and storage medium
Prinyakupt et al. Segmentation of white blood cells and comparison of cell morphology by linear and naïve Bayes classifiers
CN105975941B (en) A kind of multi-direction vehicle detection identifying system based on deep learning
CN116205919B (en) Hardware part production quality detection method and system based on artificial intelligence
CN110602484B (en) Online checking method for shooting quality of power transmission line equipment
EP1986046B1 (en) A method for determining an in-focus position and a vision inspection system
US6330350B1 (en) Method and apparatus for automatically recognizing blood cells
CN109389105B (en) Multitask-based iris detection and visual angle classification method
CN116664559B (en) Machine vision-based memory bank damage rapid detection method
CN111462075B (en) Rapid refocusing method and system for full-slice digital pathological image fuzzy region
CN111899241A (en) Quantitative on-line detection method and system for defects of PCB (printed Circuit Board) patches in front of furnace
KR100868884B1 (en) Flat glass defect information system and classification method
CN112036384B (en) Sperm head shape recognition method, device and equipment
CN115170550A (en) Deep learning-based battery defect detection method and system
CN114463843A (en) Multi-feature fusion fish abnormal behavior detection method based on deep learning
Mandyartha et al. Global and adaptive thresholding technique for white blood cell image segmentation
CN111950559A (en) Pointer instrument automatic reading method based on radial gray scale
CN114697548B (en) Microscopic image shooting focusing method and device
CN113781419A (en) Defect detection method, visual system, device and medium for flexible PCB
CN116740652B (en) Method and system for monitoring rust area expansion based on neural network model
CN113705564A (en) Pointer type instrument identification reading method
CN106682604B (en) Blurred image detection method based on deep learning
CN112036334A (en) Method, system and terminal for classifying visible components in sample to be detected

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant