CN117095362A - Liquid drop monitoring method, system and storage medium of cell sorter - Google Patents

Liquid drop monitoring method, system and storage medium of cell sorter

Info

Publication number
CN117095362A
CN117095362A
Authority
CN
China
Prior art keywords
liquid drop
drop
liquid
satellite
droplet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311362902.5A
Other languages
Chinese (zh)
Other versions
CN117095362B (en)
Inventor
王策
马玉婷
钟金凤
王耀
何帅
裴智果
陈建生
严心涛
宋飞飞
陈忠祥
陈梦丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Institute of Biomedical Engineering and Technology of CAS
Original Assignee
Suzhou Institute of Biomedical Engineering and Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Institute of Biomedical Engineering and Technology of CAS filed Critical Suzhou Institute of Biomedical Engineering and Technology of CAS
Priority to CN202311362902.5A priority Critical patent/CN117095362B/en
Publication of CN117095362A publication Critical patent/CN117095362A/en
Application granted granted Critical
Publication of CN117095362B publication Critical patent/CN117095362B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G01N15/1434 Electro-optical investigation, e.g. flow cytometers, using an analyser characterised by its optical arrangement
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations

Abstract

The application discloses a droplet monitoring method, system and storage medium for a cell sorter, belonging to the field of cell sorting. Camera parameters and droplet parameters are set, droplets are generated, and a plurality of images of each droplet are collected. Each image is segmented to generate a binary droplet image; the droplet root, normal droplets and satellite droplets are identified according to the area and position of each droplet region; a plurality of quantitative features characterizing droplet stability are extracted from the identified droplet root, normal droplet and satellite droplet regions; and a feature matrix of the droplet images is generated. Each feature of the feature matrix is normalized, an equivalent feature of droplet stability is calculated together with its mean and variance, and a quality control chart is generated, from which a droplet quality evaluation result is given. Through these steps, the droplet monitoring method is intuitive, accurate and effective, can be completed automatically without excessive manual operation, is simple and convenient, and directly monitors droplet generation quality in advance so as to avoid wasting material.

Description

Liquid drop monitoring method, system and storage medium of cell sorter
Technical Field
The application relates to the field of cell sorting, in particular to a liquid drop monitoring method, a liquid drop monitoring system and a storage medium of a cell sorter.
Background
Flow cytometric cell sorting irradiates single cells or particles, stained with fluorescent dyes and flowing at high speed, with a high-energy laser, and separates and recovers target cells according to the scattered light and the emitted fluorescence. The separated cells can be used for culture, transplantation, nucleic acid extraction, single-cell PCR amplification or in situ hybridization, and for further studies of cellular heterogeneity at the gene, protein and functional levels.
Conventional flow cell sorting is realized by enclosing cells in droplets: a high-frequency oscillating droplet generator breaks the continuous liquid stream flowing through the nozzle as required, so that cells are enclosed as the droplets are formed. In essence, flow cell sorting is performed on droplets containing cells. Stable droplet generation is therefore an important precondition for a flow cell sorter to effectively acquire target cells.
The greatest advantage of flow cytometry is high-throughput analysis or sorting. Generally speaking, the droplets ejected from the flow cell have two characteristics: (1) high speed, up to tens of meters per second, e.g. 40 m/s; (2) small volume, on the nanoliter/picoliter scale; for example, a 100 μm diameter nozzle produces droplets with an average volume of about 0.5 nanoliter. How to ensure stable generation of such high-speed nanoliter/picoliter droplets is an important problem for flow cell sorting technology.
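The nanoliter scale can be checked with a quick order-of-magnitude estimate, assuming one droplet is pinched off per oscillation period so that the droplet volume is roughly the jet cross-section times the jet speed divided by the oscillation frequency; the sketch below uses illustrative values close to those of embodiment 1 and is not a calculation from the patent itself.

```python
import math

nozzle_diameter_m = 100e-6     # 100 um nozzle
jet_speed_m_s = 3.0            # sheath-driven jet speed (value from embodiment 1)
drop_freq_hz = 30e3            # droplet oscillation frequency (value from embodiment 1)

jet_area_m2 = math.pi * (nozzle_diameter_m / 2) ** 2         # ~7.9e-9 m^2
drop_volume_m3 = jet_area_m2 * jet_speed_m_s / drop_freq_hz  # one droplet per period
print(f"droplet volume ~ {drop_volume_m3 * 1e12:.2f} nL")    # ~0.79 nL, sub-nanoliter
```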
Existing methods or strategies mainly target the cell sorter system as a whole: whether droplet generation is appropriate is assessed from performance indices of the sorted cells (such as purity and yield) after setting and adjusting the parameters of the high-frequency oscillator. Given the many factors that influence flow sorting performance, such indirect backtracking methods have two drawbacks: (1) methodologically, it is often impossible to monitor and evaluate whether the current liquid stream is stable; (2) backtracking after sorting wastes cell/particle material, and for rare and valuable cell samples this may directly lead to irreversible experimental failure.
Therefore, it is highly necessary in cell sorting experiments to directly monitor the quality of droplet generation in advance during flow cell sorting.
Disclosure of Invention
To overcome the deficiencies of the prior art, a first object of the present application is to provide a monitoring method that directly monitors droplet generation quality in advance during cell sorting.
To overcome the deficiencies of the prior art, a second object of the present application is to provide a monitoring system that directly monitors droplet generation quality in advance during cell sorting.
To overcome the deficiencies of the prior art, a third object of the present application is to provide a computer-readable storage medium for directly monitoring droplet generation quality in advance during cell sorting.
The first object of the application is achieved by the following technical solution:
A droplet monitoring method of a cell sorter, comprising the following steps:
S1: setting camera parameters and droplet parameters, generating droplets, and collecting a plurality of images of each droplet;
S2: segmenting each image to generate a binary droplet image, identifying the droplet root, normal droplets and satellite droplets according to the area and position of each droplet region, extracting a plurality of quantitative features characterizing droplet stability from the identified droplet root, normal droplet and satellite droplet regions, and generating a feature matrix of the droplet images;
S3: normalizing each feature of the feature matrix, calculating an equivalent feature of droplet stability together with its mean and variance, and generating a quality control chart;
S4: giving a droplet quality evaluation result according to the quality control chart.
Further, in step S1, setting the camera parameters and droplet parameters specifically comprises: setting the fluid speed and droplet oscillation frequency of the droplet generating device, setting the exposure time, image resolution and shooting frame rate of the camera, and setting the strobe frequency of the illumination light source according to the droplet oscillation frequency.
Further, the droplet generation oscillation frequency is f_drop and the strobe frequency of the illumination source is f_light; the oscillation frequency f_drop and the strobe frequency f_light have a phase difference of 0.
Further, step S1 further comprises setting the region of interest captured by the camera, specifically: the vertical center line of the falling droplet sequence is the vertical center line of the image, the vertical resolution of the image covers a plurality of normal droplets, and the number of pixels across the width is larger than the cell/particle width, so as to improve the image data processing speed.
Further, in step S1, collecting a plurality of images of each droplet specifically comprises: capturing droplet images at a given time resolution to generate a droplet monitoring sampling video, and analyzing the droplet images frame by frame according to the imaging frame rate set for the camera.
Further, step S2 specifically comprises the following steps:
S21: manually marking droplet regions on droplet images to construct a droplet image training set, validation set and test set;
S22: training a semantic segmentation model for droplet images on the labelled droplet image dataset, optimizing the precision of the trained semantic segmentation model, and converting the model weight file;
S23: generating a binary image for each droplet image with the optimized semantic segmentation model;
S24: identifying the droplet root, normal droplets and satellite droplets according to the area and position of each droplet region;
S25: calculating, from the identified droplet root, normal droplet and satellite droplet regions, 5 quantitative features characterizing stable droplet generation, namely Drop1, Gap, droplet spacing, number of satellite droplets and number of blurred satellite droplets, where Drop1 is the longitudinal pixel coordinate of the centroid of the first droplet in the droplet image after satellite droplet merging, and Gap is the blank longitudinal pixel distance between the lower boundary of the droplet root region and the upper boundary of the Drop1 region after satellite droplet merging;
S26: generating a feature matrix of the plurality of droplet images.
Further, in step S24, identifying the droplet root specifically comprises: traversing the droplet foreground regions on the binary image and calculating the pixel area and centroid position coordinates of each foreground region; a region is judged to be the droplet root when the ratio of its pixel area to the total foreground pixel area of the image is greater than 0.45 and less than 0.80 and its geometric center has the smallest longitudinal coordinate among all droplet foreground regions.
Further, in step S24, identifying normal droplets and satellite droplets specifically comprises: removing the droplet root region from the binary image, setting one half of the sum of the maximum and minimum areas of the remaining droplet regions as the satellite droplet judgment threshold, and judging each remaining droplet region as a normal droplet when its pixel area is greater than the threshold and as a satellite droplet when its pixel area is less than or equal to the threshold.
Further, in step S24, an abnormality alarm is given when the ratio of the pixel area of the droplet region to the total foreground pixel area of the image is less than or equal to 0.45 or greater than or equal to 0.80.
Further, when calculating the centroid position coordinates, the nearby satellite droplets must be assigned and included in the calculation. The satellite droplet assignment is specifically: traverse the longitudinal centroid coordinates of the satellite droplets; for each satellite droplet, calculate the Cartesian Euclidean distances to the nearest normal droplet above and the nearest normal droplet below, and assign the satellite droplet to the one at the shorter distance; when there is no normal droplet above the satellite droplet, assign it to the nearest normal droplet below; when there is no normal droplet below, assign it to the nearest normal droplet above.
Further, the droplet is taken as a solid of revolution of uniform density formed about its center line, and the droplet foreground region in the binary image is a two-dimensional section of this solid of revolution with a clear boundary, so the longitudinal pixel coordinate of the droplet centroid is y_drop = Σ_j (y_j · d_j²) / Σ_j d_j², where y_j is the distance of a slice of the solid of revolution from the reference line, i.e. the longitudinal coordinate of the slice, and d_j is the chord length of the cross-section at that slice.
Further, in step S3, calculating the equivalent feature of droplet stability specifically comprises: the equivalent feature of droplet stability is formed from Drop1, Gap and droplet pixel spacing, each weighted 30%, and the number of satellite droplets and the number of blurred satellite droplets, each weighted 5%;
the mean and standard deviation SD of the droplet equivalent feature are then given, and the center line, upper quality control limit, lower quality control limit, upper warning limit and lower warning limit are respectively calculated and plotted to generate the quality control chart.
The second object of the application is achieved by the following technical solution:
A droplet monitoring system of a cell sorter, for carrying out the above droplet monitoring method of a cell sorter, comprising a processor, a memory and a camera, the camera and the memory being communicatively connected to the processor, and the memory storing instructions executable by the processor which, when executed by the processor, perform the above droplet monitoring method of the cell sorter.
The third object of the application is achieved by the following technical solution:
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the droplet monitoring method of a cell sorter described above.
Compared with the prior art, in the droplet monitoring method of the cell sorter provided by the application, camera parameters and droplet parameters are set, droplets are generated, and a plurality of images of each droplet are acquired; each image is segmented to generate a binary droplet image, the droplet root, normal droplets and satellite droplets are identified according to the area and position of each droplet region, a plurality of quantitative features characterizing droplet stability are extracted from the identified regions, and a feature matrix of the droplet images is generated; each feature of the feature matrix is normalized, an equivalent feature of droplet stability is calculated together with its mean and variance, and a quality control chart is generated; a droplet quality evaluation result is then given according to the quality control chart. Through these steps, the droplet monitoring method is intuitive, accurate and effective, can be completed automatically without excessive manual operation, is simple and convenient, and directly monitors droplet generation quality in advance so as to avoid wasting material.
Drawings
FIG. 1 is a flow chart of a method of droplet monitoring of a cell sorter of the present application;
FIG. 2 is a flow chart of step S2 of the droplet monitoring method of the cell sorter of the present application;
FIG. 3 is a schematic diagram showing the overall structure of a method for monitoring droplets in a cell sorter according to the present application;
FIG. 4 is a diagram of an image semantic segmentation model of a droplet monitoring method of a cell sorter of the present application;
FIG. 5 is a flow chart showing the network structure of example 1 of the droplet monitoring method of the cell sorter of the present application;
FIG. 6 is a schematic diagram of the image semantic segmentation result in embodiment 1;
FIG. 7 is a graph showing the interpretation of droplet features and satellite droplet merging results in example 1;
FIG. 8 is a graph of the L-J quality control chart of example 1;
FIG. 9 is a graph of the L-J quality control chart of example 2.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It will be understood that when an element is referred to as being "fixed to" another element, it can be directly on the other element or an intervening element may be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may be present. When an element is referred to as being "disposed on" another element, it can be directly on the other element or intervening elements may be present. The terms "vertical," "horizontal," "left," "right," and the like are used herein for illustrative purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
The principle of the droplet monitoring method of the cell sorter is to image the droplet stream with a stroboscopic vision system before sorting, segment the droplet regions in each image, extract quantitative features that characterize stable droplet generation, and evaluate droplet generation quality statistically with a quality control chart, so that droplet quality can be monitored directly and in advance.
Fig. 1 shows the droplet monitoring method of a cell sorter according to the present application, comprising the following steps:
S1: setting camera parameters and droplet parameters, generating droplets, and collecting a plurality of images of each droplet;
S2: segmenting each image to generate a binary droplet image, identifying the droplet root, normal droplets and satellite droplets according to the area and position of each droplet region, extracting a plurality of quantitative features characterizing droplet stability from the identified droplet root, normal droplet and satellite droplet regions, and generating a feature matrix of the droplet images;
S3: normalizing each feature of the feature matrix, calculating an equivalent feature of droplet stability together with its mean and variance, and generating a quality control chart;
S4: giving a droplet quality evaluation result according to the quality control chart.
With continued reference to fig. 3, step S1 specifically comprises: the cells/particles are loaded into the flow cell at a certain speed and flow through the nozzle under the squeezing of the sheath fluid; high-frequency oscillation applied to the nozzle breaks the microfluidic stream from a continuous state into droplets. To generate droplets effectively, the sample loading speed and the sheath fluid flow rate therefore need to be initialized, a nozzle of suitable size selected, and an oscillation frequency meeting the sorting requirement set.
In order to clearly capture the state of the droplets falling from the nozzle, a set of visual monitoring devices is required, including a conventional CMOS camera, an objective lens group and an illumination source. First, the CMOS camera is installed and adjusted according to the droplet falling position and the depth of field of the objective lens group, to ensure it is in focus when capturing images; in view of the high initial speed of droplets leaving the nozzle, the illumination source is configured for stroboscopic transmission imaging. Assuming the droplet generation oscillation frequency is f_drop and the strobe frequency of the illumination source is f_light, the computer ensures that the phase difference of the two signals is 0 and that, in value, f_light = f_drop.
in order to realize global monitoring of the liquid drops as much as possible, the shooting area can select the longitudinal central line of the dropping sequence of the liquid drops as the longitudinal central line of the image, ensure that the longitudinal resolution of the image can cover a plurality of normal liquid drops, and ensure that the transverse resolution (the number of pixels with transverse width) is slightly larger than the width of cells/particles so as to improve the processing speed of image data; the camera may be set to an auto-exposure mode to improve imaging quality, and the frame rate may be set with reference to the illumination source frequency, preferably as large a value as possible, so as to capture the droplet state changes within as small a time resolution as possible. It should be noted that, for the monitoring result to be more statistically significant, the number of image frames in the droplet sampling video should be at least not less than 1000. In addition, exposure time, image resolution, and shooting frame rate of camera imaging are also required to be set. And shooting the liquid drop image according to the given time resolution, generating a liquid drop monitoring sampling video, and analyzing the liquid drop image frame by frame according to the imaging frame rate set by the camera.
With continued reference to fig. 2, step S2 specifically includes the following steps:
S21: manually marking droplet regions on droplet images to construct a droplet image training set, validation set and test set;
S22: training a semantic segmentation model for droplet images on the labelled droplet image dataset, optimizing the precision of the trained semantic segmentation model, and converting the model weight file;
S23: generating a binary image for each droplet image with the optimized semantic segmentation model;
S24: identifying the droplet root, normal droplets and satellite droplets according to the area and position of each droplet region;
S25: calculating, from the identified droplet root, normal droplet and satellite droplet regions, 5 quantitative features characterizing stable droplet generation, namely Drop1, Gap, droplet spacing, number of satellite droplets and number of blurred satellite droplets, where Drop1 is the longitudinal pixel coordinate of the centroid of the first droplet in the droplet image after satellite droplet merging, and Gap is the blank longitudinal pixel distance between the lower boundary of the droplet root region and the upper boundary of the Drop1 region after satellite droplet merging;
S26: generating a feature matrix of the plurality of droplet images.
In step S21, the images are labelled with the open-source labelling tool Labelme; the labelled dataset is randomly divided into a training set, a validation set and a test set in a ratio of 3:1:1; the final dataset follows the CityScapes data format standard to meet the requirements of subsequent model training and evaluation. The real-time semantic segmentation model for droplet images can be trained on a GPU based on the Baidu PaddlePaddle platform, while inference deployment is carried out on the CPU side, so as to remove the heavy dependence of the AI model on the GPU and, to some extent, reduce the hardware cost of the cell sorter. First, the weight file obtained from PaddleSeg training is converted into an ONNX model with the Paddle2ONNX tool; then the ONNX model is converted into an IR model with the Intel OpenVINO tool, giving the model's xml and bin files for the CPU side to call, and the model weights are quantized to FP16 precision to increase the inference speed.
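The CPU-side deployment described above can be illustrated with the OpenVINO Python runtime; the sketch below only shows the conversion target being loaded and queried, with assumed file names, an assumed 3-channel input layout at 1024×512 and an assumed argmax post-processing step (the patent's own deployment is in C++).

```python
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("droplet_seg.xml")    # IR files from Paddle2ONNX + OpenVINO (assumed names)
compiled = core.compile_model(model, "CPU")   # FP16 weights, CPU-only inference
output_port = compiled.output(0)

def segment_frame(gray: np.ndarray) -> np.ndarray:
    """Return a 0/255 binary droplet mask for one grayscale frame."""
    resized = cv2.resize(gray, (512, 1024)).astype(np.float32) / 255.0  # W x H assumed
    blob = np.repeat(resized[None, None, :, :], 3, axis=1)              # NCHW, 3 channels
    logits = compiled([blob])[output_port]                              # (1, classes, H, W) assumed
    labels = np.argmax(logits, axis=1)[0]
    return (labels > 0).astype(np.uint8) * 255                          # foreground = droplet
```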
With continued reference to fig. 4, in step S22 a lightweight model dedicated to real-time semantic segmentation is selected. The image semantic segmentation method can be divided into 4 parts: encoder, aggregator, decoder and binarization. Specifically, a convolutional backbone network is first selected in the encoder stage to extract hierarchical features of the droplet image; the encoder may consist of several stages with configured convolution strides, and the final output feature map is 1/32 the size of the input image. A pyramid pooling module is introduced to aggregate global context: taking the feature map output by the encoder as input, it generates a feature representation containing global context information. Multi-level feature fusion is then performed by a flexible and lightweight decoder, which generates the output image. The decoder consists of two unified attention fusion modules and a segmentation head; based on a spatial attention mechanism, the unified attention fusion modules take the low-level features extracted at different encoder stages and the high-level features generated by the pyramid pooling module or the deeper fusion module as inputs, and the output of the second unified attention fusion module is at 1/8 of the input resolution. In the segmentation head, convolution is combined with batch normalization and ReLU; the channel number of the 1/8-resolution features is reduced to the number of classes, an upsampling operation enlarges the feature map to the input image size, a label is predicted for each pixel, and the prediction result is output. The model weight file is converted with the Intel OpenVINO tool.
In step S23, the output image result is binarized, so as to obtain a binary image after the droplet segmentation.
Step S24 specifically comprises: burrs and tiny noise points at the edges of the droplet foreground regions on the binary image are removed by a morphological OPEN operation, which also separates objects joined at tiny points; the specific OpenCV call is morphologyEx(bin, MORPH_OPEN, element) with element = getStructuringElement(MORPH_RECT, Size(3, 3)), i.e. the morphological operation type is opening and the structuring element is a 3×3 square. The droplet foreground regions on the deburred and denoised binary image are then traversed one by one, the pixel area and centroid position coordinates of each foreground region are calculated, and the three classes of droplet root, normal droplet and satellite droplet are identified from the area and position of each droplet region. The droplet root is identified first: a region is judged to be the droplet root if the ratio of its pixel area to the total foreground pixel area of the image is greater than 0.45 and less than 0.80 and its geometric center has the smallest longitudinal coordinate among all foreground regions. Normal droplets and satellite droplets are then identified: the droplet root region is removed from the binary image, one half of the sum of the maximum and minimum areas of the remaining droplet regions is set as the satellite droplet judgment threshold, and each remaining droplet region is judged to be a normal droplet if its pixel area is greater than the threshold and a satellite droplet otherwise. At this point, the droplet root, normal droplets and satellite droplets in one frame of the droplet image have all been detected. It should be noted that, during droplet root identification, if the ratio of the pixel area of the droplet region to the total foreground pixel area is less than or equal to 0.45 or greater than or equal to 0.80, an alarm can be given, indicating an abnormal working condition of the droplet generating device, such as the oscillation frequency not being applied or being too high.
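A compact sketch of this region analysis, using the OpenCV opening operation and connected-component statistics, is given below; the 0.45/0.80 ratio bounds and the half-(max+min) satellite threshold follow the text, while the function layout and error handling are assumptions.

```python
import cv2
import numpy as np

def classify_regions(binary: np.ndarray):
    """Split a binary droplet mask into root, normal and satellite regions."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # remove burrs/noise

    n, labels, stats, centroids = cv2.connectedComponentsWithStats(opened)
    areas = stats[1:, cv2.CC_STAT_AREA].astype(np.float64)       # skip background label 0
    ys = centroids[1:, 1]                                         # longitudinal centroids
    ratios = areas / areas.sum()

    # Droplet root: 0.45 < area ratio < 0.80, smallest longitudinal coordinate.
    candidates = np.where((ratios > 0.45) & (ratios < 0.80))[0]
    if candidates.size == 0:
        raise RuntimeError("abnormal droplet generation: no root region found")
    root = int(candidates[np.argmin(ys[candidates])])

    # Satellite threshold: half the sum of the max and min remaining areas.
    rest = [i for i in range(len(areas)) if i != root]
    thr = 0.5 * (areas[rest].max() + areas[rest].min())
    normal = [i for i in rest if areas[i] > thr]
    satellite = [i for i in rest if areas[i] <= thr]
    # Returned indices refer to components 1..n-1 of `labels`, shifted by one.
    return opened, labels, root, normal, satellite
```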
Step S25 specifically comprises: satellite droplets are micro-droplet fragments separated from normal droplets during their fall, so the centroid coordinates of a normal droplet must be calculated together with the nearby satellite droplets assigned to it. Satellite droplet attribution may be defined by the following rule: traverse the longitudinal centroid coordinates of the satellite droplets; for each satellite droplet, calculate the Cartesian Euclidean distances to the nearest normal droplet above and the nearest normal droplet below, and attribute the satellite droplet to the one at the shorter distance. If there is no normal droplet above the satellite droplet, it is attributed to the nearest normal droplet below; similarly, if there is no normal droplet below, it is attributed to the nearest normal droplet above. Drop1 refers to the longitudinal pixel coordinate of the centroid of the first droplet in the droplet image after satellite droplet merging; similarly, droplet spacing refers to the average pixel distance between the longitudinal centroid coordinates of the droplets after satellite droplet merging; Gap refers to the blank longitudinal pixel distance between the lower boundary of the droplet root region and the upper boundary of the Drop1 region after satellite droplet merging. The longitudinal centroid coordinate of a droplet is calculated by volume integration: assuming the droplet is a solid of revolution of uniform density formed about its center line, the droplet foreground region in the binary image given in step S23 is a two-dimensional section of this solid of revolution with a clear boundary, so the longitudinal pixel coordinate of the droplet centroid y_drop is calculated as
y_drop = Σ_{j=y_pxmin}^{y_pxmax} (j · d_j²) / Σ_{j=y_pxmin}^{y_pxmax} d_j²,
where y_pxmin and y_pxmax are the minimum and maximum longitudinal pixel coordinates of the droplet contour boundary, j is the integer longitudinal coordinate index of the droplet region with j ∈ [y_pxmin, y_pxmax], and d_j is the chord length of the droplet cross-section at row j.
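The centroid formula and the attribution rule can be sketched as follows; the helper names and the use of per-droplet binary masks and (x, y) centroid tuples are assumptions.

```python
import numpy as np

def centroid_y(mask: np.ndarray) -> float:
    """Longitudinal centroid of one (merged) droplet region, volume-integral style."""
    m = mask > 0
    rows = np.where(m.any(axis=1))[0]                # rows j spanning the droplet
    d = m[rows].sum(axis=1).astype(np.float64)       # chord length d_j of each row
    w = d ** 2                                       # revolution volume element ~ d_j^2
    return float((rows * w).sum() / w.sum())         # sum(j*d_j^2) / sum(d_j^2)

def assign_satellite(sat_c, normal_c):
    """Attach one satellite centroid (x, y) to the nearer normal-droplet centroid."""
    dist = lambda a, b: float(np.hypot(a[0] - b[0], a[1] - b[1]))
    above = [c for c in normal_c if c[1] < sat_c[1]]
    below = [c for c in normal_c if c[1] > sat_c[1]]
    nearest = []
    if above:
        nearest.append(min(above, key=lambda c: dist(c, sat_c)))
    if below:
        nearest.append(min(below, key=lambda c: dist(c, sat_c)))
    return min(nearest, key=lambda c: dist(c, sat_c))  # falls back to the only side
```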
the number of the blurred satellites in the blurred droplet number is determined by quantitatively calculating the edge sharpening degree of the satellite droplet partial image. Firstly, cutting out a local image from a source input gray image according to a boundary mask of satellite liquid drops in a binary image, extracting an edge image of the local image by using an OpenCV Laplacian (subImg_Gray, CV_8UC1), calculating standard deviation of the edge image, referring to a given threshold value of 3.50, and if the standard deviation is lower than the threshold value, blurring the satellite liquid drops; otherwise, the satellite liquid drop is focused.
Step S3 specifically comprises: each feature of the feature matrix corresponding to the droplet video is subjected to dispersion (min-max) standardization, x_i* = (x_i − min x_i) / (max x_i − min x_i), where i is the feature index.
Considering that Drop1, Gap and droplet pixel spacing are strong features characterizing stable droplet generation while the number of satellite droplets and the number of blurred satellite droplets are relatively weak features, a weight of 30% is assigned to each of the Drop1, Gap and droplet pixel spacing features and a weight of 5% to each of the satellite droplet count and blurred satellite droplet count features, according to their relative importance. The equivalent feature of droplet generation quality is then calculated as the weighted sum of the normalized features, and the mean and standard deviation of the droplet equivalent feature are given, from which the element values required for the single-value L-J quality control chart curve can be generated.
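The normalization and weighting can be sketched as follows, assuming the feature matrix stores one frame per row with columns ordered as Drop1, Gap, droplet spacing, satellite droplet count and blurred satellite droplet count (the column order and function name are assumptions):

```python
import numpy as np

# Drop1, Gap, droplet spacing each 30%; satellite and blurred-satellite counts each 5%.
WEIGHTS = np.array([0.30, 0.30, 0.30, 0.05, 0.05])

def equivalent_feature(feature_matrix: np.ndarray) -> np.ndarray:
    """Min-max normalize each feature column, then return the weighted sum per frame."""
    x = feature_matrix.astype(np.float64)
    x_min, x_max = x.min(axis=0), x.max(axis=0)
    span = np.where(x_max > x_min, x_max - x_min, 1.0)   # guard constant columns
    return ((x - x_min) / span) @ WEIGHTS
```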
Step S4 specifically comprises: from the mean and standard deviation SD of the equivalent feature, the center line, upper quality control limit, lower quality control limit, upper warning limit and lower warning limit are calculated and plotted, and a matching quality control rule is established. In the L-J quality control chart, if the equivalent feature points exceeding the quality control limits account for more than 1‰ of the droplet sampling video frames, the conclusion is that the droplet generation quality stability of the current system is out of control; if the equivalent feature points exceeding the warning limits but not the quality control limits account for more than 1‰ of the frames, a warning about the droplet generation quality stability of the current system is given; if all equivalent feature points lie within the warning limits, the droplet generation quality stability of the current system is in control.
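A sketch of the chart limits and the 1‰ rules is given below; the ±3SD control limits and ±2SD warning limits are the usual Levey-Jennings convention and are an assumption here, since the text names the limits without giving their multipliers.

```python
import numpy as np

def evaluate_control_chart(eq: np.ndarray) -> str:
    """Apply the single-value L-J rules to the equivalent-feature series."""
    mean, sd = eq.mean(), eq.std(ddof=1)
    ucl, lcl = mean + 3 * sd, mean - 3 * sd      # quality control limits (assumed +/- 3 SD)
    uwl, lwl = mean + 2 * sd, mean - 2 * sd      # warning limits (assumed +/- 2 SD)

    n = eq.size
    beyond_control = int(np.sum((eq > ucl) | (eq < lcl)))
    beyond_warning = int(np.sum((eq > uwl) | (eq < lwl))) - beyond_control

    if beyond_control / n > 1e-3:
        return "out of control"   # droplet generation stability lost
    if beyond_warning / n > 1e-3:
        return "warning"          # stability degrading, inspect the system
    return "in control"           # all points within the warning limits
```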
The technical scheme of the application is described below by two specific examples:
example 1
In this embodiment, microspheres are used as the sample: droplets are generated under the drive of the sheath fluid flow at a suitable oscillation frequency, and visual monitoring of droplet generation stability is carried out. The sheath fluid flow rate is controlled at 3 m/s, the nozzle diameter of the flow cell is 100 μm, the droplet oscillation frequency is 30 kHz, and the strobe light source frequency is set to 30 kHz; the camera is set to auto-exposure mode with a resolution of 1612 × 384 pixels and an imaging frame rate of 30 FPS, and the sampled video contains 1000 frames of images.
In this embodiment, the inference and prediction computation is configured as: Intel(R) Core(TM) i7 CPU @ 2.80 GHz, 16 GB memory, implemented in Visual Studio 2019 with C++ programming; the image semantic segmentation computation is configured as: Intel(R) Xeon(R) Gold 5217 CPU @ 3.0 GHz, 128 GB memory; RTX 3090 Ti GPU with 24 GB video memory; Baidu PaddleSeg framework, Python 3.9.5 programming language.
First, a droplet image dataset was constructed by camera shooting, containing 452 images in total; the images were labelled with the open-source labelling tool Labelme, and the dataset distribution is shown in Table 1.
The image semantic segmentation is based on PP-LiteSeg-T1, a lightweight model proposed by Juncai Peng and co-workers in 2022 specifically for real-time semantic segmentation tasks. To reduce the computational burden of the decoder, the model proposes a Flexible and Lightweight Decoder (FLD). In addition, to enhance feature representation capability, PP-LiteSeg-T1 introduces a Unified Attention Fusion Module (UAFM). The model also introduces a Simple Pyramid Pooling Module (SPPM) to aggregate global context information effectively at low computational cost. The network structure of PP-LiteSeg-T1 is shown in FIG. 5.
Model training uses fine-tuning after initialization with pre-trained weights; the weight decay coefficient is set to 0.00005, the batch size to 48, and the learning rate to 0.005. The image size (resolution) input to the model is 1024 × 512 pixels, and the number of training iterations is set to 8000. The specific training hyper-parameter values are shown in Table 2.
Based on the test set, the detection performance of PP-LiteSeg-T1, PP-LiteSeg-B1, STDC1-Seg50 and STDC2-Seg50 was evaluated, as shown in Table 3; the evaluation indices are accuracy (Acc), mean intersection over union (mIoU), Kappa coefficient and Dice coefficient.
The inference times of PP-LiteSeg-T1, PP-LiteSeg-B1, STDC1-Seg50 and STDC2-Seg50 in GPU, CPU and OpenVINO formats are compared in Table 4.
The trained and converted semantic segmentation model is invoked to generate binary images, as shown in FIG. 6.
The droplet root, normal droplets and satellite droplets are identified on each binary image, the droplet centroids are calculated after satellite droplet merging, and the 5 features Drop1, Gap, droplet spacing, number of satellite droplets and number of blurred satellite droplets are given, as shown in fig. 7.
A corresponding feature matrix is generated for the whole droplet sampling video, and dispersion standardization is applied to each type of feature.
Then, according to the relative importance of the features, the Drop1, Gap and droplet pixel spacing features are each weighted 30% and the satellite droplet count and blurred satellite droplet count features are each weighted 5%; the equivalent feature of droplet generation quality is calculated, and the mean and standard deviation of the droplet equivalent feature are given.
Then, from the mean and standard deviation SD of the equivalent feature, the center line, upper quality control limit, lower quality control limit, upper warning limit and lower warning limit are calculated and plotted, and the single-value L-J quality control chart curve is drawn; the ordinate is dimensionless after normalization and the abscissa is the image number, as shown in fig. 8.
Finally, according to the established quality control rule, all 1000 sampled equivalent feature points in the L-J quality control chart lie within the warning limits, so the droplet generation quality stability of the system is in control. As can be seen from fig. 8, in this embodiment the droplet generation quality stability of the system is in control.
Example 2
The initialization configuration of the droplet generation and visual monitoring apparatus is identical to that of embodiment 1 described above.
During continuous droplet sampling and imaging, fluctuation disturbances are applied to the droplet state by manually varying factors such as the liquid flow rate and the droplet oscillation frequency; as in embodiment 1, the sampled video contains 1000 frames of droplet images.
The procedures of droplet sampling video analysis, image semantic segmentation, extraction and dispersion standardization of the 5 features, equivalent feature generation, and calculation of the equivalent feature mean and standard deviation are identical to those of embodiment 1.
In the L-J quality control chart generated for this embodiment, shown in FIG. 9, 7 of the 1000 sampled equivalent feature points exceed the quality control limits, and 2 lie between the upper warning limit and the upper quality control limit. According to the established quality control rule, if the equivalent feature points exceeding the quality control limits account for more than 1‰ of the droplet sampling video frames, the conclusion is that the droplet generation quality stability of the current system is out of control. Therefore, in this embodiment, the droplet generation quality stability of the system is not in control.
As is clear from the above descriptions of embodiment 1 and embodiment 2, the droplet generation quality monitoring method of this application is intuitive and effective, and the monitoring results are representative.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by software plus a general-purpose hardware platform, or by hardware. Based on this understanding, the foregoing technical solutions may be embodied, essentially or in part, in the form of a software product, which may be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disk, and which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods of the various embodiments or of parts of the embodiments.
The application also relates to a liquid drop monitoring system of the cell sorter, which is used for implementing the liquid drop monitoring method of the cell sorter, and comprises a processor, a memory and a camera, wherein the camera and the memory are in communication connection with the processor, and the memory stores instructions which can be executed by the processor so as to realize the liquid drop monitoring method of the cell sorter.
In particular, the system includes one or more processors and memory.
The processor and the memory may be connected by a bus or in other ways.
The processor is used to implement various control logic for the system, which may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a single-chip, ARM (Acorn RISC Machine) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. Also, the processor may be any conventional processor, microprocessor, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The memory, as a non-volatile computer-readable storage medium, is used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions corresponding to the droplet monitoring method of the cell sorter in the embodiments of the present application. By running the non-volatile software programs, instructions and units stored in the memory, the processor executes the various functional applications and data processing of the system, i.e. implements the droplet monitoring method of the cell sorter in the above method embodiments.
The memory comprises a program storage area and a data storage area: the program storage area can store an operating system and an application program required by at least one function, and the data storage area can store data created according to the use of the system, and the like. In addition, the memory includes high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory includes memory located remotely from the processor, and such remote memory is connected to the system via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more units are stored in the memory and, when executed by the one or more processors, perform the visual monitoring method for droplet generation stability of the flow cytometer in any of the method embodiments described above.
The application also relates to a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the droplet monitoring method of a cell sorter as described above.
The non-volatile storage medium can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The disclosed memory components or memories of the operating environments described herein are intended to comprise one or more of these and/or any other suitable types of memory.
The foregoing examples illustrate only a few embodiments of the application; they are described in detail but are not therefore to be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several modifications and improvements without departing from the concept of the present application, and such modifications and improvements, being equivalent to the above embodiments in the essential technology of the application, all fall within the protection scope of the application.

Claims (14)

1. A droplet monitoring method of a cell sorter, characterized by comprising the following steps:
S1: setting camera parameters and droplet parameters, generating droplets, and collecting a plurality of images of each droplet;
S2: segmenting each image to generate a binary droplet image, identifying the droplet root, normal droplets and satellite droplets according to the area and position of each droplet region, extracting a plurality of quantitative features characterizing droplet stability from the identified droplet root, normal droplet and satellite droplet regions, and generating a feature matrix of the droplet images;
S3: normalizing each feature of the feature matrix, calculating an equivalent feature of droplet stability together with its mean and variance, and generating a quality control chart;
S4: giving a droplet quality evaluation result according to the quality control chart.
2. The droplet monitoring method of a cell sorter according to claim 1, characterized in that: in step S1, setting the camera parameters and droplet parameters specifically comprises: setting the fluid speed and droplet oscillation frequency of the droplet generating device, setting the exposure time, image resolution and shooting frame rate of the camera, and setting the strobe frequency of the illumination light source according to the droplet oscillation frequency.
3. The droplet monitoring method of a cell sorter according to claim 2, characterized in that: the droplet generation oscillation frequency is f_drop and the strobe frequency of the illumination source is f_light; the oscillation frequency f_drop and the strobe frequency f_light have a phase difference of 0.
4. The droplet monitoring method of a cell sorter according to claim 1, characterized in that: step S1 further comprises setting the region of interest captured by the camera, specifically: the vertical center line of the falling droplet sequence is the vertical center line of the image, the vertical resolution of the image covers a plurality of normal droplets, and the number of pixels across the width is larger than the cell/particle width, so as to improve the image data processing speed.
5. The droplet monitoring method of a cell sorter according to claim 1, characterized in that: in step S1, collecting a plurality of images of each droplet specifically comprises: capturing droplet images at a given time resolution to generate a droplet monitoring sampling video, and analyzing the droplet images frame by frame according to the imaging frame rate set for the camera.
6. The droplet monitoring method of a cell sorter according to claim 1, characterized in that step S2 specifically comprises the following steps:
S21: manually marking droplet regions on droplet images to construct a droplet image training set, validation set and test set;
S22: training a semantic segmentation model for droplet images on the labelled droplet image dataset, optimizing the precision of the trained semantic segmentation model, and converting the model weight file;
S23: generating a binary image for each droplet image with the optimized semantic segmentation model;
S24: identifying the droplet root, normal droplets and satellite droplets according to the area and position of each droplet region;
S25: calculating, from the identified droplet root, normal droplet and satellite droplet regions, 5 quantitative features characterizing stable droplet generation, namely Drop1, Gap, droplet spacing, number of satellite droplets and number of blurred satellite droplets, where Drop1 is the longitudinal pixel coordinate of the centroid of the first droplet in the droplet image after satellite droplet merging, and Gap is the blank longitudinal pixel distance between the lower boundary of the droplet root region and the upper boundary of the Drop1 region after satellite droplet merging;
S26: generating a feature matrix of the plurality of droplet images.
7. The droplet monitoring method of a cell sorter according to claim 6, characterized in that: in step S24, identifying the droplet root specifically comprises: traversing the droplet foreground regions on the binary image and calculating the pixel area and centroid position coordinates of each foreground region; a region is judged to be the droplet root when the ratio of its pixel area to the total foreground pixel area of the image is greater than 0.45 and less than 0.80 and its geometric center has the smallest longitudinal coordinate among all droplet foreground regions.
8. The method for monitoring droplets of a cell sorter according to claim 7, characterized in that: in step S24, normal droplets and satellite droplets are identified as follows: after removing the droplet root region from the binary image, half of the sum of the maximum and minimum areas among the remaining droplet regions is set as the satellite droplet judgment threshold; a remaining droplet region whose pixel area is larger than the threshold is judged to be a normal droplet, and one whose pixel area is smaller than or equal to the threshold is judged to be a satellite droplet.
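The area threshold of claim 8 reduces to a few lines; this sketch assumes the areas of the remaining regions (root removed) are already available as a list:

```python
def classify_remaining_regions(areas):
    """Split remaining regions (root removed) into normal and satellite
    droplets (claim 8): threshold = (max area + min area) / 2.

    Returns (normal_indices, satellite_indices) into `areas`.
    """
    if not areas:
        return [], []
    threshold = 0.5 * (max(areas) + min(areas))
    normal = [i for i, a in enumerate(areas) if a > threshold]
    satellite = [i for i, a in enumerate(areas) if a <= threshold]
    return normal, satellite
```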
9. The method for monitoring droplets of a cell sorter according to claim 7, characterized in that: in step S24, when the ratio of the pixel area of the droplet region to the total foreground pixel area of the image is less than or equal to 0.45 or greater than or equal to 0.80, an abnormality warning is issued.
10. The method for monitoring droplets of a cell sorter according to claim 7, characterized in that: when calculating the centroid coordinates, nearby satellite droplets must be merged according to their attribution, which is determined as follows: traversing the longitudinal centroid coordinates of the satellite droplets; for each satellite droplet, calculating the Cartesian Euclidean distance to the nearest normal droplet above it and to the nearest normal droplet below it, and attributing the satellite droplet to whichever is closer; when there is no normal droplet above a satellite droplet, it is attributed to the nearest normal droplet below it, and when there is no normal droplet below it, it is attributed to the nearest normal droplet above it.
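One possible reading of the attribution rule of claim 10, assuming normal-droplet centroids sorted from top to bottom and image coordinates with y increasing downward:

```python
import math

def attribute_satellites(normal_centroids, satellite_centroids):
    """Attribute each satellite droplet to a neighbouring normal droplet.

    normal_centroids    : list of (cx, cy), sorted by cy (top to bottom)
    satellite_centroids : list of (cx, cy)
    Returns one index into normal_centroids per satellite (None if there
    are no normal droplets at all).
    """
    owners = []
    for sx, sy in satellite_centroids:
        above = [(i, c) for i, c in enumerate(normal_centroids) if c[1] <= sy]
        below = [(i, c) for i, c in enumerate(normal_centroids) if c[1] > sy]
        candidates = []
        if above:
            candidates.append(above[-1])   # nearest normal droplet above
        if below:
            candidates.append(below[0])    # nearest normal droplet below
        if not candidates:
            owners.append(None)
            continue
        owner, _ = min(candidates,
                       key=lambda ic: math.hypot(ic[1][0] - sx, ic[1][1] - sy))
        owners.append(owner)
    return owners
```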
11. The method for monitoring droplets of a cell sorter according to claim 7, characterized in that: the droplet is treated as a solid of revolution with uniform density formed about the droplet center line, and the droplet foreground region in the binary image is a two-dimensional section of this solid of revolution with a clear boundary, so that the longitudinal pixel coordinate of the droplet centroid is y̅ = Σ_j (y_j · d_j²) / Σ_j (d_j²), where y_j is the longitudinal coordinate of the j-th volume element of the solid of revolution, i.e. its distance from the reference line, and d_j is the chord length of the cross section of that volume element.
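Assuming the d_j²-weighted formula reconstructed above (the claim's own formula is an image and is not reproduced in this text), the centroid can be computed directly from the region mask; the function name and the boolean-mask representation are assumptions:

```python
import numpy as np

def revolved_centroid_y(region_mask):
    """Longitudinal centroid of one droplet region, treated as a uniform
    solid of revolution about the vertical center line (claim 11).

    Each occupied image row j contributes a slice whose chord length d_j is
    the diameter of the solid at that height, so its volume is proportional
    to d_j**2; the centroid is the d_j**2-weighted mean of the row
    coordinates y_j.  `region_mask` is a boolean array containing a single,
    non-empty droplet region.
    """
    rows = np.where(region_mask.any(axis=1))[0]          # occupied rows y_j
    d = region_mask[rows].sum(axis=1).astype(float)      # chord length d_j per row
    return float(np.sum(rows * d ** 2) / np.sum(d ** 2))
```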
12. The method for monitoring droplets of a cell sorter according to claim 7, characterized in that: in step S3, the equivalent characteristic for evaluating droplet stability is calculated as follows: the equivalent characteristic of droplet stability combines Drop1, Gap and droplet pixel spacing, each weighted 30%, and the satellite droplet count and fuzzy droplet count, each weighted 5%; the mean x̄ and standard deviation SD of the droplet equivalent characteristic are obtained, and the center line, the upper and lower quality control limits, and the upper and lower warning limits are respectively calculated and plotted to generate a quality control chart.
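A sketch of the weighted stability equivalent and its control-chart lines. Combining the features by a simple weighted sum and using the ±2·SD warning / ±3·SD control multipliers are conventional Shewhart-chart assumptions; the claim itself specifies only the weights.

```python
import numpy as np

# Weights from claim 12: Drop1, Gap and droplet spacing at 30 % each,
# satellite droplet count and fuzzy droplet count at 5 % each.
WEIGHTS = np.array([0.30, 0.30, 0.30, 0.05, 0.05])

def control_chart(feature_matrix, k_warn=2.0, k_ctrl=3.0):
    """Weighted stability equivalent and control-chart lines.

    feature_matrix: one row per image, columns ordered as in claim 6
    [Drop1, Gap, spacing, satellite count, fuzzy count].  The +/-2*SD
    warning limits and +/-3*SD control limits are conventional choices,
    not values stated in the claim.
    """
    equivalent = feature_matrix @ WEIGHTS                # one value per image
    mean, sd = equivalent.mean(), equivalent.std(ddof=1)
    return {
        "equivalent": equivalent,
        "CL": mean,                                      # center line
        "UCL": mean + k_ctrl * sd, "LCL": mean - k_ctrl * sd,  # control limits
        "UWL": mean + k_warn * sd, "LWL": mean - k_warn * sd,  # warning limits
    }
```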
13. A droplet monitoring system of a cell sorter for performing the droplet monitoring method of the cell sorter of any of claims 1 to 12, characterized in that: the system comprises a processor, a memory and a camera, the camera and the memory being communicatively connected to the processor, and the memory storing instructions executable by the processor to implement the droplet monitoring method of the cell sorter of any of claims 1 to 12.
14. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the droplet monitoring method of a cell sorter according to any of claims 1 to 12.
CN202311362902.5A 2023-10-20 2023-10-20 Liquid drop monitoring method, system and storage medium of cell sorter Active CN117095362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311362902.5A CN117095362B (en) 2023-10-20 2023-10-20 Liquid drop monitoring method, system and storage medium of cell sorter

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311362902.5A CN117095362B (en) 2023-10-20 2023-10-20 Liquid drop monitoring method, system and storage medium of cell sorter

Publications (2)

Publication Number Publication Date
CN117095362A true CN117095362A (en) 2023-11-21
CN117095362B CN117095362B (en) 2024-02-02

Family

ID=88772231

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311362902.5A Active CN117095362B (en) 2023-10-20 2023-10-20 Liquid drop monitoring method, system and storage medium of cell sorter

Country Status (1)

Country Link
CN (1) CN117095362B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903296A (en) * 2019-02-15 2019-06-18 领航基因科技(杭州)有限公司 A kind of digital pcr drop detection method based on LBP-Adaboost algorithm
CN114596521A (en) * 2022-02-10 2022-06-07 合肥师范学院 Venous transfusion monitoring method and system based on vision measurement
CN115457041A (en) * 2022-11-14 2022-12-09 安徽乾劲企业管理有限公司 Road quality identification and detection method

Also Published As

Publication number Publication date
CN117095362B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN112418117B (en) Small target detection method based on unmanned aerial vehicle image
Molina-Cabello et al. Vehicle type detection by ensembles of convolutional neural networks operating on super resolved images
Kharchenko et al. Detection of airplanes on the ground using YOLO neural network
US20090297016A1 (en) Classifying image features
CN109815945B (en) Respiratory tract examination result interpretation system and method based on image recognition
CN111832608B (en) Iron spectrum image multi-abrasive particle identification method based on single-stage detection model yolov3
CN103344583B (en) A kind of praseodymium-neodymium (Pr/Nd) component concentration detection system based on machine vision and method
US8060353B2 (en) Flow cytometer remote monitoring system
CN112348034A (en) Crane defect detection system based on unmanned aerial vehicle image recognition and working method
JPH10318904A (en) Apparatus for analyzing particle image and recording medium recording analysis program therefor
CN105931225A (en) Method for analyzing crystal growth shape and size distribution based on real-time image detection technology
CN112036384B (en) Sperm head shape recognition method, device and equipment
Daood et al. Sequential recognition of pollen grain Z-stacks by combining CNN and RNN
CN106933861A (en) A kind of customized across camera lens target retrieval method of supported feature
Campbell et al. The Prince William Sound Plankton Camera: a profiling in situ observatory of plankton and particulates
CN117095362B (en) Liquid drop monitoring method, system and storage medium of cell sorter
CN117428199B (en) Alloy powder atomizing device and atomizing method
Chaussonnet et al. Towards deepspray: Using convolutional neural network to post-process shadowgraphy images of liquid atomization
Sosa-Trejo et al. Vision-based techniques for automatic marine plankton classification
CN114782326A (en) System for classifying cervical cell images
Hayali et al. Transfer learning on semantic segmentation for sugar crystal analysis
Chen et al. Real-time instance segmentation of metal screw defects based on deep learning approach
Alawi et al. Performance Analysis of Deep Dense Neural Networks on Traffic Signs Recognition
Wang et al. A Visual Monitoring Approach to Droplet Stability Generation for Flow Cell Sorter
Lu et al. Crystal morphology monitoring based on in-situ image analysis of L-glutamic acid crystallization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant