CN115588191A - Cell sorting method and system based on an image acousto-fluidic cell sorting model


Info

Publication number
CN115588191A
CN115588191A (application CN202211121058.2A)
Authority
CN
China
Prior art keywords
image
cell
images
prediction
cell sorting
Prior art date
Legal status
Pending
Application number
CN202211121058.2A
Other languages
Chinese (zh)
Inventor
周甜
苑金金
胡聪
代珺
朱爱军
许川佩
Current Assignee
Guilin University of Electronic Technology
Guilin University of Aerospace Technology
Original Assignee
Guilin University of Electronic Technology
Guilin University of Aerospace Technology
Priority date
Filing date
Publication date
Application filed by Guilin University of Electronic Technology and Guilin University of Aerospace Technology
Priority to CN202211121058.2A
Publication of CN115588191A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks


Abstract

The invention discloses a cell sorting method, a system, and a computer-readable storage medium based on an image acousto-fluidic cell sorting model, where the model comprises a cell image recognition module and a cell ejection module. The cell sorting method comprises the following steps: acquiring an original cell image set and a predetermined sample cell image set; inputting the sample cell image set into the cell image recognition module for feature extraction to obtain the image category information of the cell images; inputting the image category information and the sample cell image set into the image acousto-fluidic cell sorting model for training; inputting the original cell images into the trained model for image prediction and determining the target images corresponding to each image category; and ejecting the cells corresponding to the target images to a preset collection area via the cell ejection module. Embodiments of the invention can classify cells automatically without labels, thereby enabling the purification and collection of target cells.

Description

Cell sorting method and system based on an image acousto-fluidic cell sorting model
Technical Field
The invention relates to the technical field of cell observation, and in particular to a cell sorting method and system based on an image acousto-fluidic cell sorting model, and a computer-readable storage medium.
Background
With the development of cell therapy and gene therapy, cell sequencing and cell culture place increasingly high demands on the purity of harvested cells. Cell sorting separates a specific cell population from a mixture of multiple cell populations according to the characteristics of the cells. Traditional separation methods include fluorescence-activated cell sorting (FACS) and immunomagnetic cell sorting. In FACS, the cells to be examined are stained with a specific fluorescent dye, placed in a sample tube, and driven by gas pressure into a flow chamber filled with sheath fluid; the cells therefore require complex labeling beforehand, which affects cell viability. Immunomagnetic cell sorting labels target cells with magnetic-bead-conjugated antibodies, which move under the force of an applied magnetic field and are thereby separated from cells not bound to beads; although sorting is fast, this method can only sort specific cell types. Both methods thus often require cumbersome pre-processing and additional signal tags to identify the target cells, reducing cell viability.
Disclosure of Invention
The invention aims to solve at least one of the technical problems in the prior art, and provides a cell sorting method and system based on an image acousto-fluidic cell sorting model, and a computer-readable storage medium, which can sort cells automatically without labels, thereby enabling the purification and collection of target cells.
In a first aspect, the present invention provides a cell sorting method based on an image acousto-fluidic cell sorter (IACS) model, where the image acousto-fluidic cell sorting model includes a cell image recognition module and a cell ejection module, and the cell sorting method comprises:
acquiring an original cell image set and a predetermined sample cell image set, wherein the sample cell image set comprises a plurality of cell images carrying detection identifiers, and the original cell image set comprises a plurality of original cell images;
inputting the sample cell image set into the cell image recognition module, so that the cell image recognition module performs feature extraction on the plurality of cell images in the sample cell image set based on a preset convolutional neural network and the detection identifiers to obtain the image category information of the cell images;
inputting the image category information and the sample cell image set into the image acousto-fluidic cell sorting model for training;
inputting the original cell images into the trained image acousto-fluidic cell sorting model for image prediction, determining the image categories of the original cell images, screening the original cell image set according to the image categories, and determining the target images corresponding to each image category;
and ejecting the cells corresponding to the target images to a preset collection area via the cell ejection module.
The cell sorting method based on the image acousto-fluidic cell sorting model provided by the embodiments of the invention has at least the following beneficial effects. First, an original cell image set and a predetermined sample cell image set carrying detection identifiers are obtained, which facilitates training of the image acousto-fluidic cell sorting model. The sample cell image set is input into the cell image recognition module of the model, which performs feature extraction on the plurality of cell images based on a preset convolutional neural network and the detection identifiers to obtain the image category information of the cell images, so that the category of each cell image is determined accurately. The image category information and the sample cell image set are then input into the image acousto-fluidic cell sorting model for training, improving the model's classification accuracy. Next, the original cell images are input into the trained model for image prediction; their image categories are determined, and the original cell image set is screened by category to obtain the target images, improving the accuracy of cell sorting. Finally, the cells corresponding to the target images are ejected to a preset collection area by the cell ejection module, achieving label-free sorting, purification, and collection of the target cells.
According to some embodiments of the invention, the set of sample cell images is obtained by:
obtaining a cell mixed sample;
carrying out cell separation on the cell mixed sample based on the acoustic radiation force to obtain a sample cell set;
collecting images of the sample cell set to obtain cell images;
performing size adjustment on the cell image;
and performing image labeling on the resized cell images to obtain the sample cell image set carrying the detection identifiers. This facilitates subsequent training of the image acousto-fluidic cell sorting model, while the use of acoustic radiation force achieves accurate, non-contact cell separation and improves separation efficiency.
According to some embodiments of the invention, the cell image recognition module comprises a feature extractor, and the feature extraction performed by the cell image recognition module on the cell images in the sample cell image set based on the preset convolutional neural network and the detection identifiers to obtain the image category information of the cell images comprises:
inputting the cell images into the convolutional layers of the preset convolutional neural network for encoding to obtain the image features of the plurality of cell images;
inputting the image features of all the cell images into the feature extractor, so that the feature extractor performs dimension reduction on the cell images according to the detection identifiers and the convolution kernels, and performs feature prediction on the dimension-reduced image features to obtain a prediction result;
and obtaining the image category information of the cell images according to the prediction result and the image features, which facilitates the subsequent prediction of the categories of the original images and improves prediction accuracy.
According to some embodiments of the invention, the preset convolutional neural network comprises channel dimensions and feature layers, and the inputting of the cell images into the convolutional layers of the preset convolutional neural network for encoding to obtain the image features of the plurality of cell images comprises:
inputting the cell images into the preset convolutional neural network, so that the network up-samples the cell images to obtain a plurality of prediction feature maps;
performing channel splicing on the channel dimensions to obtain prediction branches;
and performing tensor splicing on the plurality of prediction feature maps based on the prediction branches and the feature layers to obtain the image features of the plurality of cell images, improving the efficiency of feature extraction.
According to some embodiments of the present invention, the tensor splicing of the plurality of prediction feature maps based on the prediction branches and the feature layers to obtain the image features of the plurality of cell images comprises:
inputting the prediction feature maps into the feature layers for calculation to obtain predicted feature values;
and integrating the predicted feature values along the channel dimensions of the prediction branches to obtain the image features of the cell images, improving the accuracy of image category judgment.
According to some embodiments of the invention, the detection identifier includes the position coordinate information and preset category information of the cells in the cell image, and the inputting of the image features of all the cell images into the feature extractor, so that the feature extractor performs dimension reduction on the cell images according to the detection identifiers and the convolutional layers and performs feature prediction on the dimension-reduced image features to obtain a prediction result, comprises:
inputting the image features of the cell images into the feature extractor, so that the feature extractor generates target anchor boxes carrying the detection identifiers on the cell images according to the position coordinates and the preset category information;
and performing dimension reduction on the cell images based on the convolutional layers, and performing feature prediction on the dimension-reduced cell images according to the target anchor boxes to obtain the prediction result, improving the accuracy of feature prediction on the cell images.
According to some embodiments of the invention, the inputting of the image category information and the sample cell image set into the image acousto-fluidic cell sorting model for training comprises:
inputting the image category information and the sample cell image set into the image acousto-fluidic cell sorting model, so that the model calculates a confidence value from the position coordinate information and the preset category information;
normalizing the confidence value according to the target anchor boxes to obtain a confidence index value;
comparing the confidence index value with a preset threshold to obtain a comparison result;
and training the image acousto-fluidic cell sorting model according to the comparison result, improving the model's ability to predict cell categories and enabling accurate prediction on cell images.
According to some embodiments of the present invention, the inputting of the original cell images into the trained image acousto-fluidic cell sorting model for image prediction and the determining of the image categories of the original cell images comprises:
inputting the original cell images into the trained image acousto-fluidic cell sorting model, so that the cell image recognition module predicts the original cell images to obtain their predicted position information and predicted category information;
determining prediction anchor boxes for the original cell images according to the predicted position information and predicted category information, and obtaining prediction index values from the prediction anchor boxes;
and determining the image categories of the original cell images according to the prediction index values, enabling accurate category judgment and avoiding misclassification.
In a second aspect, the present invention provides a cell sorting system based on an image acousto-fluidic cell sorting model, comprising:
a sample acquisition module, configured to acquire an original cell image set and a predetermined sample cell image set, wherein the sample cell image set comprises a plurality of cell images carrying detection identifiers, and the original cell image set comprises a plurality of original cell images;
a cell image recognition module, configured to receive the sample cell image set and perform feature extraction on the plurality of cell images in the sample cell image set based on a preset convolutional neural network and the detection identifiers to obtain the image category information of the cell images;
a model training module, configured to input the image category information and the sample cell image set into the image acousto-fluidic cell sorting model for training;
an image determination module, configured to input the original cell images into the trained image acousto-fluidic cell sorting model for image prediction, determine the image categories of the original cell images, screen the original cell image set according to the image categories, and determine the target images corresponding to each image category;
and a cell ejection module, configured to eject the cells corresponding to the target images to a preset collection area.
In a third aspect, the present invention provides a computer-readable storage medium storing computer-executable instructions that cause a computer to perform the cell sorting method based on the image acousto-fluidic cell sorting model according to the first aspect.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the present invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention, together with the description serve to explain the principles of the invention, and do not constitute a limitation thereof.
Fig. 1 is a schematic architecture diagram of a sorting system based on an image acousto-fluidic cell sorting model according to an embodiment of the present invention;
FIG. 2 is a flow chart of a cell sorting method based on an image acousto-fluidic cell sorting model according to an embodiment of the present invention;
FIG. 3 is a flow chart of acquiring a sample cell image set according to one embodiment of the present invention;
FIG. 4 is a flowchart of a specific method of step S200 in FIG. 2;
FIG. 5 is a flowchart of a detailed method of step S210 in FIG. 4;
fig. 6 is a flowchart of a detailed method of step S213 in fig. 5;
FIG. 7 is a flowchart of a detailed method of step S220 in FIG. 4;
FIG. 8 is a flowchart of a specific method of step S300 in FIG. 2;
FIG. 9 is a flowchart of a specific method of step S400 in FIG. 2;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
It should be noted that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different from that in the flowcharts. The terms first, second and the like in the description and in the claims, as well as in the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The invention provides a cell sorting method, a system, and a computer-readable storage medium based on an image acousto-fluidic cell sorting model. First, an original cell image set and a predetermined sample cell image set carrying detection identifiers are obtained, facilitating training of the image acousto-fluidic cell sorting model. The sample cell image set is input into the cell image recognition module of the model, which performs feature extraction on the plurality of cell images based on a preset convolutional neural network and the detection identifiers to obtain the image category information of the cell images, so that the category information is determined accurately and the cell images are classified correctly. The image category information and the sample cell image set are then input into the image acousto-fluidic cell sorting model for training, improving the model's classification accuracy. The original cell images are input into the trained model for image prediction, their image categories are determined, and the original cell image set is screened by category to obtain the target images corresponding to each image category. Finally, the cell ejection module ejects the cells corresponding to the target images to a preset collection area, achieving automatic, label-free sorting and collection of the target cells.
The embodiments of the present invention will be further explained with reference to the drawings.
Fig. 1 is a schematic diagram of an architecture of a sorting system based on an image acousto-fluidic cell sorting model according to an embodiment of the present invention.
In the example of fig. 1, the sorting system includes, but is not limited to, a sample acquisition module 100, a cell image recognition module 200, a model training module 300, an image determination module 400, and a cell ejection module 500.
In some embodiments, the sample acquisition module 100 is configured to acquire an original cell image set and a predetermined sample cell image set, where the sample cell image set comprises a plurality of cell images carrying detection identifiers and the original cell image set comprises a plurality of original cell images. Each image in both sets is captured at 20× magnification under a microscope by a CCD (Charge-Coupled Device) camera, imaging the cells of a cell mixture flowing through a polydimethylsiloxane (PDMS) chip.
It should be noted that the cell mixture may contain multiple types of cells, for example breast cancer cells, prostate cells, and platelets; this embodiment is not particularly limited.
In some embodiments, the cell image recognition module 200 is configured to receive the sample cell image set and perform feature extraction on the plurality of cell images based on a preset convolutional neural network and the detection identifiers to obtain the image category information of the cell images. Obtaining this information accurately facilitates subsequent training of the image acousto-fluidic cell sorting model and improves its classification capability.
The cell image recognition module can recognize and classify the cell images based on deep learning.
In some embodiments, the model training module 300 is configured to input the image category information and the sample cell image set into the image acousto-fluidic cell sorting model for training, so as to improve the performance of image classification of the image acousto-fluidic cell sorting model and enhance the accuracy of image classification.
In some embodiments, the image determination module 400 is configured to input the original cell images into the trained image acousto-fluidic cell sorting model for image prediction, determine the image categories of the original cell images, screen the original cell image set according to the image categories, and determine the target images corresponding to each category, achieving accurate determination of the target images and label-free identification of cell types.
It should be noted that the image determination module 400 recognizes images using a deep network with an improved residual network structure; the deep network fuses features extracted at different network levels, such as the size, shape, and contour of the cells, improving the accuracy of image category determination.
In some embodiments, the cell ejection module 500 is configured to eject the cells corresponding to the target images to a preset collection area, completing the sorting process and achieving the separation and purification of the specific cells.
It should be noted that the cell ejection module is a focused interdigital transducer (FIDT) that generates a focused traveling surface acoustic wave (FTSAW). Driven by a high-voltage pulse signal emitted by a signal generator and amplified by a power amplifier, the FIDT produces an acoustic radiation force (ARF) that drives the cells along the direction of the force, thereby sorting them.
It is understood that, compared with straight interdigital transducers (SIDTs), an FIDT produces surface acoustic waves of higher intensity and narrower beam width. The higher energy intensity provides a larger driving force for cell sorting, increasing sorting efficiency.
The sorting system and the application scenario described in the embodiment of the present invention are for more clearly illustrating the technical solution of the embodiment of the present invention, and do not limit the technical solution provided in the embodiment of the present invention.
Based on the sorting system, various embodiments of the cell sorting method based on the image acoustic flow control cell sorting model are provided below.
As shown in fig. 2, fig. 2 is a flowchart of a cell sorting method based on an image acoustic-fluidic cell sorting model according to an embodiment of the present invention, and the cell sorting method based on the image acoustic-fluidic cell sorting model includes, but is not limited to, steps S100 to S500.
It should be noted that the image acousto-fluidic cell sorting model includes a cell image recognition module and a cell ejection module.
Step S100: acquiring an original cell image set and a predetermined sample cell image set;
it should be noted that the sample cell image set includes a plurality of cell images carrying the detection markers, and the raw cell image set includes a plurality of raw cell images.
In some embodiments, the images in the original cell image set and the sample cell image set are obtained by taking an image of the cells in the cell mixture solution flowing in the PDMS chip under a microscope at 20 times by a CCD camera.
Step S200: inputting the sample cell image set into the cell image recognition module, so that the module performs feature extraction on the plurality of cell images in the sample cell image set based on the preset convolutional neural network and the detection identifiers to obtain the image category information of the cell images;
In some embodiments, the sample cell image set is input into the cell image recognition module, so that the module performs feature extraction on the cell images based on the preset convolutional neural network, yielding the image categories of the cell images and facilitating subsequent training of the image acousto-fluidic cell sorting model.
It should be noted that the preset convolutional neural network is a trained YOLOv3 (You Only Look Once, version 3) network, and the cell image recognition module recognizes images through a deep network with an improved residual network structure; the deep network fuses features such as the size, shape, and contour of the cells extracted at different network levels, improving the accuracy of image category determination.
Step S300: inputting the image category information and the sample cell image set into the image acousto-fluidic cell sorting model for training;
In some embodiments, the image category information and the sample cell image set are input into the image acousto-fluidic cell sorting model for training, improving the model's ability to predict cell images and enabling high-precision sorting of cells.
Step S400: inputting the original cell images into the trained image acousto-fluidic cell sorting model for image prediction, determining the image categories of the original cell images, screening the original cell image set according to the image categories, and determining the target images corresponding to each image category;
In some embodiments, the original cell images are input into the trained image acousto-fluidic cell sorting model for image prediction, so that their image categories can be determined; the images in the original cell image set are then screened by category to determine all target images corresponding to each image category, classifying the original cell image set and facilitating subsequent cell sorting.
Step S500: and ejecting the cells corresponding to the target image to a preset collecting area based on a cell ejection module.
In some embodiments, the cell ejection module of the image acousto-fluidic cell sorting model ejects the cells corresponding to the target images to a preset collection area, enabling label-free sorting of the target cells.
It should be noted that after the cells corresponding to the target images are determined, their positions must be determined; when a cell is determined to be within the preset ejection region, it is ejected to the preset collection area, completing the sorting and collection of the target cells.
In some embodiments, the cell ejection module uses interdigital electrodes to generate the acoustic radiation force for ejecting and collecting cells; by adjusting the design structure and parameters of the electrodes, different types of target cells can be deflected, achieving the sorting effect.
As shown in fig. 3, fig. 3 is a flowchart of acquiring a sample cell image set according to an embodiment of the present invention, and the method of acquiring a sample cell image set includes, but is not limited to, steps S110 to S150.
Step S110: obtaining a cell mixed sample;
step S120: carrying out cell separation on the cell mixed sample based on the acoustic radiation force to obtain a sample cell set;
step S130: collecting images of the sample cell set to obtain cell images;
step S140: performing size adjustment on the cell image;
step S150: and carrying out image marking on the cell image after size adjustment to obtain a sample cell image set carrying the detection mark.
In some embodiments, obtaining the sample cell image set proceeds as follows: first, a mixed cell sample is obtained; the sample is then separated based on acoustic radiation force to obtain the sample cell set, achieving accurate cell separation; images of the sample cell set are acquired under a microscope with a CCD camera to obtain the cell images; the cell images are resized, which speeds up subsequent processing and improves the efficiency of feature extraction; finally, the resized cell images are labeled to obtain the sample cell image set carrying the detection identifiers.
It should be noted that the mixed cell sample may contain cells of different diameters, different types, or different functions; this embodiment is not particularly limited.
As shown in fig. 4, fig. 4 is a flowchart of a specific method of step S200 in fig. 2, and step S200 includes, but is not limited to, steps S210-S230.
It should be noted that the cell image recognition module includes a feature extractor.
Step S210: inputting the cell images into a convolution layer of a preset convolution neural network for coding to obtain image characteristics of a plurality of cell images;
In some embodiments, the cell image is passed through the multiple convolutional layers of the preset convolutional neural network: the first convolutional layer encodes the size and shape information of the cell image to produce a first image feature, this feature is input to the second convolutional layer for further encoding, and so on, layer by layer until the last convolutional layer, yielding the image features of each cell image.
Step S220: inputting the image features of all the cell images into the feature extractor, so that the feature extractor performs dimension reduction on the cell images according to the detection identifiers and the convolution kernels, and performs feature prediction on the dimension-reduced features to obtain a prediction result;
In some embodiments, the image features of all the cell images are input into the feature extractor, so that it performs dimension reduction on the cell images based on its internal convolution kernels and performs feature prediction on the dimension-reduced features according to the detection identifiers to obtain the final prediction result. Dimension reduction converts the cell images from color to black-and-white, i.e. from three channels to one, speeding up feature extraction.
It should be noted that the convolution kernel structure inside the feature extractor is a convolutional set (Conv Set); the feature extractor extracts information such as the contour, roundness, and transparency of the cell image, increasing the accuracy of feature extraction.
Step S230: obtaining the image category information of the cell images according to the prediction result and the image features.
In some embodiments, the image category information of the cell image is finally determined according to the prediction result and the image features, so that the category of the cell image is judged accurately, facilitating subsequent training of the image acousto-fluidic cell sorting model.
As shown in fig. 5, fig. 5 is a flowchart of a specific method of step S210 in fig. 4, and step S210 includes, but is not limited to, steps S211-S213.
It should be noted that the preset convolutional neural network includes a channel dimension and a feature layer.
Step S211: inputting the cell image into the preset convolutional neural network, so that the network up-samples the cell image to obtain a plurality of prediction feature maps;
In some embodiments, the cell image is input into the preset convolutional neural network, so that an upsampling convolution block in the network performs the upsampling operation on the cell image to obtain a plurality of prediction feature maps, reducing the feature dimension while retaining the effective information of the cell image.
Step S212: performing channel splicing on the channel dimensions to obtain prediction branches;
Step S213: performing tensor splicing on the plurality of prediction feature maps based on the prediction branches and the feature layers to obtain the image features of the plurality of cell images.
In some embodiments, the cell image is first input into the preset convolutional neural network, which up-samples it and outputs a plurality of prediction feature maps at different scales. Channel splicing is then performed on the channel dimensions: the input of the current feature layer is integrated with part of the output of the previous feature layer, expanding the channel dimension of the tensor and yielding the prediction branches. Finally, tensor splicing is performed on the prediction feature maps based on the prediction branches and the feature layers to obtain the image features of the plurality of cell images.
In this embodiment, the YOLOv3 backbone network is Darknet-53. The convolution block DBL (Darknetconv2D_BN_Leaky) is the smallest component of YOLOv3, consisting of a two-dimensional convolutional layer, a batch normalization (BN) layer, and a Leaky ReLU activation layer. The residual unit (Res-Unit) allows the network structure to be deeper; it consists of two DBLs with convolution kernels of 1×1 and 3×3, respectively.
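As an illustration of the DBL and Res-Unit structure described above, here is a minimal sketch (PyTorch is an assumption; the patent does not name a framework, and the class names are ours):

```python
import torch
import torch.nn as nn

class DBL(nn.Module):
    """Darknetconv2D_BN_Leaky: Conv2d + BatchNorm + LeakyReLU,
    the smallest YOLOv3 component."""
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, k, stride=1, padding=k // 2, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.1),
        )

    def forward(self, x):
        return self.block(x)

class ResUnit(nn.Module):
    """Residual unit: a 1x1 DBL followed by a 3x3 DBL, plus a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = DBL(ch, ch // 2, 1)
        self.conv2 = DBL(ch // 2, ch, 3)

    def forward(self, x):
        return x + self.conv2(self.conv1(x))
```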
As shown in fig. 6, fig. 6 is a flowchart of the detailed method of step S213 in fig. 5, and step S213 includes, but is not limited to, steps S2131-S2132.
Step S2131: inputting the prediction feature maps into the feature layers for calculation to obtain predicted feature values;
Step S2132: integrating the predicted feature values along the channel dimensions of the prediction branches to obtain the image features of the cell images.
In some embodiments, the prediction feature map is input into the feature layer for calculation, so that the whole prediction feature map is evaluated and its targets are accurately distinguished from the background. The feature layer detects the prediction feature map based on the preset convolutional neural network, divides the whole image into regions, and computes a predicted feature value within each region; the predicted feature values are then integrated along the channel dimensions of the prediction branches to obtain the image features of the cell image.
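A minimal sketch of this channel-dimension integration, assuming PyTorch tensors in NCHW layout (the shapes are illustrative):

```python
import torch
import torch.nn.functional as F

deep = torch.randn(1, 256, 13, 13)     # coarser prediction feature map
shallow = torch.randn(1, 512, 26, 26)  # finer feature map from an earlier layer

# Upsample the deep map to the shallow map's resolution, then integrate
# along the channel dimension (dim=1) to form the prediction branch input.
up = F.interpolate(deep, scale_factor=2, mode="nearest")
fused = torch.cat([up, shallow], dim=1)
print(fused.shape)  # torch.Size([1, 768, 26, 26])
```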
As shown in fig. 7, fig. 7 is a flowchart of a specific method of step S220 in fig. 4, and step S220 includes, but is not limited to, steps S221-S222.
It should be noted that the detection identifier includes position coordinate information and preset category information of the cell in the cell image.
Step S221: inputting the image features of the cell images into the feature extractor, so that the feature extractor generates target anchor boxes carrying the detection identifiers on the cell images according to the position coordinates and the preset category information;
Step S222: performing dimension reduction on the cell images based on the convolutional layers, and performing feature prediction on the dimension-reduced cell images according to the target anchor boxes to obtain the prediction result.
In some embodiments, the image features of the cell image are input into the feature extractor, which generates an anchor box at the position of each cell according to its position coordinate information and places the detection identifier in the anchor box according to the preset category information, yielding a target anchor box carrying the detection identifier. The cell image is then dimension-reduced by the convolution kernels of the convolutional layers, and feature prediction is performed on the dimension-reduced image within the pre-divided regions according to the target anchor boxes to obtain the prediction result, improving the accuracy of cell image prediction and facilitating subsequent training of the image acousto-fluidic cell sorting model.
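For reference, standard YOLOv3 anchor-box decoding (a known formulation from the YOLOv3 literature, not text from this patent) maps the network's raw offsets onto a box on the image; a sketch:

```python
import math

def decode_box(tx, ty, tw, th, cx, cy, pw, ph, stride):
    """Decode raw offsets (tx, ty, tw, th) predicted in grid cell (cx, cy)
    against an anchor prior of size (pw, ph), at the given grid stride."""
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    bx = (sigmoid(tx) + cx) * stride  # box center x, in pixels
    by = (sigmoid(ty) + cy) * stride  # box center y, in pixels
    bw = pw * math.exp(tw)            # box width, in pixels
    bh = ph * math.exp(th)            # box height, in pixels
    return bx, by, bw, bh

print(decode_box(0.2, -0.1, 0.3, 0.0, cx=5, cy=7, pw=30, ph=45, stride=32))
```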
As shown in fig. 8, fig. 8 is a flowchart of a specific method of step S300 in fig. 2, and step S300 includes, but is not limited to, steps S310-S340.
Step S310: inputting the image category information and the sample cell image set into the image acousto-fluidic cell sorting model, so that the model calculates a confidence value from the position coordinate information and the preset category information;
Step S320: normalizing the confidence value according to the target anchor boxes to obtain a confidence index value;
Step S330: comparing the confidence index value with a preset threshold to obtain a comparison result;
Step S340: training the image acousto-fluidic cell sorting model according to the comparison result.
In some embodiments, the training of the image acousto-fluidic cell sorting model proceeds as follows: the image category information and the sample cell image set are input into the model, which calculates a confidence value from the position coordinate information and the preset category information of the cells; the confidence value is normalized according to the target anchor boxes to obtain a confidence index value, improving the training precision and the model's category prediction ability; the confidence index value is compared with a preset threshold to obtain a comparison result; and the model is trained according to the comparison result, completing the training of the image acousto-fluidic cell sorting model.
After the image category information and the sample cell image set are input, the image acousto-fluidic cell sorting model progressively extracts the image features of the cell images through its convolutional layers and outputs multi-scale feature maps for prediction; confidence values are computed from these feature maps, the coordinates of the target anchor boxes are gradually adjusted according to the extracted features and the loss calculation, and category judgment on the target anchor boxes yields the confidence index values, finally producing the prediction results for the targets in the image. Training the image acousto-fluidic cell sorting model consists of computing the model loss from the model parameters in each iteration, namely the comparison between the confidence index value and the preset threshold, and back-propagating the loss to update the model parameters, so that the model gradually fits the input data.
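A hedged sketch of this training cycle (PyTorch assumed; `model`, `yolo_loss`, and `loader` are placeholders, since the patent does not publish its implementation):

```python
import torch

def train(model, loader, yolo_loss, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, targets in loader:        # sample cell images + anchor labels
            preds = model(images)             # multi-scale prediction feature maps
            loss = yolo_loss(preds, targets)  # box + confidence + category loss
            opt.zero_grad()
            loss.backward()                   # back-propagate the loss value
            opt.step()                        # update the model parameters
```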
As shown in fig. 9, fig. 9 is a flowchart of a specific method of step S400 in fig. 2, and step S400 includes, but is not limited to, steps S410-S430.
Step S410: inputting the original cell images into the trained image acousto-fluidic cell sorting model, so that the cell image recognition module predicts the original cell images to obtain their predicted position information and predicted category information;
Step S420: determining prediction anchor boxes for the original cell images according to the predicted position information and predicted category information, and obtaining prediction index values from the prediction anchor boxes;
Step S430: determining the image categories of the original cell images according to the prediction index values.
In some embodiments, the original cell images are input into the trained image acousto-fluidic cell sorting model, which predicts them to obtain their predicted position and category information; prediction anchor boxes are determined from this information, enabling accurate identification of the cells; prediction index values are then obtained from the anchor boxes, and the image categories of the original cell images are determined from the prediction index values, achieving identification of the cell types.
In order to more clearly illustrate the flow of the cell sorting method based on the image acousto-fluidic cell sorting model provided by the embodiment of the invention, a specific example is described below.
Example one:
the present example is a specific example of a cell sorting method based on an image acousto-fluidic cell sorting model.
The method comprises the following steps: acquiring an original cell image set and a predetermined sample cell image set;
note that the cell image was obtained by taking a 20-fold image of the mixed solution of the white blood cells and the cancer cells under a microscope using a CCD camera. Wherein, 360 pictures are randomly selected as a sample cell image set, 120 pictures are selected as a verification set, and 120 pictures are selected as an original cell image set. The average number of cells per picture was about 20.
In some embodiments, in the process of acquiring the original cell Image set and the predetermined sample cell Image set, a Tagged Image File Format (TIFF) Image captured by the CCD camera needs to be uniformly converted into a jit (point Photographic Experts Group) Image compression algorithm Format with lossy Image quality.
Note that in order to adapt the data image to the improved YOLOv3 model, the image size needs to be adjusted to 416 × 416. However, any stretching transformation may cause deformation of the target image, in order to retain image information as much as possible in the convolution process, in this embodiment, an input image required for training and testing is first scaled to 416 × n (n < 416), then centered, a part with a side length of n is black-filled on the upper side and the lower side of the image to generate an image with a size of 416 × 416, after scaling, a data set is made, target image marking is performed on the transformed cell image by using YOLO _ mark to indicate the image category, and the x and y coordinates of the center point of the labeling frame, and the width w and the height h of the image obtained after cropping are recorded to obtain a predetermined sample cell image set.
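A minimal sketch of this scale-then-pad step (Pillow assumed; the function name is ours):

```python
from PIL import Image

def letterbox(path, size=416):
    """Scale the long side to `size`, then pad the short side with black
    so the cells are not deformed by stretching."""
    img = Image.open(path).convert("RGB")
    scale = size / max(img.size)
    w, h = round(img.width * scale), round(img.height * scale)
    img = img.resize((w, h), Image.BILINEAR)
    canvas = Image.new("RGB", (size, size), (0, 0, 0))     # black background
    canvas.paste(img, ((size - w) // 2, (size - h) // 2))  # centered
    return canvas
```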
Step two: inputting the sample cell image set into the cell image recognition module, so that the module performs feature extraction on the plurality of cell images based on the YOLOv3 network and the detection identifiers to obtain the image category information of the cell images;
It should be noted that, in this embodiment, DBL + upsampling is applied repeatedly to the image features, and tensor splicing with the shallow network yields larger feature maps and thus better detection of small targets. Four feature output layers are responsible for prediction: the 13×13 feature map predicts large targets, the 26×26 feature map medium targets, and the 52×52 feature map small targets; an extra detection layer is added whose 104×104 feature map predicts extra-small targets. Although the added detection layer increases the computation and slows detection and classification, it markedly improves the recognition of small targets.
It can be understood that the core of a convolutional layer is to map the features of the previous layer to the next layer with a convolution kernel, as shown in formula (1):

H_i = W_i · H_{i-1} + b_i   (1)

where H_i is the output of the convolution operation, H_{i-1} the input feature, W_i the convolution kernel, and b_i the bias. To keep the convolution output the same size as the input, each convolutional layer uses "same" padding, filling the border of the input matrix with zeros, and the convolution kernel moves with a stride of 1. During the convolution calculation, the region of the input image mapped by the convolution kernel is called the receptive field, whose size equals that of the kernel. The receptive field slides over the image, and at each position the inner product of the current receptive field and the kernel matrix is output as the feature value of that position; once the receptive field has traversed the whole image, the output feature map is obtained. To eliminate redundant information and reduce overfitting, the residual network uses the ReLU activation function, given in formula (2):

ReLU(x) = max(0, x)   (2)
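A short sketch checking both formulas (PyTorch assumed; values are illustrative): "same" zero padding with stride 1 preserves the spatial size, and ReLU zeroes the negative responses:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 416, 416)  # H_{i-1}: the input feature
conv = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1)  # W_i and b_i
h = conv(x)                      # formula (1): H_i = W_i . H_{i-1} + b_i
print(h.shape)                   # torch.Size([1, 16, 416, 416]): size preserved

print(torch.relu(torch.tensor([-1.5, 0.0, 2.0])))  # formula (2): tensor([0., 0., 2.])
```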
step three: inputting the image type information and the sample image set into an image acousto-fluidic cell sorting model for training;
after the image type information and the sample cell image set are input into the image acoustic flow control cell sorting model, the image acoustic flow control cell sorting model gradually extracts the image features of the cell images in the sample cell image set through the convolution layer, outputs a multi-scale feature map for prediction, calculates the multi-scale feature map to obtain a confidence coefficient value, gradually adjusts the coordinates of the target anchor frame according to the extracted features and the loss calculation result, performs category judgment on the target anchor frame to obtain a confidence coefficient index value, and finally obtains a prediction result of the target in the image. The method mainly comprises the steps of training an image acoustic-fluidic cell sorting model, calculating model loss by using model parameters in each iteration process, namely a comparison value between a confidence coefficient index value and a preset threshold value, and reversely propagating the loss value to update model parameters, so that the model is gradually fitted to input data. The final detection process is to input the image into the trained model and directly output the prediction result by using the parameters of the existing model. Because the number of the preset anchor frames is large, and one target may be recognized as different categories with different confidence degrees, the number of the directly obtained prediction frames is large, and a large part of the outputs are poor or even wrong results, so that a filtering algorithm needs to be designed to extract a truly meaningful detection result.
In some embodiments, the present embodiment screens out prediction boxes with confidence levels less than 0.5 using a Non-Maximum Suppression (NMS) method.
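A hedged sketch of this filtering step (our own minimal NMS; only the 0.5 confidence cutoff comes from the text, and the IoU threshold of 0.45 is an assumption):

```python
def iou(a, b):
    """Intersection over union of boxes given as (x1, y1, x2, y2, score)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def nms(boxes, conf_thresh=0.5, iou_thresh=0.45):
    """Drop boxes below the confidence cutoff, then keep only the highest-
    scoring box among mutually overlapping ones."""
    boxes = sorted((b for b in boxes if b[4] >= conf_thresh),
                   key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b, k) < iou_thresh for k in kept):
            kept.append(b)
    return kept
```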
Step four: inputting the original cell images into the trained image acousto-fluidic cell sorting model for image prediction, determining the image categories of the original cell images, screening the original cell image set according to the image categories, and determining the target images corresponding to each category;
In some embodiments, because the model's prediction process scales and pads the original image, the detection results must first be offset-corrected and then inversely scaled before the rectangular boxes and text are drawn. The cells are detected using the pre-trained Re-YOLOv3 model.
It should be noted that the recall (R) and precision (P) of real targets are important indexes for measuring the target-detection performance of the image acousto-fluidic cell sorting model; Re-YOLOv3 is evaluated with these two indexes, given in formulas (3) and (4):

R = TP / (TP + FN)   (3)

P = TP / (TP + FP)   (4)

where TP (true positives) is the number of positive samples correctly classified as positive, FP (false positives) the number of negative samples classified as positive, and FN (false negatives) the number of positive samples classified as negative.
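A worked example of formulas (3) and (4) (the counts are illustrative, not measured results from the patent):

```python
def recall(tp, fn):
    return tp / (tp + fn)   # formula (3): fraction of real cells found

def precision(tp, fp):
    return tp / (tp + fp)   # formula (4): fraction of detections that are correct

print(recall(tp=90, fn=10))     # 0.9
print(precision(tp=90, fp=30))  # 0.75
```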
Step five: and ejecting the cells corresponding to the target image to a preset collecting area based on a cell ejection module.
In some embodiments, the cell ejection module uses a single-sided FIDT to eject cells; unlike the bidirectional ARF of a standing surface acoustic wave, the unidirectional force generated by the traveling surface acoustic wave continuously pushes the particles along the direction of wave propagation, improving cell sorting performance.
It should be noted that when a sound wave encounters an obstacle during propagation, scattering at the interface produces a positive acoustic radiation pressure along the propagation direction. In a fluid medium, the acoustic radiation pressure is proportional to the acoustic energy density, and the acoustic radiation force exerted on a particle or cell in the acoustic field can be expressed as formula (5):

F = (π · d² / 4) · Y_T · ⟨E⟩   (5)

where Y_T is the acoustic radiation factor, which depends on the density and size of the droplets and on the speed of sound, d is the particle diameter, and ⟨E⟩ is the time-averaged acoustic energy density.
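A numeric sketch of formula (5) (the patent's equation image is reconstructed above in its common acoustofluidic form; the Y_T and ⟨E⟩ values below are purely illustrative):

```python
import math

def acoustic_radiation_force(d, y_t, e_avg):
    """Radiation pressure Y_T * <E> times the particle cross-section pi*d^2/4."""
    return (math.pi * d ** 2 / 4.0) * y_t * e_avg

d = 12e-6      # 12 um particle, as in the ejection experiments below
y_t = 0.3      # illustrative acoustic radiation factor (dimensionless)
e_avg = 50.0   # illustrative time-averaged acoustic energy density, J/m^3
print(acoustic_radiation_force(d, y_t, e_avg))  # force in newtons, ~1.7e-9
```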
In some embodiments, the geometry of the FIDT is determined by the finger radius R and the arc angle of the innermost finger, and as the width of the sorting signal decreases, the energy intensity of the surface acoustic wave decreases.
It should be noted that the FIDT arc angle may be 10°, 20°, or 30°; this embodiment is not particularly limited. At an arc of 10°, with frequencies from 10 MHz to 70 MHz and voltages from 1 V to 5 V, the ejection effect on particles is very weak; it is concluded that the focal length is too short to support ejecting particles at the designed position. At an arc of 20°, a frequency of 38.4 MHz and a voltage of 1 V are sufficient to eject 12 μm particles to the collection area. At an arc of 30°, a frequency of 39.6 MHz and a voltage of 2.2 V are sufficient to eject 12 μm particles to the collection area. On comprehensive comparison, the FIDT with a 20° arc is applied in the IACS, and cells are ejected at an FTSAW frequency of 38.4 MHz, improving cell sorting efficiency.
In some embodiments, the sample cell image set is first identified and subjected to feature extraction to obtain the image type information of the cell images. The image acousto-fluidic cell sorting model is then trained according to the image type information and the sample cell image set, so that features such as the size and shape of the cells in the images are extracted from different network layers through deep network fusion, which improves the detection precision of small targets. Finally, the original cell images are input into the trained model for image prediction and screening, the target images corresponding to the image category are determined, and the cells corresponding to the target images are ejected to the preset collection area by the cell ejection module. In this way, cells can be automatically classified by a label-free method, realizing the purification and collection of the target cells.
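The overall flow summarized above can be sketched as a simple detect-then-eject loop; every name here (the model wrapper, camera stream, and ejector.pulse) is a hypothetical placeholder, not an API defined by this disclosure.

```python
def sort_cells(model, camera, ejector, target_class, conf_threshold=0.5):
    """Label-free sorting loop: detect target cells in each frame and
    pulse the focused travelling SAW to eject them toward collection."""
    for frame in camera:                    # stream of original cell images
        for det in model.predict(frame):    # trained detector inference
            if det.cls == target_class and det.conf >= conf_threshold:
                ejector.pulse()             # e.g. a 38.4 MHz FTSAW burst
```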
Referring to fig. 10, fig. 10 illustrates the hardware structure of an electronic device according to another embodiment. Specifically, the electronic device includes one or more processors and a memory; one processor is taken as an example in fig. 10. The processor and the memory may be connected by a bus or by other means; connection by a bus is taken as an example in fig. 10.
The memory, as a non-transitory computer readable storage medium, may be used to store a non-transitory software program and a non-transitory computer executable program, such as the cell sorting method based on the image acoustic flow control cell sorting model in the above-mentioned embodiments of the present invention. The processor implements the cell sorting method based on the image acoustic flow control cell sorting model in the embodiment of the present invention by running the non-transitory software program and the program stored in the memory.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data necessary for executing the cell sorting method based on the image acoustic flow control cell sorting model in the above-described embodiment of the present invention. Further, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memories may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Furthermore, an embodiment of the present invention also provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor or controller, for example the processor in fig. 10, cause the processor to execute the cell sorting method based on the image acoustic flow control cell sorting model in the above embodiment.
The embodiments described in the embodiments of the present application are for more clearly illustrating the technical solutions of the embodiments of the present application, and do not constitute a limitation to the technical solutions provided in the embodiments of the present application, and it is obvious to those skilled in the art that the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems with the evolution of technology and the emergence of new application scenarios.
It will be appreciated by those skilled in the art that the solutions shown in fig. 2-9 are not intended to limit the embodiments of the present application and may include more or fewer steps than those shown, or some of the steps may be combined, or different steps may be included.
The above-described system embodiments are merely illustrative, wherein the units described as separate components may or may not be physically separate; that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
One of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be implemented in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" is used to describe the association relationship of the associated object, indicating that there may be three relationships, for example, "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the above-described units is only one type of logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes multiple instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing programs, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings, and the scope of the claims of the embodiments of the present application is not limited thereto. Any modifications, equivalents and improvements that may occur to those skilled in the art without departing from the scope and spirit of the embodiments of the present application are intended to be within the scope of the claims of the embodiments of the present application.

Claims (10)

1. A cell sorting method based on an image acoustic flow control cell sorting model is characterized in that the image acoustic flow control cell sorting model comprises a cell image recognition module and a cell ejection module, and the cell sorting method comprises the following steps:
acquiring an original cell image set and a predetermined sample cell image set, wherein the sample cell image set comprises a plurality of cell images carrying detection identifiers, and the original cell image set comprises a plurality of original cell images;
inputting the sample cell image set into the cell image recognition module, so that the cell image recognition module performs feature extraction on a plurality of cell images in the sample cell image set based on a preset convolutional neural network and the detection identifier to obtain image type information of the cell images;
inputting the image type information and the sample cell image set into the image acousto-fluidic cell sorting model for training;
inputting the original cell image into the trained image acoustic flow control cell sorting model for image prediction, determining the image category of the original cell image, screening the original cell image set according to the image category, and determining a target image corresponding to the image category;
and ejecting the cells corresponding to the target image to a preset collecting area based on the cell ejection module.
2. The image acousto-fluidic cell sorting model-based cell sorting method according to claim 1, wherein the sample cell image set is obtained by the following steps:
obtaining a cell mixed sample;
carrying out cell separation on the cell mixed sample based on the acoustic radiation force to obtain a sample cell set;
collecting images of the sample cell set to obtain cell images;
performing size adjustment on the cell images;
and carrying out image marking on the size-adjusted cell images to obtain the sample cell image set carrying the detection identifiers.
3. The image acousto-fluidic cell sorting model-based cell sorting method according to claim 1, wherein the cell image recognition module comprises a feature extractor; the cell image recognition module performs feature extraction on the cell images in the sample cell image set based on a preset convolutional neural network and the detection identifier to obtain image type information of the cell images, and the feature extraction comprises the following steps:
inputting the cell images into the convolution layer of the preset convolutional neural network for encoding to obtain the image features of the plurality of cell images;
inputting the image features of all the cell images into the feature extractor, so that the feature extractor performs dimension reduction processing on the cell images according to the detection identifiers and the convolution layer, and performs feature prediction on the dimension-reduced image features to obtain a prediction result;
and obtaining the image type information of the cell image according to the prediction result and the image characteristics.
4. The cell sorting method based on the image acousto-fluidic cell sorting model according to claim 3, wherein the preset convolutional neural network includes channel dimensions and a feature layer, and the inputting the cell images into the convolution layer of the preset convolutional neural network for encoding to obtain the image features of the plurality of cell images comprises:
inputting the cell images into the preset convolutional neural network, so that the preset convolutional neural network performs up-sampling on the cell images to obtain a plurality of prediction feature maps;
performing channel splicing on the channel dimensions to obtain a prediction branch;
and performing tensor splicing on the plurality of prediction feature maps based on the prediction branch and the feature layer to obtain the image features of the plurality of cell images.
5. The cell sorting method based on the image acousto-fluidic cell sorting model according to claim 4, wherein the performing tensor splicing on the plurality of prediction feature maps based on the prediction branch and the feature layer to obtain the image features of the plurality of cell images comprises:
inputting the prediction feature maps into the feature layer for calculation to obtain prediction feature values;
and integrating the prediction feature values according to the channel dimensions of the prediction branch to obtain the image features of the cell images.
6. The cell sorting method based on the image acousto-fluidic cell sorting model according to claim 3, wherein the detection identifiers include position coordinate information and preset category information of the cells in the cell images, and the inputting the image features of all the cell images into the feature extractor, so that the feature extractor performs dimension reduction processing on the cell images according to the detection identifiers and the convolution layer, and performs feature prediction on the dimension-reduced image features to obtain a prediction result comprises:
inputting the image features of the cell images into the feature extractor, so that the feature extractor generates a target anchor frame carrying the detection identifier on the cell images according to the position coordinate information and the preset category information;
and performing dimension reduction processing on the cell image based on the convolution layer, and performing feature prediction on the dimension-reduced cell image according to the target anchor frame to obtain a prediction result.
7. The cell sorting method based on the image acousto-fluidic cell sorting model according to claim 6, wherein the inputting the image type information and the sample cell image set into the image acousto-fluidic cell sorting model for training comprises:
inputting the image type information and the sample cell image set into the image acousto-fluidic cell sorting model, so that the image acousto-fluidic cell sorting model calculates the position coordinate information and the preset category information to obtain a confidence value;
carrying out normalization processing on the confidence value according to the target anchor frame to obtain a confidence index value;
comparing the confidence index value with a preset threshold value to obtain a comparison result;
and training the image acousto-fluidic cell sorting model according to the comparison result.
8. The cell sorting method based on the image acoustic flow control cell sorting model according to claim 7, wherein the inputting the original cell image into the trained image acoustic flow control cell sorting model for image prediction to determine the image category of the original cell image comprises:
inputting the original cell image into the trained image acoustic flow control cell sorting model, so that the cell image recognition module predicts the original cell image to obtain predicted position information and predicted type information of the original cell image;
determining a prediction anchor frame of the original cell image according to the prediction position information and the prediction type information, and obtaining a prediction index value according to the prediction anchor frame;
and determining the image category of the original cell image according to the prediction index value.
9. A cell sorting system based on an image acousto-fluidic cell sorting model is characterized by comprising:
the sample acquisition module is used for acquiring an original cell image set and a predetermined sample cell image set, wherein the sample cell image set comprises a plurality of cell images carrying detection identifiers, and the original cell image set comprises a plurality of original cell images;
the cell image recognition module is used for receiving the sample cell image set and extracting the characteristics of the plurality of cell images in the sample cell image set based on a preset convolutional neural network and the detection identifier to obtain the image type information of the cell images;
the model training module is used for inputting the image type information and the sample cell image set into the image acousto-fluidic cell sorting model for training;
the image determining module is used for inputting the original cell images into the trained image acoustic flow control cell sorting model for image prediction, determining the image categories of the original cell images, screening the original cell image set according to the image categories, and determining the target images corresponding to the image categories;
and the cell ejection module is used for ejecting the cells corresponding to the target image to a preset collection area.
10. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the method for cell sorting based on the image acousto-fluidic cell sorting model according to any one of claims 1 to 8.
CN202211121058.2A 2022-09-15 2022-09-15 Cell sorting method and system based on image acoustic flow control cell sorting model Pending CN115588191A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211121058.2A CN115588191A (en) 2022-09-15 2022-09-15 Cell sorting method and system based on image acoustic flow control cell sorting model

Publications (1)

Publication Number Publication Date
CN115588191A true CN115588191A (en) 2023-01-10

Family

ID=84778845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211121058.2A Pending CN115588191A (en) 2022-09-15 2022-09-15 Cell sorting method and system based on image acoustic flow control cell sorting model

Country Status (1)

Country Link
CN (1) CN115588191A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116416616A (en) * 2023-04-13 2023-07-11 沃森克里克(北京)生物科技有限公司 DC cell in-vitro culture screening method, device and computer readable medium
CN116416616B (en) * 2023-04-13 2024-01-05 沃森克里克(北京)生物科技有限公司 DC cell in-vitro culture screening method, device and computer readable medium

Similar Documents

Publication Publication Date Title
WO2020164282A1 (en) Yolo-based image target recognition method and apparatus, electronic device, and storage medium
CN104424466A (en) Object detection method, object detection device and image pickup device
CN110956081B (en) Method and device for identifying position relationship between vehicle and traffic marking and storage medium
CN114945941A (en) Non-tumor segmentation for supporting tumor detection and analysis
CN110659601B (en) Depth full convolution network remote sensing image dense vehicle detection method based on central point
Bai et al. Chromosome extraction based on U-Net and YOLOv3
Rauf et al. Attention-guided multi-scale deep object detection framework for lymphocyte analysis in IHC histological images
CN114764789B (en) Method, system, device and storage medium for quantifying pathological cells
CN115588191A (en) Cell sorting method and system based on image acoustic flow control cell sorting model
CN114821229B (en) Underwater acoustic data set augmentation method and system based on condition generation countermeasure network
Zhao et al. Research on detection method for the leakage of underwater pipeline by YOLOv3
CN113221956A (en) Target identification method and device based on improved multi-scale depth model
CN115359264A (en) Intensive distribution adhesion cell deep learning identification method
CN112991280B (en) Visual detection method, visual detection system and electronic equipment
CN112991281B (en) Visual detection method, system, electronic equipment and medium
CN117576073A (en) Road defect detection method, device and medium based on improved YOLOv8 model
CN113780287A (en) Optimal selection method and system for multi-depth learning model
Tikkanen et al. Training based cell detection from bright-field microscope images
CN111832463A (en) Deep learning-based traffic sign detection method
CN117689928A (en) Unmanned aerial vehicle detection method for improving yolov5
CN117011274A (en) Automatic glass bottle detection system and method thereof
CN116168328A (en) Thyroid nodule ultrasonic inspection system and method
CN116342505A (en) Detection method and detection system for granulating degree of aerobic granular sludge
CN116071557A (en) Long tail target detection method, computer readable storage medium and driving device
Samudrala et al. Semantic Segmentation in Medical Image Based on Hybrid Dlinknet and Unet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination