CN112037862A - Cell screening method and device based on convolutional neural network - Google Patents

Cell screening method and device based on convolutional neural network

Info

Publication number
CN112037862A
CN112037862A (application CN202010869638.4A)
Authority
CN
China
Prior art keywords
cell
training
convolutional neural
neural network
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010869638.4A
Other languages
Chinese (zh)
Other versions
CN112037862B (en)
Inventor
陈亮
韩晓健
侯媛媛
买买提依明·哈斯木
梁国龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Taili Biotechnology Co.,Ltd.
Original Assignee
Dongguan Taili Biological Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan Taili Biological Engineering Co ltd
Priority to CN202010869638.4A
Publication of CN112037862A
Priority to PCT/CN2021/114165
Application granted
Publication of CN112037862B
Active legal status
Anticipated expiration


Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16B - BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
    • G16B25/00 - ICT specially adapted for hybridisation; ICT specially adapted for gene or protein expression
    • G16B25/10 - Gene or protein expression profiling; Expression-ratio estimation or normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16B - BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
    • G16B35/00 - ICT specially adapted for in silico combinatorial libraries of nucleic acids, proteins or peptides
    • G16B35/10 - Design of libraries
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16B - BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
    • G16B40/00 - ICT specially adapted for biostatistics; ICT specially adapted for bioinformatics-related machine learning or data mining, e.g. knowledge discovery or pattern finding

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Biotechnology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Library & Information Science (AREA)
  • Genetics & Genomics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Public Health (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Epidemiology (AREA)
  • Bioethics (AREA)
  • Chemical & Material Sciences (AREA)
  • Biochemistry (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a cell screening method and device based on a convolutional neural network. The method comprises: obtaining grayscale images of a plurality of cells to be tested in a cell culture pool; inputting the grayscale images corresponding to the plurality of cells to be tested into a target convolutional neural network model; obtaining, from the output of the model, the protein expression level of each cell to be tested; and determining, from the plurality of cells to be tested and according to those expression levels, the target cells whose protein expression meets a set condition. Cells with high protein expression are thereby identified rapidly, repeated rounds of culture and screening are avoided, and the screening cycle is greatly shortened.

Description

Cell screening method and device based on convolutional neural network
Technical Field
The present application relates to the field of biotechnology, and in particular, to a method and an apparatus for cell screening based on a convolutional neural network, a computer device, and a storage medium.
Background
With the continuous development of genetic engineering technology, the isolation of monoclonal cell lines capable of expressing specific products from cell pools has become a common need in the biological field.
In the prior art, to obtain cells for culturing monoclonal cell strains, the cells in a cell pool are first transfected, and single cells are then obtained by treating the cell pool with the limiting dilution method. Each single cell is cultured into a homogeneous cell population, i.e., a cell strain, and the strains with high expression of the target protein are screened out.
However, obtaining single cells by limiting dilution is a cumbersome process that requires repeated culture and screening. Moreover, because of limited transfection efficiency, the proportion of cells with high target-protein expression is low. As a result, screening efficiency is poor, the screening cycle is long, and it is difficult to obtain cells with high target-protein expression rapidly and accurately.
Disclosure of Invention
In view of the above, it is necessary to provide a cell screening method, device, computer device and storage medium based on a convolutional neural network.
A convolutional neural network-based cell screening method, the method comprising:
obtaining grayscale images of cells to be tested corresponding to a plurality of cells to be tested in a cell culture pool;
inputting a plurality of cell gray-scale maps to be detected corresponding to a plurality of cells to be detected into a target convolutional neural network model; the target convolutional neural network model is obtained by training a plurality of training cell gray-scale images with expression quantity labels, the expression quantity labels are used for representing the real protein expression quantity of cells in each training cell gray-scale image, and the target convolutional neural network model is used for detecting the protein expression quantity of the cells in the cell gray-scale image to be detected which is input into the model;
obtaining protein expression quantities corresponding to the multiple cells to be detected respectively according to the output of the target convolutional neural network model;
and determining target cells with protein expression levels meeting set conditions from the multiple cells to be tested according to the protein expression levels corresponding to the multiple cells to be tested.
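The four steps above can be sketched end to end as follows. This is a minimal illustration, not the patented implementation: `dummy_model` is a hypothetical stand-in for the trained target convolutional neural network, and a simple brightness threshold is used as the "set condition".

```python
import numpy as np

def screen_cells(gray_images, model, min_expression):
    """Feed each cell's grayscale image through the (trained) model and
    keep the indices of cells whose predicted protein expression meets
    the set condition (here: a minimum threshold)."""
    expressions = [model(img) for img in gray_images]
    targets = [i for i, e in enumerate(expressions) if e >= min_expression]
    return targets, expressions

# Hypothetical stand-in for the trained CNN: mean brightness as the score.
dummy_model = lambda img: float(img.mean())
images = [np.full((64, 64), v, dtype=np.float32) for v in (10, 200, 50, 120)]
targets, scores = screen_cells(images, dummy_model, min_expression=100)
```

With these constant-brightness dummy images, the second and fourth cells pass the threshold.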
Optionally, the method further comprises:
acquiring a training cell gray-scale image and a training cell fluorescence image corresponding to the training cell gray-scale image;
determining the real protein expression quantity of the cells in the corresponding training cell gray-scale map according to the training cell fluorescence map, and obtaining an expression quantity label corresponding to the training cell gray-scale map based on the real protein expression quantity;
and training a convolutional neural network model by adopting the training cell gray-scale map and the expression quantity label to generate a target convolutional neural network model.
Optionally, the determining, according to the training cell fluorescence map, a true protein expression level of a cell in a corresponding training cell grayscale map, and obtaining an expression level label corresponding to the training cell grayscale map based on the true protein expression level includes:
determining the value of a green channel in the fluorescence map of the training cells;
determining the real protein expression quantity of the cells in the corresponding training cell gray-scale image according to the numerical value of the green channel in the training cell fluorescence image;
and determining the real protein expression quantity as an expression quantity label corresponding to the training cell gray-scale map.
Optionally, the training a convolutional neural network model by using the training cell gray scale map and the expression quantity label to generate a target convolutional neural network model, including:
inputting the training cell gray level map into the convolutional neural network model, and determining the protein expression amount corresponding to the training cell gray level map;
determining a training error according to the protein expression amount corresponding to the training cell gray-scale map and the expression amount label;
and adjusting the network parameters of the convolutional neural network model according to the training error so as to reduce it, obtaining optimal network parameters, and generating the target convolutional neural network model using the optimal network parameters.
Optionally, the convolutional neural network model includes multiple layers, and adjusting the network parameters of the convolutional neural network model according to the training error so as to reduce it, obtaining the optimal network parameters, includes:
judging whether the training error is converged and is smaller than a preset error threshold value;
if so, determining the network parameters of the current convolutional neural network model as the optimal network parameters;
if not, the training error is back-propagated from the last layer of the convolutional neural network model, the network parameters of each layer are adjusted so as to reduce the error, and the process returns to inputting the training cell gray-scale map into the convolutional neural network model and determining the protein expression level corresponding to the training cell gray-scale map.
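The convergence loop described above (predict, compare the training error against a preset threshold, back-propagate, repeat) can be illustrated on a toy model. This is a sketch only: a single linear parameter stands in for the network's layers, and mean squared error is an assumed choice of error.

```python
import numpy as np

def train_until_converged(x, y, lr=0.1, err_threshold=1e-4, max_iters=10_000):
    """Repeat: predict, compute the training error, stop if it is below
    the threshold, otherwise back-propagate (here: plain gradient
    descent on one linear parameter) and adjust the parameter."""
    w = 0.0
    err = np.inf
    for _ in range(max_iters):
        pred = w * x
        err = np.mean((pred - y) ** 2)      # training error (assumed MSE cost)
        if err < err_threshold:             # convergence / threshold check
            break
        grad = np.mean(2 * (pred - y) * x)  # back-propagated gradient
        w -= lr * grad                      # adjust the network parameter
    return w, err

x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x                                 # data generated with true parameter 2
w, err = train_until_converged(x, y)
```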
Optionally, the convolutional neural network model comprises a first network structure, a second network structure, a third network structure, a fourth network structure and a fully connected layer; and there are a plurality of training cell gray-scale maps;
inputting the training cell gray level map into the convolutional neural network model to determine the protein expression amount corresponding to the training cell gray level map, including:
inputting a plurality of training cell gray maps into the convolutional neural network model;
aiming at each training cell gray-scale image, acquiring corresponding training cell characteristics through the first network structure; inputting the training cell characteristics into the second network structure, the third network structure and the fourth network structure to obtain corresponding first cell characteristics, second cell characteristics and third cell characteristics;
connecting the first cell feature, the second cell feature and the third cell feature in parallel to obtain a feature fusion result corresponding to each training cell gray-scale map; wherein the first, second, and third cell features represent different levels of abstraction;
and inputting the feature fusion results corresponding to the training cell gray maps to the full-connection layer, and determining the protein expression amounts corresponding to the training cell gray maps according to the output result of the full-connection layer.
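A minimal sketch of the parallel-branch fusion described above, under the assumption that each network structure can be reduced to a linear map followed by ReLU (the patent does not specify the layers inside each structure); the three branch outputs are concatenated in parallel and passed through a fully connected layer to yield a scalar expression level.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(features, weight):
    # Stand-in for one parallel network structure: linear map + ReLU.
    return np.maximum(weight @ features, 0.0)

def forward(cell_features, w1, w2, w3, fc_w):
    f1 = branch(cell_features, w1)          # first cell feature
    f2 = branch(cell_features, w2)          # second cell feature
    f3 = branch(cell_features, w3)          # third cell feature
    fused = np.concatenate([f1, f2, f3])    # parallel connection (feature fusion)
    return float(fc_w @ fused)              # fully connected layer -> expression level

d, h = 8, 4
w1, w2, w3 = (rng.normal(size=(h, d)) for _ in range(3))
fc_w = rng.normal(size=3 * h)
features = rng.normal(size=d)               # output of the first network structure
expression = forward(features, w1, w2, w3, fc_w)
```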
Optionally, the determining, according to the protein expression levels corresponding to the multiple test cells, a target cell whose protein expression level meets a set condition from the multiple test cells includes:
sorting the protein expression levels corresponding to the plurality of cells to be tested, and determining a preset number of the highest-ranked expression levels as target expression levels;
and determining the gray-scale maps of the cells to be tested corresponding to the target expression levels, and determining the cells to be tested corresponding to those gray-scale maps as the target cells.
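The sorting-and-selection step can be sketched as follows; `top_n` plays the role of the preset number.

```python
import numpy as np

def select_targets(expressions, top_n):
    """Sort predicted protein expression levels in descending order and
    return the indices of the top_n cells as the target cells."""
    order = np.argsort(np.asarray(expressions))[::-1]  # highest first
    return [int(i) for i in order[:top_n]]

targets = select_targets([0.3, 0.9, 0.1, 0.7], top_n=2)
```

The indices returned here map back to the grayscale images, and hence to the cells to be tested.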
An apparatus for convolutional neural network-based cell screening, the apparatus comprising:
the cell gray-scale image acquisition module is used for acquiring cell gray-scale images to be detected corresponding to a plurality of cells to be detected in the cell culture pool;
the first input module is used for inputting a plurality of cell gray-scale maps to be detected corresponding to a plurality of cells to be detected into the target convolutional neural network model; the target convolutional neural network model is obtained by training a plurality of training cell gray-scale images with expression quantity labels, the expression quantity labels are used for representing the real protein expression quantity of cells in each training cell gray-scale image, and the target convolutional neural network model is used for detecting the protein expression quantity of the cells in the cell gray-scale image to be detected which is input into the model;
the protein expression quantity prediction module is used for obtaining the protein expression quantities respectively corresponding to the multiple cells to be detected according to the output of the target convolutional neural network model;
and the cell screening module is used for determining a target cell with the protein expression quantity meeting set conditions from the multiple cells to be detected according to the protein expression quantities corresponding to the multiple cells to be detected.
A computer device comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the above convolutional neural network-based cell screening method when executing the computer program.
A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the above convolutional neural network-based cell screening method.
According to the cell screening method, device, computer equipment and storage medium based on the convolutional neural network, grayscale images corresponding to a plurality of cells to be tested in a cell culture pool are obtained and input into the target convolutional neural network model; the protein expression level of each cell to be tested is obtained from the output of the model; and the target cells whose protein expression meets a set condition are then determined from the plurality of cells to be tested according to those expression levels. Cells with high protein expression are thereby identified rapidly, repeated rounds of culture and screening are avoided, and the screening cycle is greatly shortened. Moreover, millions of single cells can be processed quickly by this method, which enlarges the cell screening range, reduces the workload of operators, and effectively improves cell screening efficiency.
Drawings
FIG. 1 is a schematic flow chart of a convolutional neural network-based cell screening method according to an embodiment;
FIG. 2 is a flow diagram illustrating the steps of model training in one embodiment;
FIG. 3a is a gray scale of a training cell in one embodiment;
FIG. 3b is a fluorescence image of a training cell according to an embodiment;
FIG. 4 is a schematic flow chart diagram illustrating steps of another model training process in one embodiment;
FIG. 5 is a schematic flow chart diagram illustrating the steps of model parameter adjustment in one embodiment;
FIG. 6 is a schematic flowchart showing a procedure for predicting an expression level of a protein in one embodiment;
FIG. 7 is a block diagram of a convolutional neural network-based cell screening apparatus according to an embodiment;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
To facilitate an understanding of embodiments of the present invention, a description of the prior art will be given.
In the prior art, to obtain cells for culturing monoclonal cell strains, the cells in a cell pool are first transfected, and single cells are then obtained by treating the cell pool with the limiting dilution method. Each single cell is cultured into a homogeneous cell population, i.e., a cell strain, and the strains with high expression of the target protein are screened out.
However, obtaining single cells by limiting dilution is a cumbersome process that requires repeated culture and screening, and, because of limited transfection efficiency, the proportion of cells with high target-protein expression is low. Screening efficiency is therefore poor and the screening cycle long: the traditional method usually takes six months or more, consumes a large amount of manpower and material resources, and can hardly meet the requirements of large-scale, industrialized production.
In one embodiment, as shown in fig. 1, a method for cell screening based on a convolutional neural network is provided, and this embodiment is illustrated by applying the method to a terminal, it is to be understood that the method may also be applied to a server, and may also be applied to a system including the terminal and the server, and is implemented by interaction between the terminal and the server. In this embodiment, the method includes the steps of:
step 101, obtaining a gray level map of cells to be detected corresponding to a plurality of cells to be detected in a cell culture pool;
as an example, the test cell may be a cell treated by transfection technique, the test cell may be a cell which has not obtained the exogenous DNA fragment after treatment, a cell which has obtained the exogenous DNA fragment but has not integrated into the chromosome, or a cell which has integrated the exogenous DNA fragment into the chromosome. The gray-scale image of the cell to be detected is a gray-scale image of the cell to be detected.
In practical applications, a plurality of cells in the cell culture pool can be transfected, so that some or all of the cells in the cell culture pool can obtain the exogenous DNA fragments. After the transfection treatment, gray-scale maps of the cells to be detected corresponding to the multiple cells to be detected in the cell culture pool can be obtained.
Step 102, inputting a plurality of cell gray-scale maps to be detected corresponding to a plurality of cells to be detected into a target convolutional neural network model; the target convolutional neural network model is obtained by training a plurality of training cell gray-scale images with expression quantity labels, the expression quantity labels are used for representing the real protein expression quantity of cells in each training cell gray-scale image, and the target convolutional neural network model is used for detecting the protein expression quantity of the cells in the cell gray-scale image to be detected which is input into the model;
as an example, the training cell gray map is a picture as a training sample for training the convolutional neural network model, and the cells in the training cell gray map are cells after transfection processing.
In practical application, a plurality of training cell gray-scale maps with expression level labels can be used to train the convolutional neural network model, yielding the target convolutional neural network model. The expression level label of a training cell gray-scale map represents the true protein expression level of the cell in that map; since the main expression product of a gene is protein, the expression of the target gene by the cell can be determined from the true protein expression level.
After the grayscale images corresponding to the cells to be tested are obtained, they can be input into a preset target convolutional neural network model, which detects the protein expression level of the cells in each image.
Step 103, obtaining protein expression quantities corresponding to the multiple cells to be detected respectively according to the output of the target convolutional neural network model;
as an example, the protein expression level corresponding to the cell to be detected is the protein expression level of the cell in the gray-scale map of the cell to be detected, and the protein expression level is predicted by the target convolutional neural network model.
After the gray-scale maps of the cells to be detected are input into the target convolutional neural network model, the protein expression amounts respectively corresponding to the cells to be detected can be obtained according to the output of the target convolutional neural network model.
Step 104, determining target cells with protein expression levels meeting set conditions from the multiple cells to be detected according to the protein expression levels corresponding to the multiple cells to be detected.
After obtaining the protein expression levels corresponding to the multiple cells to be tested, the multiple cells to be tested can be screened, and the target cells with the protein expression levels meeting the set conditions are determined from the multiple cells to be tested.
In this embodiment, grayscale images corresponding to a plurality of cells to be tested in a cell culture pool are obtained and input into the target convolutional neural network model; the protein expression level of each cell to be tested is obtained from the output of the model; and the target cells whose protein expression meets the set condition are then determined according to those expression levels. Cells with high protein expression are thereby identified rapidly, repeated rounds of culture and screening are avoided, and the screening cycle is greatly shortened. Millions of single cells can be processed quickly, which enlarges the screening range, reduces the workload of operators, and effectively improves cell screening efficiency.
In one embodiment, as shown in fig. 2, the following steps may be further included:
step 201, acquiring a training cell gray-scale image and a training cell fluorescence image corresponding to the training cell gray-scale image;
in a specific implementation, a cell as a training set may be set, and the cell may be photographed to obtain a training cell gray-scale map and a corresponding training cell fluorescence map, respectively. The cells of the training set can be used for training a convolutional neural network model, training a cell gray-scale map and a corresponding training cell fluorescence map, and can be a gray-scale map and a fluorescence map which are shot aiming at the same cells under the same shooting condition.
The cells used as the training set may be cells in a cell culture pool treated by a transfection technique; after treatment, a cell may not have obtained the exogenous DNA fragment, may have obtained the fragment without integrating it into a chromosome, or may have integrated the fragment into a chromosome.
The gray-scale image and the fluorescence image can be shot simultaneously with a microscope, under the same shooting conditions, for the same batch of transfected cells in the cell culture pool. The resulting gray-scale and fluorescence images may each contain one or more cells, and the coordinates of each cell in the gray-scale image correspond to its coordinates in the fluorescence image. Because a single gray-scale map and fluorescence map can contain several cells at once, the two images can be divided after acquisition to obtain the training cell gray-scale map and training cell fluorescence map for each individual cell, as shown in fig. 3a and 3 b.
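The division of a full-field image into single-cell patches can be sketched as follows; the cell centers here are hypothetical detected coordinates, and, as the text notes, the same coordinates index both the grayscale image and the fluorescence image, yielding matched training pairs.

```python
import numpy as np

def crop_cell(image, center, size=32):
    """Cut a single-cell patch of side `size` out of a full-field image,
    centered on the given (row, col) coordinate."""
    r, c = center
    half = size // 2
    return image[r - half:r + half, c - half:c + half]

field_gray = np.zeros((128, 128), dtype=np.uint8)        # full-field grayscale
field_fluo = np.zeros((128, 128, 3), dtype=np.uint8)     # full-field fluorescence
cell_centers = [(40, 40), (90, 70)]                      # hypothetical detected cells
pairs = [(crop_cell(field_gray, c), crop_cell(field_fluo, c))
         for c in cell_centers]                          # matched training pairs
```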
Step 202, determining the real protein expression quantity of the cell in the corresponding training cell gray-scale map according to the training cell fluorescence map, and obtaining an expression quantity label corresponding to the training cell gray-scale map based on the real protein expression quantity;
after the training cell fluorescence image is obtained, the real protein expression quantity of the cell in the corresponding training cell gray-scale image can be determined according to the training cell fluorescence image, and the expression quantity label corresponding to the training cell gray-scale image is obtained based on the real protein expression quantity.
And 203, training a convolutional neural network model by using the training cell gray-scale map and the expression quantity label to generate a target convolutional neural network model.
After the expression quantity labels are obtained, the convolutional neural network model can be trained by adopting a training cell gray-scale map and the corresponding expression quantity labels to generate a target convolutional neural network model.
In this embodiment, the training cell gray-scale map and the expression level label are used to train the convolutional neural network model to generate the target convolutional neural network model, so that the relationship between the cell gray-scale map and the protein expression level of the cell in the cell gray-scale map can be established, and a model support is provided for rapidly screening the cell with high protein expression level.
In an embodiment, the determining, according to the training cell fluorescence map, a true protein expression level of a cell in a corresponding training cell gray scale map, and obtaining an expression level label corresponding to the training cell gray scale map based on the true protein expression level may include the following steps:
determining the value of a green channel in the fluorescence map of the training cells;
determining the real protein expression level of the cell in the corresponding training cell gray-scale map according to the value of the green channel in the training cell fluorescence map, by accumulating the green channel values over the cell area of the fluorescence map;
and determining the real protein expression level as the expression level label corresponding to the training cell gray-scale map.
In practical applications, the protein produced by the target gene (e.g., the exogenous DNA fragment) emits fluorescence at a specific wavelength. After the training cell fluorescence map is obtained, the value of its green channel (also referred to as the fluorescence value) can be determined, and the true protein expression level of the cell in the corresponding training cell gray-scale map is derived from this value; the true expression level then serves as the expression level label of the gray-scale map. The fluorescence value is positively correlated with the protein expression level, so once a mapping between the two quantities is obtained, the true protein expression level can be determined from the green brightness value.
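A minimal sketch of deriving the expression label from the green channel, assuming (as stated above) that fluorescence is accumulated over the cell area and is positively correlated with expression; any further calibration mapping from fluorescence to absolute expression is omitted.

```python
import numpy as np

def expression_label(fluorescence_rgb):
    """Accumulate the green-channel fluorescence over the cell region as
    the measured (relative) protein expression of the cell."""
    green = fluorescence_rgb[..., 1].astype(np.float64)  # channel 1 = green
    return float(green.sum())

# A tiny 2x2 RGB patch: only the green channel contributes to the label.
patch = np.zeros((2, 2, 3), dtype=np.uint8)
patch[..., 1] = [[10, 20], [30, 40]]
label = expression_label(patch)
```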
In this embodiment, the real protein expression amount of the cell in the training cell grayscale is determined according to the value corresponding to the green channel in the training cell fluorescence map, and the real protein expression amount is determined as the expression amount label corresponding to the training cell grayscale, so that the value corresponding to the green channel in the training cell fluorescence map can be used as an intermediate variable, the protein expression amount of the cell in the training cell grayscale is quantified, the expression amount label corresponding to the training cell grayscale is obtained, and accurate real protein expression amount data is provided.
In one embodiment, as shown in fig. 4, the training a convolutional neural network model using the training cell gray scale map and the expression quantity labels to generate a target convolutional neural network model may include the following steps:
step 401, inputting the training cell gray level map into the convolutional neural network model, and determining a protein expression amount corresponding to the training cell gray level map;
in practical application, when the convolutional neural network model is trained, the training cell gray-scale map may be input into the model, and the protein expression amount corresponding to the map determined from the model's output. The convolutional neural network model is used to predict the protein expression amount of the cells in the training cell gray-scale map: it takes the gray-scale map as input and outputs the corresponding predicted expression amount.
Step 402, determining a training error according to the protein expression amount corresponding to the training cell gray-scale map and the expression amount label;
after the protein expression amount corresponding to the training cell gray-scale map is obtained, note that this amount is the one predicted by the convolutional neural network model; in the initial stage of training, the predicted amount differs from the real protein expression amount. Based on this difference, the training error between the predicted protein expression amount and the expression quantity label can be determined. In practical application, this training error can be calculated through a cost function.
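The patent does not fix a particular cost function; a common choice, shown here purely as an illustration, is the mean squared error between the predicted expression amounts and the expression quantity labels:

```python
def mse_training_error(predicted, labels):
    """Illustrative cost function: mean squared error between the
    predicted protein expression amounts and the expression labels."""
    assert len(predicted) == len(labels)
    return sum((p - y) ** 2 for p, y in zip(predicted, labels)) / len(predicted)

# Two training maps: one prediction is off by 2, the other is exact.
err = mse_training_error([3.0, 5.0], [1.0, 5.0])  # (4 + 0) / 2 = 2.0
```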
Step 403, adjusting the network parameters of the convolutional neural network model in the direction of reducing the training error to obtain optimal network parameters, and generating a target convolutional neural network model using the optimal network parameters.
After the training error is determined, the network parameters of the convolutional neural network model can be adjusted, with the goal of reducing the error, until the optimal network parameters are obtained. A target convolutional neural network model can then be generated based on these optimal network parameters.
In this embodiment, a training error is determined from the protein expression amount predicted for the training cell gray-scale map and its expression quantity label; the network parameters of the convolutional neural network model are adjusted according to this error to obtain the optimal network parameters, from which the target convolutional neural network model is generated. The model can thus be continuously trained and optimized using the difference between the predicted and real protein expression amounts.
In one embodiment, the convolutional neural network model includes a multi-layer structure, and the adjusting the network parameters of the convolutional neural network model by reducing the error according to the training error to obtain the optimal network parameters may include the following steps:
judging whether the training error has converged and is smaller than a preset error threshold; if so, determining the network parameters of the current convolutional neural network model as the optimal network parameters; if not, back-propagating the training error from the last layer of the convolutional neural network model, adjusting the network parameters of each layer in the direction of reducing the error, and returning to the step of inputting the training cell gray-scale map into the model and determining the corresponding protein expression quantity.
In practical applications, the convolutional neural network model may include a multi-layered structure, such as max pooling layers, average pooling layers, and multiple convolutional layers. When the model is trained, it is judged whether the training error has converged and is smaller than a preset error threshold.
If so, the protein expression quantity predicted for the training cell gray-scale map is deemed close to the real protein expression quantity, and the network parameters of the current convolutional neural network model are determined as the optimal network parameters. If not, the training error can be back-propagated from the last layer of the model, the network parameters of each layer adjusted in the direction of reducing the error, and after the adjustment the process returns to the step of inputting the training cell gray-scale map into the model and determining the corresponding protein expression quantity.
In order to enable those skilled in the art to better understand the above steps, the following is an example to illustrate the embodiments of the present application, but it should be understood that the embodiments of the present application are not limited thereto.
As shown in fig. 5, a training cell gray-scale map and a training fluorescence map may be obtained. After initializing the network parameters of each layer of the convolutional neural network model, the training cell gray-scale map may be input into the model, forward propagation performed to obtain an output value (i.e., the protein expression amount corresponding to the training cell gray-scale map in the present application), and the training error between the output value and the true value (i.e., the expression quantity label in the present application) calculated.
After the training error is obtained, whether it has converged can be judged. If not, the training error can be back-propagated, the connection weights and biases of each layer (i.e., the network parameters in the present application) updated using the SGD (stochastic gradient descent) algorithm or another optimization algorithm, and the output value obtained again through forward propagation. If it has converged, the current network parameters can be determined as the optimal network parameters, and a target convolutional neural network model generated based on them.
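The forward-propagate, compute-error, back-propagate cycle of fig. 5 can be sketched with a one-parameter model standing in for the network; this is only a toy illustration, with plain gradient descent standing in for SGD and an arbitrary error threshold:

```python
def train(x, y_true, w=0.0, lr=0.1, threshold=1e-6, max_iters=1000):
    """Toy stand-in for the fig. 5 loop: forward propagate, compute the
    training error against the true value, and back-propagate (here a
    single gradient step) until the error converges below the threshold."""
    for _ in range(max_iters):
        y_pred = w * x                      # forward propagation
        error = (y_pred - y_true) ** 2      # training error vs. true value
        if error < threshold:               # converged: current w is "optimal"
            break
        grad = 2 * (y_pred - y_true) * x    # back-propagation
        w -= lr * grad                      # SGD-style parameter update
    return w

w_opt = train(x=2.0, y_true=4.0)  # converges toward w = 2.0
```

Each pass mirrors one iteration of the described procedure: the parameter update only happens while the error has not yet converged, after which the current parameter is kept as the optimal one.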
In practical application, the transfected cells in the cell culture pool can be divided into a training set, a verification set, and a test set. The gray-scale maps of the cells in the training set are used to train the convolutional neural network model. The gray-scale maps of the cells in the verification set are used to validate the trained target convolutional neural network model, preventing overfitting on the training set; the model's accuracy during training can be determined through the verification set. The cells of the test set correspond to the cells to be tested in the present application: from these, the target cells whose protein expression quantity satisfies the set condition can be determined by the target convolutional neural network model.
In this embodiment, whether the training error has converged is judged; if not, the error is back-propagated from the last layer of the convolutional neural network model and the network parameters of each layer are adjusted in the direction of reducing the error. Through iterative calculation, the network parameters are continuously optimized until the predicted protein expression amount approaches the real one, improving the prediction accuracy of the target convolutional neural network model.
In one embodiment, the convolutional neural network model comprises a first network structure, a second network structure, a third network structure, a fourth network structure, and a fully-connected layer, and there may be a plurality of training cell gray-scale maps.
As shown in fig. 6, the inputting the training cell gray scale map into the convolutional neural network model to determine the protein expression level corresponding to the training cell gray scale map may include the following steps:
step 601, inputting a plurality of training cell gray level maps into the convolutional neural network model;
in a particular implementation, multiple training cell grayscale maps may be input to the convolutional neural network model.
Step 602, aiming at each training cell gray-scale map, acquiring corresponding training cell characteristics through the first network structure; inputting the training cell characteristics into the second network structure, the third network structure and the fourth network structure to obtain corresponding first cell characteristics, second cell characteristics and third cell characteristics;
for each training cell gray-scale map, the corresponding training cell features can be obtained through the first network structure; these features are then input into the second, third, and fourth network structures to obtain the corresponding first, second, and third cell features.
Step 603, connecting the first cell characteristic, the second cell characteristic and the third cell characteristic in parallel to obtain a characteristic fusion result corresponding to each training cell gray level image; wherein the first, second, and third cellular features have different levels of abstract expression;
after the first, second, and third cell features are obtained, they may be connected in parallel to obtain the feature fusion result corresponding to each training cell gray-scale map, where the three cell features may have different abstract expression levels.
In a specific implementation, the first network structure may be a feature extraction network composed of 10 convolutional layers, the second network structure may be composed of 11 convolutional layers and an average pooling layer, the third network structure may be composed of 2 convolutional layers and a maximum pooling layer, and the fourth network structure may be a network composed of an average pooling layer added after the third network structure, that is, the fourth network structure may be composed of 2 convolutional layers, a maximum pooling layer, and an average pooling layer.
In the convolutional neural network model, the shallow layers extract simple features from the training cell gray-scale map, for example cell morphology, color, texture, and cell edges, which reflect specific features of a single dimension of a cell; the deeper layers abstract the features extracted by the shallow layers to obtain cell features that reflect the cell as a whole. On this basis, after the first network structure extracts the specific training cell features, those features can be further input into the second, third, and fourth network structures, and networks of different depths yield cell features with different abstract expression levels.
In practical application, the first, second, and third cell features can be output in matrix form. After the matrices corresponding to the three cell features are obtained, each matrix can be multiplied by a different weight and the results summed; the sum is the feature fusion result. The weight of each matrix is positively correlated with the proportion of the cell features extracted by the corresponding network structure: the larger the weight, the higher that proportion.
Step 604, inputting the feature fusion results corresponding to the training cell gray-scale maps into the fully-connected layer, and determining the protein expression amounts corresponding to the training cell gray-scale maps according to the output of the fully-connected layer.
After the feature fusion result is determined, the feature fusion results corresponding to the training cell gray-scale maps can be input into the full-link layer, and the protein expression amounts corresponding to the training cell gray-scale maps can be determined according to the output result of the full-link layer.
In order to enable those skilled in the art to better understand the above steps, the following is an example to illustrate the embodiments of the present application, but it should be understood that the embodiments of the present application are not limited thereto.
For the plurality of training cell gray-scale maps, an input tensor of size B×3×448×448 is defined, where B is the number of training cell gray-scale maps input to the network each time the convolutional neural network model is trained, 3 is the number of image channels (R, G, B), and 448 is the width and height of the image.
After the plurality of training cell gray-scale maps are input into the first network structure and the training cell features corresponding to each map are obtained, those features can be input into the second, third, and fourth network structures respectively, each of which outputs a matrix of size B×100, i.e., the first, second, and third cell features. The matrix size can be adjusted during training; that is, the value 100 can be changed according to actual needs.
After the 3 matrices of size B×100 are obtained, each matrix may be multiplied by a different weight and the results summed to obtain a matrix of size B×100, i.e., the feature fusion result, where each weight may be a number varying in the interval 0 to 1.
After the feature fusion result is obtained, it may be input into the fully-connected layer, whose input size corresponds to the size of the matrix and whose output size is 1. In this example, the feature fusion result passes through one fully-connected layer with 100 inputs and 1 output, yielding a vector of size B×1, where each component corresponds to one training cell gray-scale map and the value of the component is the second protein expression amount.
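The fusion and fully-connected steps above can be condensed into the following pure-Python sketch, using toy dimensions (batch B=2 and feature size 3 instead of 100) and arbitrary weights standing in for the learned values; the function names are illustrative, not from the patent:

```python
def fuse_features(feat1, feat2, feat3, weights):
    """Parallel connection of the three B x D feature matrices: multiply
    each matrix by its weight and sum element-wise (feature fusion)."""
    w1, w2, w3 = weights
    return [[w1 * a + w2 * b + w3 * c
             for a, b, c in zip(r1, r2, r3)]
            for r1, r2, r3 in zip(feat1, feat2, feat3)]

def fully_connected(fused, fc_weights, bias=0.0):
    """D-input, 1-output fully-connected layer: one expression amount per
    training cell gray-scale map in the batch (a B x 1 vector)."""
    return [sum(w * x for w, x in zip(fc_weights, row)) + bias
            for row in fused]

# Toy batch of B=2 feature rows of size D=3 from the three branches.
f1 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
f2 = [[0.0, 2.0, 0.0], [2.0, 0.0, 0.0]]
f3 = [[0.0, 0.0, 4.0], [0.0, 0.0, 4.0]]
fused = fuse_features(f1, f2, f3, weights=(0.5, 0.25, 0.25))
out = fully_connected(fused, fc_weights=[1.0, 1.0, 1.0])
```

The weights in `fuse_features` lie in the 0-to-1 interval described above, and `out` plays the role of the B×1 output vector whose components are the per-map expression amounts.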
In an embodiment, the determining, according to the protein expression levels corresponding to the plurality of test cells, a target cell whose protein expression level satisfies a set condition from the plurality of test cells may include:
sorting the protein expression quantities corresponding to the plurality of cells to be tested, and, from the sorted quantities, determining a preset number of top-ranked protein expression quantities as target expression quantities; determining the gray-scale maps of the cells to be tested corresponding to the target expression quantities, and determining the corresponding cells to be tested as the target cells.
In a specific implementation, after obtaining protein expression quantities corresponding to a plurality of cells to be tested, the protein expression quantities corresponding to the plurality of cells to be tested may be ranked, and a preset number of protein expression quantities ranked at the top are determined as target expression quantities from the ranked protein expression quantities.
Specifically, the protein expression levels corresponding to the plurality of cells to be tested may be sorted in descending order, i.e., from large to small, and the protein expression levels of the first N cells after sorting determined as the target expression levels. Of course, in practical applications, protein expression levels exceeding a preset threshold may instead be determined as the target expression levels.
After the target expression level is determined, a gray-scale map of the cell to be detected corresponding to the target expression level can be determined, and the cell to be detected corresponding to the gray-scale map of the cell to be detected is determined as the target cell. The target cell can be used for culturing cell strains.
In this embodiment, the protein expression levels corresponding to the plurality of cells to be tested are sorted, and a preset number of cells with the highest expression levels are determined as the target cells. Cells with high protein expression levels can thus be screened rapidly, greatly reducing the screening workload.
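The screening rules above can be sketched as follows, assuming each cell to be tested is represented as a hypothetical (cell_id, predicted_expression) pair; both the top-N rule and the threshold alternative from the text are shown:

```python
def top_n_cells(predictions, n):
    """Sort predicted expression amounts in descending order and keep
    the top-n cells as target cells.

    predictions: list of (cell_id, predicted_expression) pairs.
    """
    ranked = sorted(predictions, key=lambda p: p[1], reverse=True)
    return [cell_id for cell_id, _ in ranked[:n]]

def cells_above_threshold(predictions, threshold):
    """Alternative rule from the text: keep every cell whose predicted
    expression amount exceeds the preset threshold."""
    return [cell_id for cell_id, amount in predictions if amount > threshold]

preds = [("c1", 0.8), ("c2", 2.5), ("c3", 1.1)]
targets = top_n_cells(preds, 2)             # the two highest expressors
also = cells_above_threshold(preds, 1.0)    # everything above 1.0
```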
In one embodiment, the obtaining of the training cell gray scale map may include the following steps:
obtaining an original cell gray-scale image for model training, and carrying out normalization processing on the original cell gray-scale image; performing data enhancement processing on the processed original cell gray-scale image to obtain a training cell gray-scale image; the data enhancement processing comprises any one or more of: rotation processing, turnover processing, contrast enhancement processing and random cutting processing.
In a specific implementation, an original cell gray-scale map used for model training may be acquired, and the original cell gray-scale map may be normalized, where the original cell gray-scale map may be a gray-scale map obtained by shooting cells as a training set using a microscope.
After normalization, the processed raw cell gray-scale image may be subjected to data enhancement processing, such as rotating, flipping, random cropping, or enhancing the contrast of the image.
In this embodiment, the training cell gray-scale maps are obtained by applying data enhancement processing to the processed original cell gray-scale maps. The pool of training cell gray-scale maps available for training the convolutional neural network model can thus be enlarged, rapidly expanding the training samples when they are insufficient and providing data support for training the model.
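A minimal sketch of the augmentation options listed above, operating on a 2-D gray-scale image represented as a nested list; the normalization divides by 255 and the contrast stretch uses an assumed gain about the mid-level, since the patent does not fix these formulas:

```python
import random

def normalize(img):
    """Scale pixel values from 0..255 into 0..1 (normalization step)."""
    return [[p / 255.0 for p in row] for row in img]

def hflip(img):
    """Horizontal flip (turnover processing)."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate 90 degrees clockwise (rotation processing)."""
    return [list(row) for row in zip(*img[::-1])]

def stretch_contrast(img, gain=1.2):
    """Contrast enhancement about the 0.5 mid-level (assumed gain),
    clamped back into 0..1."""
    return [[min(1.0, max(0.0, 0.5 + gain * (p - 0.5))) for p in row]
            for row in img]

def random_crop(img, size, rng=random):
    """Random square crop of the given side length (random cutting)."""
    h, w = len(img), len(img[0])
    top = rng.randrange(h - size + 1)
    left = rng.randrange(w - size + 1)
    return [row[left:left + size] for row in img[top:top + size]]
```

In practice, libraries such as Pillow or torchvision provide these transforms directly; the sketch only shows the operations named in the text.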
It should be understood that although the various steps in the flow charts of fig. 1-6 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict order of execution, and the steps may be performed in other sequences. Moreover, at least some of the steps in fig. 1-6 may include multiple sub-steps or stages, which are not necessarily performed at the same time or in sequence, but may be performed in turn or alternately with other steps or with sub-steps of other steps.
In one embodiment, as shown in fig. 7, there is provided a convolutional neural network-based cell screening apparatus, which may include:
the cell gray-scale image acquisition module 701 is used for acquiring cell gray-scale images to be detected corresponding to a plurality of cells to be detected in the cell culture pool;
a first input module 702, configured to input a gray-scale map of multiple cells to be detected corresponding to the multiple cells to be detected into a target convolutional neural network model; the target convolutional neural network model is obtained by training a plurality of training cell gray-scale images with expression quantity labels, the expression quantity labels are used for representing the real protein expression quantity of cells in each training cell gray-scale image, and the target convolutional neural network model is used for detecting the protein expression quantity of the cells in the cell gray-scale image to be detected which is input into the model;
a protein expression prediction module 703, configured to obtain, according to the output of the target convolutional neural network model, protein expression amounts corresponding to the multiple cells to be detected, respectively;
and the cell screening module 704 is configured to determine, according to the protein expression levels corresponding to the multiple cells to be detected, a target cell whose protein expression level meets a set condition from the multiple cells to be detected.
In one embodiment, further comprising:
the training cell gray-scale image acquisition module is used for acquiring a training cell gray-scale image and a training cell fluorescence image corresponding to the training cell gray-scale image;
the expression quantity label determining module is used for determining the real protein expression quantity of the cells in the corresponding training cell gray-scale image according to the training cell fluorescence image and obtaining the expression quantity label corresponding to the training cell gray-scale image based on the real protein expression quantity;
and the training module is used for training a convolutional neural network model by adopting the training cell gray-scale map and the expression quantity label to generate a target convolutional neural network model.
In one embodiment, the expression level tag determination module comprises:
the green brightness value determination submodule is used for determining the value of a green channel in the training cell fluorescence map;
the real protein expression quantity determining submodule is used for determining the real protein expression quantity of the cells in the corresponding training cell gray-scale image according to the numerical value of the green channel in the training cell fluorescent image;
and the expression quantity label generation submodule is used for determining the real protein expression quantity as an expression quantity label corresponding to the training cell gray-scale map.
In one embodiment, the training module comprises:
the protein expression quantity determining submodule is used for inputting the training cell gray level map into the convolutional neural network model and determining the protein expression quantity corresponding to the training cell gray level map;
the training error determining submodule is used for determining a training error according to the protein expression amount corresponding to the training cell gray-scale map and the expression amount label;
and the parameter adjusting submodule is used for adjusting the network parameters of the convolutional neural network model by reducing errors according to the training errors to obtain optimal network parameters, and generating a target convolutional neural network model by adopting the optimal network parameters.
In one embodiment, the convolutional neural network model includes a plurality of layers, and the parameter adjustment submodule includes:
the judging unit is used for judging whether the training error is converged and is smaller than a preset error threshold value; if yes, calling a parameter determining unit; if not, calling a back propagation unit;
the parameter determining unit is used for determining the network parameters of the current convolutional neural network model as the optimal network parameters;
and the back propagation unit is used for back-propagating the training error from the last layer of the convolutional neural network model, adjusting the network parameters of each layer in the direction of reducing the error, and returning to the step of inputting the training cell gray-scale map into the model and determining the corresponding protein expression amount.
In one embodiment, the convolutional neural network model comprises a first network structure, a second network structure, a third network structure, a fourth network structure, and a fully-connected layer; there is a plurality of training cell gray-scale maps;
the protein expression quantity determination submodule comprises:
the second input unit is used for inputting a plurality of training cell gray level maps into the convolutional neural network model;
the training cell characteristic acquisition unit is used for acquiring corresponding training cell characteristics through the first network structure aiming at each training cell gray-scale image; inputting the training cell characteristics into the second network structure, the third network structure and the fourth network structure to obtain corresponding first cell characteristics, second cell characteristics and third cell characteristics;
the characteristic fusion result acquisition unit is used for connecting the first cell characteristic, the second cell characteristic and the third cell characteristic in parallel to obtain a characteristic fusion result corresponding to each training cell gray-scale image; wherein the first cellular characteristic, the second cellular characteristic, and the third cellular characteristic have different levels of abstract expression;
and the result output unit is used for inputting the feature fusion results corresponding to the training cell gray-scale maps to the full-connection layer and determining the second protein expression amounts corresponding to the training cell gray-scale maps according to the output result of the full-connection layer.
In one embodiment, the cell screening module 704 includes:
the sequencing submodule is used for sequencing the protein expression quantities corresponding to the multiple cells to be tested and determining the protein expression quantities with the preset number in the top sequence as target expression quantities from the sequenced protein expression quantities corresponding to the multiple cells to be tested;
and the target cell determination submodule is used for determining a to-be-detected cell gray-scale map corresponding to the target expression level and determining the to-be-detected cell corresponding to the to-be-detected cell gray-scale map as the target cell.
In one embodiment, the training cell gray scale map obtaining module includes:
the original cell gray level image acquisition sub-module is used for acquiring an original cell gray level image used for model training and carrying out normalization processing on the original cell gray level image;
the data enhancement processing submodule is used for carrying out data enhancement processing on the processed original cell gray-scale image to obtain a training cell gray-scale image; the data enhancement processing comprises any one or more of: rotation processing, turnover processing, contrast enhancement processing and random cutting processing.
For the specific definition of the cell screening apparatus based on the convolutional neural network, refer to the above definition of the cell screening method based on the convolutional neural network, and are not described herein again. The modules in the convolutional neural network-based cell screening device can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a convolutional neural network-based cell screening method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
obtaining a gray level diagram of cells to be detected corresponding to a plurality of cells to be detected in a cell culture pool;
inputting a plurality of cell gray-scale maps to be detected corresponding to a plurality of cells to be detected into a target convolutional neural network model; the target convolutional neural network model is obtained by training a plurality of training cell gray-scale images with expression quantity labels, the expression quantity labels are used for representing the real protein expression quantity of cells in each training cell gray-scale image, and the target convolutional neural network model is used for detecting the protein expression quantity of the cells in the cell gray-scale image to be detected which is input into the model;
obtaining protein expression quantities corresponding to the multiple cells to be detected respectively according to the output of the target convolutional neural network model;
and determining target cells with protein expression levels meeting set conditions from the multiple cells to be tested according to the protein expression levels corresponding to the multiple cells to be tested.
In one embodiment, the steps in the other embodiments described above are also implemented when the computer program is executed by a processor.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring to-be-detected cell gray-scale maps respectively corresponding to a plurality of cells to be detected in a cell culture pool;
inputting the plurality of to-be-detected cell gray-scale maps corresponding to the plurality of cells to be detected into a target convolutional neural network model; the target convolutional neural network model is obtained by training with a plurality of training cell gray-scale maps carrying expression quantity labels, the expression quantity labels represent the real protein expression quantities of the cells in the respective training cell gray-scale maps, and the target convolutional neural network model is used for detecting the protein expression quantity of the cell in each to-be-detected cell gray-scale map input into the model;
obtaining, according to the output of the target convolutional neural network model, the protein expression quantities respectively corresponding to the plurality of cells to be detected;
and determining, from the plurality of cells to be detected according to their corresponding protein expression quantities, target cells whose protein expression quantities satisfy a set condition.
In one embodiment, the computer program, when executed by the processor, further performs the steps of the other embodiments described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, and the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of them contains no contradiction, it should be considered to fall within the scope of the present specification.
The above-mentioned embodiments express only several implementations of the present application, and although they are described in relative specificity and detail, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for cell screening based on a convolutional neural network, the method comprising:
acquiring to-be-detected cell gray-scale maps respectively corresponding to a plurality of cells to be detected in a cell culture pool;
inputting the plurality of to-be-detected cell gray-scale maps corresponding to the plurality of cells to be detected into a target convolutional neural network model; the target convolutional neural network model is obtained by training with a plurality of training cell gray-scale maps carrying expression quantity labels, the expression quantity labels represent the real protein expression quantities of the cells in the respective training cell gray-scale maps, and the target convolutional neural network model is used for detecting the protein expression quantity of the cell in each to-be-detected cell gray-scale map input into the model;
obtaining, according to the output of the target convolutional neural network model, the protein expression quantities respectively corresponding to the plurality of cells to be detected;
and determining, from the plurality of cells to be detected according to their corresponding protein expression quantities, target cells whose protein expression quantities satisfy a set condition.
2. The method of claim 1, further comprising:
acquiring a training cell gray-scale map and a training cell fluorescence map corresponding to the training cell gray-scale map;
determining the real protein expression quantity of the cells in the corresponding training cell gray-scale map according to the training cell fluorescence map, and obtaining the expression quantity label corresponding to the training cell gray-scale map based on the real protein expression quantity;
and training a convolutional neural network model by adopting the training cell gray-scale map and the expression quantity label to generate a target convolutional neural network model.
3. The method according to claim 2, wherein the determining the real protein expression level of the cell in the corresponding training cell gray-scale map according to the training cell fluorescence map, and obtaining the expression level label corresponding to the training cell gray-scale map based on the real protein expression level comprises:
determining the value of a green channel in the fluorescence map of the training cells;
determining the real protein expression quantity of the cells in the corresponding training cell gray-scale map according to the value of the green channel in the training cell fluorescence map;
and determining the real protein expression quantity as an expression quantity label corresponding to the training cell gray-scale map.
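The labelling procedure of claim 3 can be sketched as follows. Treating the sum of the green-channel pixel values as the real protein expression quantity is an illustrative assumption; the claim fixes only that the label is derived from the green channel of the fluorescence map, not the exact mapping.

```python
import numpy as np

# Derive an expression-quantity label from the green channel of an
# RGB fluorescence image (shape H x W x 3, channel order R, G, B).
def expression_label(fluorescence_rgb):
    green = fluorescence_rgb[:, :, 1].astype(np.float64)  # green channel
    return float(green.sum())  # assumed proxy for real expression quantity

# Usage: a 2x2 image with uniform green fluorescence of intensity 100
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[:, :, 1] = 100
print(expression_label(img))  # → 400.0
```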
4. The method of claim 2, wherein training the convolutional neural network model using the training cell grayscale map and the expression level labels to generate a target convolutional neural network model comprises:
inputting the training cell gray-scale map into the convolutional neural network model, and determining the protein expression quantity corresponding to the training cell gray-scale map;
determining a training error according to the protein expression quantity corresponding to the training cell gray-scale map and the expression quantity label;
and adjusting the network parameters of the convolutional neural network model in an error-reducing direction according to the training error to obtain optimal network parameters, and generating the target convolutional neural network model with the optimal network parameters.
5. The method of claim 4, wherein the convolutional neural network model comprises a plurality of layers, and wherein adjusting the network parameters of the convolutional neural network model by reducing the error according to the training error to obtain the optimal network parameters comprises:
judging whether the training error is converged and is smaller than a preset error threshold value;
if so, determining the network parameters of the current convolutional neural network model as the optimal network parameters;
if not, back-propagating the training error from the last layer of the convolutional neural network model, adjusting the network parameters of each layer of the convolutional neural network model in an error-reducing direction, and returning to the step of inputting the training cell gray-scale map into the convolutional neural network model and determining the protein expression quantity corresponding to the training cell gray-scale map.
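The convergence test of claim 5 can be sketched as follows, with a one-parameter linear model (`w * x`) standing in for the claimed convolutional neural network. The control flow mirrors the claim — forward pass, error computation, threshold check, error-reducing parameter update, repeat — while the model itself is a deliberate simplification.

```python
# Train until the error converges below a preset threshold, as in claim 5.
def train_until_converged(xs, labels, lr=0.01, err_threshold=1e-4, max_iters=10000):
    w = 0.0  # single stand-in "network parameter"
    for _ in range(max_iters):
        preds = [w * x for x in xs]                       # forward pass
        error = sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(xs)
        if error < err_threshold:                         # converged: keep parameters
            return w
        # "back-propagate" the error and adjust the parameter to reduce it
        grad = sum(2 * (p - y) * x for p, y, x in zip(preds, labels, xs)) / len(xs)
        w -= lr * grad
    return w

# Usage: labels follow y = 2x, so the optimal parameter is w = 2
w = train_until_converged([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(round(w, 2))  # → 2.0
```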
6. The method of claim 4, wherein the convolutional neural network model comprises a first network structure, a second network structure, a third network structure, a fourth network structure and a fully-connected layer, and the training cell gray-scale map comprises a plurality of training cell gray-scale maps;
inputting the training cell gray-scale maps into the convolutional neural network model and determining the protein expression quantities corresponding to the training cell gray-scale maps comprises:
inputting the plurality of training cell gray-scale maps into the convolutional neural network model;
for each training cell gray-scale map, acquiring corresponding training cell features through the first network structure, and inputting the training cell features into the second network structure, the third network structure and the fourth network structure to obtain corresponding first cell features, second cell features and third cell features;
connecting the first cell features, the second cell features and the third cell features in parallel to obtain a feature fusion result corresponding to each training cell gray-scale map, wherein the first, second and third cell features have different levels of abstract expression;
and inputting the feature fusion results corresponding to the training cell gray-scale maps into the fully-connected layer, and determining the protein expression quantities corresponding to the training cell gray-scale maps according to the output of the fully-connected layer.
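The parallel connection of claim 6 can be sketched as a concatenation of the three feature vectors into one fusion result before the fully-connected layer. The vector shapes here are illustrative assumptions, not values fixed by the claim.

```python
import numpy as np

# "Connect in parallel": concatenate the three feature vectors produced by
# the second, third and fourth network structures into one fusion result.
def fuse_features(first, second, third):
    return np.concatenate([first, second, third])

# Usage with toy feature vectors at different assumed abstraction levels
f1 = np.array([1.0, 2.0])   # e.g. low-level features
f2 = np.array([3.0])        # e.g. mid-level features
f3 = np.array([4.0, 5.0])   # e.g. high-level features
print(fuse_features(f1, f2, f3))  # → [1. 2. 3. 4. 5.]
```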
7. The method according to claim 1, wherein determining, from the plurality of cells to be detected according to their corresponding protein expression quantities, the target cells whose protein expression quantities satisfy the set condition comprises:
sorting the protein expression quantities corresponding to the plurality of cells to be detected, and determining, from the sorted protein expression quantities, a preset number of top-ranked protein expression quantities as target expression quantities;
and determining the to-be-detected cell gray-scale maps corresponding to the target expression quantities, and determining the cells to be detected corresponding to these gray-scale maps as the target cells.
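The screening rule of claim 7 can be sketched as a top-K selection: sort the predicted expression quantities in descending order and keep a preset number of top-ranked cells.

```python
# Return the indices of the k cells with the highest predicted expression.
def top_k_cells(expressions, k):
    ranked = sorted(range(len(expressions)),
                    key=lambda i: expressions[i], reverse=True)
    return ranked[:k]

# Usage: four cells, keep the preset number (k = 2) of top-ranked ones
print(top_k_cells([0.3, 0.9, 0.1, 0.7], k=2))  # → [1, 3]
```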
8. A convolutional neural network-based cell screening apparatus, comprising:
the cell gray-scale map acquisition module is used for acquiring to-be-detected cell gray-scale maps respectively corresponding to a plurality of cells to be detected in a cell culture pool;
the first input module is used for inputting the plurality of to-be-detected cell gray-scale maps corresponding to the plurality of cells to be detected into a target convolutional neural network model; the target convolutional neural network model is obtained by training with a plurality of training cell gray-scale maps carrying expression quantity labels, the expression quantity labels represent the real protein expression quantities of the cells in the respective training cell gray-scale maps, and the target convolutional neural network model is used for detecting the protein expression quantity of the cell in each to-be-detected cell gray-scale map input into the model;
the protein expression quantity prediction module is used for obtaining, according to the output of the target convolutional neural network model, the protein expression quantities respectively corresponding to the plurality of cells to be detected;
and the cell screening module is used for determining, from the plurality of cells to be detected according to their corresponding protein expression quantities, target cells whose protein expression quantities satisfy a set condition.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program implements the steps of the convolutional neural network-based cell screening method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the convolutional neural network-based cell screening method of any one of claims 1 to 7.
CN202010869638.4A 2020-08-26 2020-08-26 Cell screening method and device based on convolutional neural network Active CN112037862B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010869638.4A CN112037862B (en) 2020-08-26 2020-08-26 Cell screening method and device based on convolutional neural network
PCT/CN2021/114165 WO2022042506A1 (en) 2020-08-26 2021-08-24 Convolutional neural network-based cell screening method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010869638.4A CN112037862B (en) 2020-08-26 2020-08-26 Cell screening method and device based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN112037862A true CN112037862A (en) 2020-12-04
CN112037862B CN112037862B (en) 2021-11-30

Family

ID=73580914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010869638.4A Active CN112037862B (en) 2020-08-26 2020-08-26 Cell screening method and device based on convolutional neural network

Country Status (2)

Country Link
CN (1) CN112037862B (en)
WO (1) WO2022042506A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112861986A (en) * 2021-03-02 2021-05-28 广东工业大学 Method for detecting blood fat subcomponent content based on convolutional neural network
WO2022042509A1 (en) * 2020-08-26 2022-03-03 深圳太力生物技术有限责任公司 Cell screening method and apparatus based on expression level prediction model
WO2022042506A1 (en) * 2020-08-26 2022-03-03 深圳太力生物技术有限责任公司 Convolutional neural network-based cell screening method and device
CN114360652A (en) * 2022-01-28 2022-04-15 深圳太力生物技术有限责任公司 Cell strain similarity evaluation method and similar cell strain culture medium formula recommendation method

Citations (8)

Publication number Priority date Publication date Assignee Title
US20040053876A1 (en) * 2002-03-26 2004-03-18 The Regents Of The University Of Michigan siRNAs and uses therof
US7655397B2 (en) * 2002-04-25 2010-02-02 The United States Of America As Represented By The Department Of Health And Human Services Selections of genes and methods of using the same for diagnosis and for targeting the therapy of select cancers
US20180349770A1 (en) * 2016-02-26 2018-12-06 Google Llc Processing cell images using neural networks
CN109102515A (en) * 2018-07-31 2018-12-28 浙江杭钢健康产业投资管理有限公司 A kind of method for cell count based on multiple row depth convolutional neural networks
CN109815870A (en) * 2019-01-17 2019-05-28 华中科技大学 The high-throughput functional gene screening technique and system of cell phenotype image quantitative analysis
CN110826379A (en) * 2018-08-13 2020-02-21 中国科学院长春光学精密机械与物理研究所 Target detection method based on feature multiplexing and YOLOv3
CN110838340A (en) * 2019-10-31 2020-02-25 军事科学院军事医学研究院生命组学研究所 Method for identifying protein biomarkers independent of database search
CN110992303A (en) * 2019-10-29 2020-04-10 平安科技(深圳)有限公司 Abnormal cell screening method and device, electronic equipment and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN112037862B (en) * 2020-08-26 2021-11-30 深圳太力生物技术有限责任公司 Cell screening method and device based on convolutional neural network

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
US20040053876A1 (en) * 2002-03-26 2004-03-18 The Regents Of The University Of Michigan siRNAs and uses therof
US7655397B2 (en) * 2002-04-25 2010-02-02 The United States Of America As Represented By The Department Of Health And Human Services Selections of genes and methods of using the same for diagnosis and for targeting the therapy of select cancers
US20180349770A1 (en) * 2016-02-26 2018-12-06 Google Llc Processing cell images using neural networks
CN109102515A (en) * 2018-07-31 2018-12-28 浙江杭钢健康产业投资管理有限公司 A kind of method for cell count based on multiple row depth convolutional neural networks
CN110826379A (en) * 2018-08-13 2020-02-21 中国科学院长春光学精密机械与物理研究所 Target detection method based on feature multiplexing and YOLOv3
CN109815870A (en) * 2019-01-17 2019-05-28 华中科技大学 The high-throughput functional gene screening technique and system of cell phenotype image quantitative analysis
CN110992303A (en) * 2019-10-29 2020-04-10 平安科技(深圳)有限公司 Abnormal cell screening method and device, electronic equipment and storage medium
CN110838340A (en) * 2019-10-31 2020-02-25 军事科学院军事医学研究院生命组学研究所 Method for identifying protein biomarkers independent of database search

Non-Patent Citations (1)

Title
NACEI: "A preliminary screening method for cell expression level based on the gray-level co-occurrence matrix", 《HTTP://WWW.360DOC.COM/DOCUMENT/20/0417/09/14292954_906582070.SHTML》 *

Cited By (5)

Publication number Priority date Publication date Assignee Title
WO2022042509A1 (en) * 2020-08-26 2022-03-03 深圳太力生物技术有限责任公司 Cell screening method and apparatus based on expression level prediction model
WO2022042506A1 (en) * 2020-08-26 2022-03-03 深圳太力生物技术有限责任公司 Convolutional neural network-based cell screening method and device
CN112861986A (en) * 2021-03-02 2021-05-28 广东工业大学 Method for detecting blood fat subcomponent content based on convolutional neural network
CN112861986B (en) * 2021-03-02 2022-04-22 广东工业大学 Method for detecting blood fat subcomponent content based on convolutional neural network
CN114360652A (en) * 2022-01-28 2022-04-15 深圳太力生物技术有限责任公司 Cell strain similarity evaluation method and similar cell strain culture medium formula recommendation method

Also Published As

Publication number Publication date
CN112037862B (en) 2021-11-30
WO2022042506A1 (en) 2022-03-03

Similar Documents

Publication Publication Date Title
CN112037862B (en) Cell screening method and device based on convolutional neural network
CN112001329B (en) Method and device for predicting protein expression amount, computer device and storage medium
CN111428818B (en) Deep learning model test method and device based on neural pathway activation state
CN111598213B (en) Network training method, data identification method, device, equipment and medium
CN108665065B (en) Method, device and equipment for processing task data and storage medium
KR20210032140A (en) Method and apparatus for performing pruning of neural network
CN112017730B (en) Cell screening method and device based on expression quantity prediction model
CN110866922B (en) Image semantic segmentation model and modeling method based on reinforcement learning and migration learning
CN111047563A (en) Neural network construction method applied to medical ultrasonic image
CN112287965A (en) Image quality detection model training method and device and computer equipment
CN113408802B (en) Energy consumption prediction network training method and device, energy consumption prediction method and device, and computer equipment
CN114972759A (en) Remote sensing image semantic segmentation method based on hierarchical contour cost function
CN112836820A (en) Deep convolutional network training method, device and system for image classification task
CN115239946A (en) Small sample transfer learning training and target detection method, device, equipment and medium
Kuchemüller et al. Efficient optimization of process strategies with model-assisted design of experiments
CN114169460A (en) Sample screening method, sample screening device, computer equipment and storage medium
CN111949530B (en) Test result prediction method and device, computer equipment and storage medium
Neydorf et al. Monochrome multitone image approximation with low-dimensional palette
CN113052217A (en) Prediction result identification and model training method and device thereof, and computer storage medium
CN112330671A (en) Method and device for analyzing cell distribution state, computer equipment and storage medium
CN116129189A (en) Plant disease identification method, plant disease identification equipment, storage medium and plant disease identification device
US11775822B2 (en) Classification model training using diverse training source and inference engine using same
CN112396083B (en) Image recognition, model training and construction and detection methods, systems and equipment
Itano et al. An automated image analysis and cell identification system using machine learning methods
Dhivya et al. Weighted particle swarm optimization algorithm for randomized unit testing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211022

Address after: 518048 No. 323-m, third floor, comprehensive Xinxing phase I, No. 1, Haihong Road, Fubao community, Fubao street, Futian District, Shenzhen, Guangdong Province

Applicant after: Shenzhen Taili Biotechnology Co.,Ltd.

Address before: 523560 building 3 and 4, gaobao green technology city, Tutang village, Changping Town, Dongguan City, Guangdong Province

Applicant before: Dongguan Taili Biological Engineering Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant