CN108319815A - Method and system for virtually staining cells

Method and system for virtually staining cells

Info

Publication number
CN108319815A
Authority
CN
China
Prior art keywords
staining
information
stained image
image
module
Prior art date
Legal status
Granted
Application number
CN201810109960.XA
Other languages
Chinese (zh)
Other versions
CN108319815B (en)
Inventor
刘昌灵
刘小晴
郝伶童
凌少平
Current Assignee
Trino Invensys (Beijing) Gene Technology Co Ltd
Original Assignee
Trino Invensys (Beijing) Gene Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Trino Invensys (Beijing) Gene Technology Co Ltd
Priority to CN201810109960.XA
Publication of CN108319815A
Application granted
Publication of CN108319815B
Legal status: Active


Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16B - BIOINFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
    • G16B40/00 - ICT specially adapted for biostatistics; ICT specially adapted for bioinformatics-related machine learning or data mining, e.g. knowledge discovery or pattern finding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16B - BIOINFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
    • G16B20/00 - ICT specially adapted for functional genomics or proteomics, e.g. genotype-phenotype associations


Abstract

The present invention relates to a method and a system for virtually staining cells. The method establishes, by machine learning, a prediction model that predicts second staining information from first staining information, virtually stains a cell staining image with the prediction of that model, and superimposes the result on the cell staining image. The system of the present invention corresponds to the method and can accurately execute the virtual staining method. Compared with conventional staining methods, the method and system obtain the second staining information by virtual staining, without performing the corresponding actual staining on the sample of the second cell staining image, which saves time and labor. At the same time, multiple virtual stains can easily be applied to the same cells of the same cell staining image, so that features such as cell type, number, distribution and positional relationship can be analyzed better, which benefits clinical treatment and scientific research.

Description

Method and system for virtually staining cells
Technical field
The present invention relates to the field of image processing, and in particular to a method and a system for virtually staining cells.
Background technology
Immunohistochemistry is a technique widely used in clinical practice and scientific research. Staining reveals the cellular morphology of a test sample and the expression of molecular markers, providing clinicians and researchers with important information for decision-making.
As research deepens, more and more markers need to be studied in the same sample to obtain useful information for integrated decision-making. For example, analyzing the tumor microenvironment of tumor-infiltrating lymphocytes requires studying more than 20 molecular markers. Existing immunohistochemical staining methods, however, have many shortcomings and struggle to meet the growing demand for multi-marker staining.
Conventional immunohistochemical staining methods currently include double immunohistochemical staining, serial-section staining, multicolor fluorescent immunohistochemical staining, repeated staining of the same tissue, and RNA-based section staining.
Double immunohistochemical staining applies two stains to the same section. The staining information obtainable is limited, which is restrictive now that research indices have diversified; moreover, on an ordinary stained section the later stain can mask the earlier one, making the stains difficult to distinguish.
Serial-section staining cuts serial sections from the pathological tissue and directly superimposes the staining information of subsequent images onto the first image. However, the cells actually stained are different cells, so the conclusions drawn are inaccurate when studying scattered tumor-infiltrating lymphocytes or highly heterogeneous tumors.
Multicolor fluorescent immunohistochemical staining can support 4 to 7 stains, but it requires expensive instruments, the staining workflow is complicated and time-consuming, and the number of stains is still limited.
Repeated staining of the same tissue uses easily eluted chromogens to repeat a stain-and-elute cycle on the same tissue section, preserving the result of each round. This method suffers from incomplete elution of the chromogen and from deformation of the image and tissue.
RNA-based section staining reflects protein expression indirectly by detecting RNA expression. It requires fresh samples, and because RNA and protein expression are not synchronous, it is both demanding on the sample and inaccurate in its results.
It can be seen that existing immunohistochemical staining methods are not suitable for multiple staining on the same tissue section, or the cost of multiple staining is too high and the number of stain indices is limited. There is therefore an urgent need in this field for a clinically acceptable, cost-effective method that can achieve multiple staining on the same tissue section, in particular a staining method that can calibrate seven or more indices simultaneously.
In view of this, the present invention is proposed.
Summary of the invention
The first object of the present invention is to provide a method for virtually staining cells. The virtual staining method can assess the degree of correlation between first staining information and second staining information and establish a prediction model, so that the second staining information can be predicted directly from the first staining information, which saves time and labor.
Further, the method of the present invention can predict and superimpose multiple kinds of staining information on the same cell staining image, without repeated elution and restaining and without expensive detection instruments or complicated staining workflows. The superimposed staining information can be used to study complex co-expression, proximity relationships and the like.
Further, the first staining information of the method is preferably nuclear staining information, cytoplasmic staining information or cytoskeletal staining information, and the machine learning is fully convolutional neural network learning. By combining morphology with deep learning, the method integrates different staining information into the same cell staining image, so that cells can be classified and cell features extracted more effectively, which benefits follow-up research.
Further, the second staining information of the present invention is preferably the staining information of a specific marker expressed by non-atypical cells, which makes prediction across patients possible.
Further, the first staining information of the present invention is preferably nuclear staining information, and the second staining information is preferably CD3 staining information and/or CD4 staining information. The nuclear staining information is highly correlated with the CD3 and CD4 staining information, so a model established on this basis can accurately predict CD3 and CD4 staining information from the nuclear staining information.
The second object of the present invention is to provide a system for the above method. The system can accurately execute the operating steps of the method and realizes the prediction and marking of staining information.
In order to achieve the above objects of the present invention, the following technical solutions are adopted:
A method for virtually staining cells, the method comprising the following steps:
(1) obtaining a first stained image, the first stained image being a cell staining image that contains first staining information and second staining information;
(2) using the first stained image as the learning object, establishing, by machine learning, a prediction model that predicts the second staining information from the first staining information, and verifying whether the prediction model is qualified, a qualified prediction model being used to virtually stain the cells of a second stained image;
(3) obtaining a second stained image, the second stained image being a cell staining image that contains the first staining information but not the second staining information;
(4) inputting the first staining information of the second stained image into the qualified prediction model, predicting the second staining information of the second stained image and superimposing it on the second stained image, thereby obtaining the virtual staining result of the second staining information on the second stained image.
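The four numbered steps form a short pipeline: learn a mapping from the first stain to the second stain on an image that carries both, then apply it to an image that carries only the first. As a plain illustration of that flow, the Python sketch below wires the steps together; every helper name (separate_stains, train_model, is_qualified, overlay) is an assumption introduced here for illustration, not a function defined by the patent.

```python
# Minimal workflow sketch of steps (1)-(4); every helper name here is an
# assumption introduced for illustration, not a function defined by the patent.

def virtual_staining_pipeline(first_image, second_image,
                              separate_stains, train_model, is_qualified, overlay):
    # Step (1): the first image carries both the first and the second stain.
    first_chan, second_chan = separate_stains(first_image)

    # Step (2): learn to predict the second stain from the first, then verify.
    model = train_model(inputs=first_chan, targets=second_chan)
    if not is_qualified(model):
        raise ValueError("prediction model failed validation")

    # Step (3): the second image carries only the first stain.
    first_only, _ = separate_stains(second_image)

    # Step (4): predict the missing second stain and superimpose it.
    predicted_second = model(first_only)
    return overlay(second_image, predicted_second)
```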
In some specific embodiments, the second stained image further contains other staining information;
preferably, the other staining information is obtained by direct staining and/or by virtual staining;
more preferably, one or two kinds of the other staining information are obtained by direct staining, and the remaining other staining information is obtained by virtual staining;
most preferably, the other staining information is virtually stained in the manner of steps (1) to (4), wherein the first stained image used in the virtual staining of the other staining information is the same as or different from the first stained image used in the virtual staining of the second staining information, and the first staining information used in the virtual staining of the other staining information is the same as or different from the first staining information used in the virtual staining of the second staining information.
In some specific embodiments, step (2) specifically comprises the following steps:
a. separating the first staining information and the second staining information of the first stained image to obtain a first staining channel and a second staining channel;
b. dividing the first stained image into multiple small tiles to obtain a data set, the data set containing the first staining channel and the second staining channel of each tile;
c. dividing the data set into a training set and a validation set;
d. taking the first staining channel of each tile as the input value and the corresponding second staining channel as the output value, establishing a machine learning task on the training set, and obtaining a model that predicts the second staining information from the first staining information;
e. using the validation set to verify the specificity and sensitivity of the model, a model with qualified specificity and sensitivity being used to predict the second staining information that is absent from the second stained image.
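One concrete way to realize steps a to c is color deconvolution followed by tiling. The sketch below uses scikit-image's H-DAB deconvolution (rgb2hed) to separate a hematoxylin (first stain) channel from a DAB marker (second stain) channel, cuts both into tiles and splits them into training and validation sets; the choice of deconvolution routine, the 256-pixel tile size and the 60/40 split (mirroring Embodiment 1) are illustrative assumptions rather than requirements of the patent.

```python
# Sketch of steps a-c: channel separation, tiling and train/validation split.
# The H-DAB deconvolution, 256-pixel tiles and 60/40 split are illustrative
# assumptions, not requirements of the patent.
import numpy as np
from skimage.color import rgb2hed
from sklearn.model_selection import train_test_split

def separate_channels(rgb_image):
    """Color-deconvolve an RGB slide into a hematoxylin (first stain)
    channel and a DAB (second stain, e.g. a CD3 marker) channel."""
    hed = rgb2hed(rgb_image)            # (H, W, 3): hematoxylin, eosin, DAB
    return hed[..., 0], hed[..., 2]

def tile(channel, size=256):
    """Cut a 2-D channel into non-overlapping size x size tiles."""
    h, w = channel.shape
    return [channel[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

def build_dataset(rgb_image, size=256, val_fraction=0.4):
    first, second = separate_channels(rgb_image)
    x, y = tile(first, size), tile(second, size)
    return train_test_split(np.stack(x), np.stack(y),
                            test_size=val_fraction, random_state=0)
```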
In some specific embodiments, the sensitivity and specificity of a qualified model are 85% or higher; preferably, the sensitivity and specificity of a qualified model are 90% or higher; more preferably, the sensitivity and specificity of a qualified model are 95% or higher.
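Per-pixel sensitivity and specificity of a predicted second-stain channel against the real one can be computed by binarizing both channels and counting agreement, as in the sketch below; the 0.5 binarization threshold and the 0.85 cut-off are only one possible reading of the qualification criterion.

```python
# Sketch: per-pixel sensitivity and specificity of a predicted second-stain
# channel against the real one; the 0.5 binarization threshold and the 0.85
# cut-off are assumptions used only for illustration.
import numpy as np

def sensitivity_specificity(predicted, reference, threshold=0.5):
    pred = predicted >= threshold
    ref = reference >= threshold
    tp = np.sum(pred & ref)        # stained pixels correctly predicted
    tn = np.sum(~pred & ~ref)      # unstained pixels correctly predicted
    fp = np.sum(pred & ~ref)
    fn = np.sum(~pred & ref)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

def is_qualified(predicted, reference, cutoff=0.85):
    sens, spec = sensitivity_specificity(predicted, reference)
    return sens >= cutoff and spec >= cutoff
```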
In some specific embodiments, the machine learning is neural network learning or probabilistic graphical model learning; preferably, the machine learning is neural network learning; more preferably, the neural network is a multilayer fully convolutional neural network model.
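The patent only names a multilayer fully convolutional network as the preferred learner (its structure appears in Fig. 2, which is not reproduced here). The PyTorch sketch below is a generic encoder-decoder FCN that maps a one-channel first-stain tile to a one-channel second-stain tile; the depth and channel widths are assumptions and not the architecture of Fig. 2.

```python
# A generic multilayer fully convolutional network that maps a 1-channel
# first-stain tile to a 1-channel second-stain prediction (PyTorch).
# Depth and channel widths are assumptions, not the architecture of Fig. 2.
import torch
import torch.nn as nn

class VirtualStainFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # downsample /2
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # downsample /4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(),    # upsample x2
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),     # upsample x4
            nn.Conv2d(32, 1, 1),                                    # stain intensity map
        )

    def forward(self, x):            # x: (N, 1, H, W) with H, W divisible by 4
        return self.decoder(self.encoder(x))

# model = VirtualStainFCN()
# out = model(torch.randn(4, 1, 256, 256))   # out.shape == (4, 1, 256, 256)
```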
In some specific embodiments, step (4) specifically comprises the following steps:
f. separating the first staining information of the second stained image to obtain the first staining channel;
g. inputting the first staining channel into the model and predicting the second staining channel of the second stained image;
i. superimposing the second staining channel on the second stained image.
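Steps f to i amount to running the trained network on the first-stain channel of the new slide and blending the predicted channel back onto the RGB image as a pseudo-color. The sketch below shows one way to do this, reusing the separate_channels helper and the network sketched above; the red pseudo-color and the blending weight are arbitrary illustrative choices.

```python
# Sketch of steps f-i: predict the missing second stain on a new slide and
# superimpose it as a red pseudo-color; the color and blending weight are
# arbitrary illustrative choices (assumes the helpers sketched above, and
# image sides divisible by 4 for the FCN).
import numpy as np
import torch

def virtually_stain(model, second_rgb, separate_channels, alpha=0.6):
    first_channel, _ = separate_channels(second_rgb)            # step f
    x = torch.from_numpy(first_channel).float()[None, None]     # (1, 1, H, W)
    with torch.no_grad():
        predicted = model(x)[0, 0].numpy()                      # step g
    span = predicted.max() - predicted.min() + 1e-8
    predicted = (predicted - predicted.min()) / span            # normalize to [0, 1]
    overlay = second_rgb.astype(float)                          # step i: superimpose
    overlay[..., 0] = (1 - alpha * predicted) * overlay[..., 0] + alpha * predicted * 255
    return overlay.astype(np.uint8)
```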
In some specific embodiments, the first staining information is one or more selected from nuclear staining information, cytoplasmic staining information, cytoskeletal staining information and molecular marker staining information;
preferably, the first staining information is nuclear staining information;
more preferably, the nuclear staining information is hematoxylin staining information or 4',6-diamidino-2-phenylindole (DAPI) staining information.
In some specific embodiments, the second staining information is selected from nuclear staining information, cytoplasmic staining information, cytoskeletal staining information and molecular marker staining information;
preferably, the second staining information is molecular marker staining information;
more preferably, the molecular marker is a specific marker expressed by non-atypical cells or a specific marker expressed by atypical cells;
most preferably, the specific marker expressed by non-atypical cells is selected from CD3, CD4 and CD8, and the specific marker expressed by atypical cells is selected from CD34, MUC1 and P53.
In some specific embodiments, the first stained image is obtained from a tissue section, a cell smear or a cell climbing slide; the second stained image is obtained from a tissue section, a cell smear or a cell climbing slide.
In some specific embodiments, the samples of the first stained image and the second stained image are of the same type or of different types, and the samples of different types contain cells of the same kind;
in some specific embodiments, the samples of the first stained image and the second stained image are of the same type; preferably, they are same-type samples from the same subject; more preferably, the same-type samples from the same subject are adjacent tissues of the same subject.
In some specific embodiments, the other staining information is selected from nuclear staining information, cytoplasmic staining information, cytoskeletal staining information and molecular marker staining information;
preferably, the other staining information is molecular marker staining information;
more preferably, the molecular marker is a specific marker expressed by non-atypical cells or a specific marker expressed by atypical cells;
most preferably, the specific marker expressed by non-atypical cells is selected from CD3, CD4 and CD8, and the specific marker expressed by atypical cells is selected from CD34, MUC1 and P53.
In some specific embodiments, the second staining information and the other staining information are the staining information of molecular markers that are co-expressed or expressed in proximity to each other.
In some specific embodiments, the correlation between the first staining information and the second staining information is known or unknown in advance.
In some specific embodiments, the method is for non-diagnostic purposes; preferably, the non-diagnostic purposes include obtaining analysis results of cell type, number, distribution or positional relationship.
The invention further relates to a system for the above method. The system comprises an image acquisition module, a model building module and a virtual staining module, wherein:
the image acquisition module is used to acquire the first stained image and the second stained image;
the model building module is used to establish, by machine learning, the prediction model that predicts the second staining information from the first staining information and to check whether the prediction model is qualified;
the virtual staining module is used to input the first staining information of the second stained image into the prediction model, so as to predict the second staining information of the second stained image, and to superimpose the second staining information on the second stained image.
In some specific embodiments, the model building module comprises a staining information separation module, an image segmentation module, a data processing module, a training module and a verification module, wherein:
the staining information separation module is used to separate the first staining information and the second staining information of the first stained image to obtain the first staining channel and the second staining channel; preferably, the staining information separation module is also used to separate out impurities, noise and abnormal tissue, for example out-of-focus regions, stains and sample folds (a quality-control sketch follows this module list);
the image segmentation module is used to divide the first stained image into multiple small tiles;
the data processing module is used to form a data set from the first staining channel and the second staining channel of each tile and to divide the data set into a training set and a validation set;
the training module takes the first staining channel of each tile in the training set as the input value and the corresponding second staining information as the output value, and obtains, by machine learning, the prediction model that predicts the second staining information from the first staining information;
the verification module is used to verify the accuracy of the prediction model, a model with qualified accuracy being used to predict the second staining information that is absent from the second stained image.
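Filtering out blurred fields, stains and folds before tiles enter the data set is a routine quality-control step. One common heuristic, sketched below, scores sharpness with the variance of the Laplacian and tissue content with a simple intensity test; both thresholds are illustrative assumptions and not values given in the patent.

```python
# Sketch: drop out-of-focus or near-empty tiles before training; the sharpness
# and tissue-fraction thresholds are illustrative assumptions.
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import laplace

def keep_tile(rgb_tile, sharpness_min=1e-4, tissue_min=0.05):
    gray = rgb2gray(rgb_tile)                             # floats in [0, 1]
    sharp_enough = laplace(gray).var() > sharpness_min    # blur / out-of-focus check
    tissue_fraction = np.mean(gray < 0.9)                 # background is near-white
    return bool(sharp_enough and tissue_fraction > tissue_min)

# tiles = [t for t in tiles if keep_tile(t)]   # discard blurred, folded or empty fields
```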
In some specific embodiments, the virtual staining module comprises a staining information separation module, a prediction module and a superposition module, wherein:
the staining information separation module is used to separate the first staining information of the second stained image to obtain the first staining channel;
the prediction module is used to input the first staining channel of the second stained image into the prediction model to obtain the prediction result of the second staining information of the second stained image;
the superposition module is used to superimpose the prediction result of the second staining information on the second stained image.
In some specific embodiments, the training module obtains the prediction model by neural network learning, preferably in the form of a multilayer fully convolutional neural network model.
Description of the drawings
In order to explain the specific embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for the description of the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present invention; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow diagram of the virtual staining method of Embodiment 1;
Fig. 2 is a structure diagram of the fully convolutional network of Embodiment 1;
Fig. 3 is a flow diagram of the virtual staining method of Embodiment 2;
Fig. 4 is a schematic diagram of the system of Embodiment 3;
Fig. 5 is the first stained image of Experimental Example 1;
Fig. 6 is a schematic diagram of (part of) the nuclear staining channel separated from the first stained image of Experimental Example 1;
Fig. 7 is (part of) the CD3 staining channel separated from the first stained image of Experimental Example 1;
Fig. 8 is the CD3 staining prediction on the training set of Experimental Example 1 after the model stabilized, corresponding to Figs. 4-5;
Fig. 9 is the second stained image of Experimental Example 1; this image contains nuclear staining information and MUC1 staining information, but no CD3 or CD4 staining information;
Fig. 10 is (part of) the CD4 staining channel separated from the first stained image of Embodiment 1;
Fig. 11 is the CD4 staining prediction on the training set of Embodiment 1 after the model stabilized, corresponding to Fig. 8;
Fig. 12 is the CD3 virtual staining result of the second stained image of Experimental Example 1;
Fig. 13 is the CD3 and CD4 virtual staining result of the cell staining image of Experimental Example 1.
Specific embodiments
The embodiments of the present invention are described in detail below with reference to examples, but a person skilled in the art will understand that the following examples are only intended to illustrate the present invention and should not be regarded as limiting its scope. Where specific conditions are not indicated in the examples, conventional conditions or the conditions recommended by the manufacturer were followed. Reagents or instruments whose manufacturers are not indicated are conventional products that can be obtained commercially.
Embodiment 1
As shown in Fig. 1, Embodiment 1 of the present invention provides a method for virtually staining cells, the method comprising the following steps:
(1) obtaining a first stained image, the first stained image being a cell staining image that contains first staining information and second staining information;
(2) using the first stained image as the learning object, establishing, by machine learning, a prediction model that predicts the second staining information from the first staining information, and verifying whether the prediction model is qualified, a qualified prediction model being used to virtually stain a second stained image, comprising:
a. separating the first staining information and the second staining information in the first stained image to obtain a first staining channel and a second staining channel;
b. dividing the first stained image into multiple small tiles to obtain a data set, the data set containing the first staining channel and the second staining channel of each tile;
c. dividing the data set into a training set and a validation set, wherein 60% is the training set and 40% is the validation set;
d. taking the first staining channel of each tile as the input value and the corresponding second staining channel as the output value, establishing a machine learning task on the training set, and obtaining a model that predicts the second staining information from the first staining information, wherein the machine learning is multilayer fully convolutional neural network learning (the structure of the fully convolutional network is shown in Fig. 2);
e. using the validation set to verify the accuracy of the model, a model with an accuracy above 85% being used to predict the second staining information that is absent from the second stained image;
(3) obtaining a second stained image, the second stained image being a cell staining image that contains the first staining information but not the second staining information;
(4) inputting the first staining information of the second stained image into the model, predicting the second staining information of the second stained image and superimposing it on the second stained image, thereby obtaining the virtual staining result of the second staining information on the second stained image, step (4) specifically comprising:
f. separating the first staining information in the second stained image to obtain the first staining channel;
g. inputting the first staining channel into the model and predicting the second staining channel;
i. superimposing the second staining channel on the second stained image.
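Steps c to e of this embodiment (a 60/40 split, a multilayer FCN, an 85% acceptance threshold) can be combined into a short training routine such as the PyTorch sketch below, which assumes the VirtualStainFCN and build_dataset helpers sketched earlier; the MSE loss, Adam optimizer, epoch count and full-batch updates are simplifying assumptions, not choices fixed by the patent.

```python
# Training-loop sketch for steps c-e of Embodiment 1, assuming the
# VirtualStainFCN and build_dataset helpers sketched earlier; loss, optimizer,
# epoch count and full-batch updates are simplifying assumptions.
import torch
import torch.nn as nn

def train_virtual_stainer(x_train, y_train, epochs=50, lr=1e-3):
    model = VirtualStainFCN()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    x = torch.from_numpy(x_train).float()[:, None]   # (N, 1, H, W)
    y = torch.from_numpy(y_train).float()[:, None]
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    return model

# x_tr, x_val, y_tr, y_val = build_dataset(first_rgb_image)   # 60/40 split
# model = train_virtual_stainer(x_tr, y_tr)
```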
Embodiment 2
As shown in Fig. 3, Embodiment 2 of the present invention provides a method for virtually staining cells. The method first virtually stains the second staining information of the second image according to the method of Embodiment 1, and then, again following the method of Embodiment 1, continues with the virtual staining of the other staining information.
Embodiment 3
As shown in Fig. 4, Embodiment 3 of the present invention provides a system 100 for the method of Embodiment 1 or 2. The system comprises an image acquisition module 110, a model building module 120 and a virtual staining module 130, wherein:
the image acquisition module 110 is used to acquire the first stained image and the second stained image;
the model building module 120 is used to establish, by machine learning, the prediction model that predicts the second staining information from the first staining information;
the virtual staining module 130 is used to input the first staining information of the second stained image into the prediction model, so as to predict the second staining information of the second stained image, and to superimpose the second staining information on the second stained image;
the model building module 120 comprises a staining information separation module 121, an image segmentation module 122, a data processing module 123, a training module 124 and a verification module 125, wherein:
the staining information separation module 121 is used to separate the first staining information and the second staining information of the first stained image to obtain the first staining channel and the second staining channel;
the image segmentation module 122 is used to divide the first stained image into multiple small tiles;
the data processing module 123 is used to form a data set from the first staining channel and the second staining channel of each tile and to divide the data set into a training set and a validation set;
the training module 124 takes the first staining channel of each tile in the training set as the input value and the corresponding second staining information as the output value, and obtains, by machine learning, the prediction model that predicts the second staining information from the first staining information;
the verification module 125 is used to verify the accuracy of the prediction model, a model with qualified accuracy being used to predict the second staining information that is absent from the second stained image.
The virtual staining module 130 comprises a staining information separation module 131, a prediction module 132 and a superposition module 133, wherein:
the staining information separation module 131 is used to separate the first staining information in the second stained image to obtain the first staining channel;
the prediction module 132 is used to input the first staining channel of the second stained image into the prediction model to obtain the prediction result of the second staining information of the second stained image;
the superposition module 133 is used to superimpose the prediction result of the second staining information on the second stained image.
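Read as software, the modules of this embodiment couple only through the prediction model that the model building module hands to the virtual staining module. The sketch below mirrors that structure; the class and method names are assumptions chosen to echo the module numbering and are not an API defined by the patent.

```python
# Structural sketch of the system of Embodiment 3; class and method names are
# assumptions that mirror the module numbering, not an API defined by the patent.
class ModelBuildingModule:                      # module 120
    def __init__(self, separate, split, train, validate):
        self.separate, self.split = separate, split        # modules 121, 122/123
        self.train, self.validate = train, validate        # modules 124, 125

    def build(self, first_image):
        chan1, chan2 = self.separate(first_image)
        (x_tr, x_val), (y_tr, y_val) = self.split(chan1), self.split(chan2)
        model = self.train(x_tr, y_tr)
        if not self.validate(model, x_val, y_val):
            raise ValueError("prediction model not qualified")
        return model

class VirtualStainingModule:                    # module 130
    def __init__(self, separate, overlay):
        self.separate, self.overlay = separate, overlay     # modules 131, 133

    def stain(self, model, second_image):
        chan1, _ = self.separate(second_image)
        return self.overlay(second_image, model(chan1))     # modules 132 + 133
```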
Experimental example 1
The virtual staining method of Embodiment 2 was carried out, wherein the first stained image is the tissue-section staining image shown in Fig. 5, which contains the first staining information (i.e. nuclear staining information) and the second staining information (i.e. CD3 staining information); the second stained image is the tissue-section staining image shown in Fig. 11, which contains the first staining information, i.e. nuclear staining information, and fourth staining information (i.e. MUC1 staining information), but contains neither the second staining information (i.e. CD3 staining information) nor third staining information (i.e. CD4 staining information); both the first stained image and the second stained image are staining images of tumor-infiltrating-lymphocyte tissue sections.
Specifically, the virtual staining method comprises:
(1) separating the first stained image to obtain the nuclear staining channel (shown in Fig. 6) and the CD3 staining channel (shown in Fig. 7); on the training set, taking the nuclear staining channel as the input value and the CD3 staining channel as the output value, training a multilayer fully convolutional neural network to obtain a prediction model that predicts the CD3 staining result from the nuclear staining channel; on the validation set, inputting the nuclear staining channel into the prediction model to obtain the predicted CD3 staining result (shown in Fig. 8); by statistics, the accuracy of this prediction model is 88.9%;
(2) separating the nuclear staining channel of the second stained image and inputting it into the prediction model of step (1) to predict the CD3 staining result; superimposing the predicted CD3 staining result on the second stained image gives the CD3 virtual staining result of the second stained image (shown in Fig. 12);
(3) with reference to step (1), establishing a prediction model that predicts the CD4 staining result from the nuclear staining, wherein Fig. 9 shows the true CD4 staining and Fig. 10 shows the predicted CD4 staining; by statistics, the accuracy of this prediction model is 87.2%;
(4) with reference to step (2), predicting the CD4 staining result and superimposing the predicted CD4 staining result on Fig. 12, thereby obtaining the CD3 and CD4 virtual staining result of the second stained image (shown in Figs. 12-13).
From the staining result in Fig. 13, the positional relationship between infiltrating T cells and MUC1 expression can be analyzed: about a quarter of the T cells are recruited to the vicinity of MUC1-positive cells, while fewer helper T cells are recruited to the vicinity of MUC1-positive cells.
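A position-relationship statistic of this kind (the fraction of T cells recruited to the neighborhood of MUC1-positive cells) can be computed from the cell centroids of the overlaid channels. The sketch below uses a KD-tree nearest-neighbor query; the pixel radius is an assumption, and the centroid lists are taken as given rather than derived here.

```python
# Sketch: fraction of CD3-positive T cells lying within a given radius of any
# MUC1-positive cell, computed from cell centroids; the radius is an
# illustrative assumption and the centroid lists are taken as given.
import numpy as np
from scipy.spatial import cKDTree

def fraction_near(t_cell_xy, muc1_xy, radius=50.0):
    """t_cell_xy, muc1_xy: (N, 2) arrays of cell-centroid pixel coordinates."""
    if len(t_cell_xy) == 0 or len(muc1_xy) == 0:
        return 0.0
    distances, _ = cKDTree(muc1_xy).query(t_cell_xy, k=1)
    return float(np.mean(distances <= radius))

# A value of roughly 0.25 would match the "about a quarter of T cells" reading
# of Fig. 13; the same call with CD4-positive centroids quantifies helper T cells.
```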
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for virtually staining cells, characterized in that the method comprises the following steps:
(1) obtaining a first stained image, the first stained image being a cell staining image that contains first staining information and second staining information;
(2) using the first stained image as the learning object, establishing, by machine learning, a prediction model that predicts the second staining information from the first staining information, and verifying whether the prediction model is qualified, a qualified prediction model being used to virtually stain a second stained image;
(3) obtaining a second stained image, the second stained image being a cell staining image that contains the first staining information but not the second staining information;
(4) inputting the first staining information of the second stained image into the qualified prediction model, predicting the second staining information of the second stained image and superimposing it on the second stained image, thereby obtaining the virtual staining result of the second staining information on the second stained image.
2. The method according to claim 1, characterized in that the second stained image further contains other staining information; preferably, the other staining information is obtained by direct staining and/or by virtual staining;
more preferably, one or two kinds of the other staining information are obtained by direct staining, and the remaining other staining information is obtained by virtual staining;
more preferably, the other staining information is virtually stained in the manner of steps (1) to (4), wherein the first stained image used in the virtual staining of the other staining information is the same as or different from the first stained image used in the virtual staining of the second staining information, and the first staining information used in the virtual staining of the other staining information is the same as or different from the first staining information used in the virtual staining of the second staining information.
3. The method according to claim 1, characterized in that step (2) specifically comprises the following steps:
a. separating the first staining information and the second staining information of the first stained image to obtain a first staining channel and a second staining channel;
b. dividing the first stained image into multiple small tiles to obtain a data set, the data set containing the first staining channel and the second staining channel of each tile;
c. dividing the data set into a training set and a validation set;
d. taking the first staining channel of each tile as the input value and the corresponding second staining channel as the output value, establishing a machine learning task on the training set, and obtaining a model that predicts the second staining information from the first staining information;
e. using the validation set to verify the specificity and sensitivity of the model, a model with qualified specificity and sensitivity being used to predict the second staining information that is absent from the second stained image.
4. The method according to claim 3, characterized in that the machine learning is neural network learning or probabilistic graphical model learning; preferably, the machine learning is neural network learning; more preferably, the neural network is a multilayer fully convolutional neural network model.
5. The method according to claim 1, characterized in that step (4) specifically comprises the following steps:
f. separating the first staining information of the second stained image to obtain the first staining channel;
g. inputting the first staining channel into the model and predicting the second staining channel of the second stained image;
i. superimposing the second staining channel on the second stained image.
6. The method according to claim 1, characterized in that the first staining information is one or more selected from nuclear staining information, cytoplasmic staining information, cytoskeletal staining information and molecular marker staining information; preferably, the first staining information is nuclear staining information; more preferably, the nuclear staining information is hematoxylin staining information or 4',6-diamidino-2-phenylindole (DAPI) staining information;
the second staining information is selected from nuclear staining information, cytoplasmic staining information, cytoskeletal staining information and molecular marker staining information; preferably, the second staining information is molecular marker staining information; more preferably, the molecular marker is a specific marker expressed by non-atypical cells or a specific marker expressed by atypical cells; most preferably, the specific marker expressed by non-atypical cells is selected from CD3, CD4 and CD8, and the specific marker expressed by atypical cells is selected from CD34, MUC1 and P53;
preferably, the first stained image is obtained from a tissue section, a cell smear or a cell climbing slide, and the second stained image is obtained from a tissue section, a cell smear or a cell climbing slide;
preferably, the samples of the first stained image and the second stained image are of the same type or of different types, and the samples of different types contain cells of the same kind;
more preferably, the samples of the first stained image and the second stained image are of the same type, still more preferably same-type samples from the same subject; most preferably, the same-type samples from the same subject are adjacent tissues of the same subject.
7. The method according to claim 2, characterized in that the other staining information is selected from nuclear staining information, cytoplasmic staining information, cytoskeletal staining information and molecular marker staining information; preferably, the other staining information is molecular marker staining information; more preferably, the molecular marker is a specific marker expressed by non-atypical cells or a specific marker expressed by atypical cells; most preferably, the specific marker expressed by non-atypical cells is selected from CD3, CD4 and CD8, and the specific marker expressed by atypical cells is selected from CD34, MUC1 and P53;
preferably, the second staining information and the other staining information are the staining information of molecular markers that are co-expressed or expressed in proximity to each other.
8. A system for the method according to any one of claims 1 to 7, characterized in that the system comprises an image acquisition module, a model building module and a virtual staining module, wherein:
the image acquisition module is used to acquire the first stained image and the second stained image;
the model building module is used to establish, by machine learning, the prediction model that predicts the second staining information from the first staining information and to check whether the prediction model is qualified;
the virtual staining module is used to input the first staining information of the second stained image into the prediction model, so as to predict the second staining information, and to superimpose the second staining information on the second stained image.
9. The system according to claim 8, characterized in that the model building module comprises a staining information separation module, an image segmentation module, a data processing module, a training module and a verification module, wherein:
the staining information separation module is used to separate the first staining information and the second staining information of the first stained image to obtain the first staining channel and the second staining channel;
the image segmentation module is used to divide the first stained image into multiple small tiles;
the data processing module is used to form a data set from the first staining channel and the second staining channel of each tile and to divide the data set into a training set and a validation set;
the training module takes the first staining channel of each tile in the training set as the input value and the corresponding second staining information as the output value, and obtains, by machine learning, the prediction model that predicts the second staining information from the first staining information;
the verification module is used to verify the accuracy of the prediction model, a model with qualified accuracy being used to predict the second staining information that is absent from the second stained image.
10. The system according to claim 8, characterized in that the virtual staining module comprises a staining information separation module, a prediction module and a superposition module, wherein:
the staining information separation module is used to separate the first staining information of the second stained image to obtain the first staining channel;
the prediction module is used to input the first staining channel of the second stained image into the prediction model to obtain the prediction result of the second staining information of the second stained image;
the superposition module is used to superimpose the prediction result of the second staining information on the second stained image.
CN201810109960.XA 2018-02-05 2018-02-05 Method and system for virtually staining cells Active CN108319815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810109960.XA CN108319815B (en) 2018-02-05 2018-02-05 Method and system for virtually staining cells

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810109960.XA CN108319815B (en) 2018-02-05 2018-02-05 Method and system for virtually staining cells

Publications (2)

Publication Number Publication Date
CN108319815A true CN108319815A (en) 2018-07-24
CN108319815B CN108319815B (en) 2020-07-24

Family

ID=62902441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810109960.XA Active CN108319815B (en) 2018-02-05 2018-02-05 Method and system for virtually staining cells

Country Status (1)

Country Link
CN (1) CN108319815B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019068415A1 (en) * 2017-10-02 2019-04-11 Room4 Group Limited Histopathological image analysis
CN110057743A (en) * 2019-05-06 2019-07-26 西安交通大学 The label-free cell three-dimensional Morphology observation method of blood film based on optic virtual dyeing
CN110070547A (en) * 2019-04-18 2019-07-30 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN113256617A (en) * 2021-06-23 2021-08-13 重庆点检生物科技有限公司 Pathological section virtual immunohistochemical staining method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090035766A1 (en) * 2002-04-25 2009-02-05 Government Of The United States, Represented By The Secretary, Department Of Health And Human Methods for Analyzing High Dimension Data for Classifying, Diagnosing, Prognosticating, and/or Predicting Diseases and Other Biological States
US20110106735A1 (en) * 1999-10-27 2011-05-05 Health Discovery Corporation Recursive feature elimination method using support vector machines
CN106248559A (en) * 2016-07-14 2016-12-21 中国计量大学 A kind of leukocyte five sorting technique based on degree of depth study
CN106650796A (en) * 2016-12-06 2017-05-10 国家纳米科学中心 Artificial intelligence based cell fluorescence image classification method and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110106735A1 (en) * 1999-10-27 2011-05-05 Health Discovery Corporation Recursive feature elimination method using support vector machines
US20090035766A1 (en) * 2002-04-25 2009-02-05 Government Of The United States, Represented By The Secretary, Department Of Health And Human Methods for Analyzing High Dimension Data for Classifying, Diagnosing, Prognosticating, and/or Predicting Diseases and Other Biological States
CN106248559A (en) * 2016-07-14 2016-12-21 中国计量大学 A kind of leukocyte five sorting technique based on degree of depth study
CN106650796A (en) * 2016-12-06 2017-05-10 国家纳米科学中心 Artificial intelligence based cell fluorescence image classification method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张东波 et al.: "HEp-2 staining pattern classification based on local two-dimensional structure description", Acta Electronica Sinica (《电子学报》) *
文登伟 et al.: "HEp-2 cell classification fusing texture and shape features", Journal of Electronics & Information Technology (《电子与信息学报》) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019068415A1 (en) * 2017-10-02 2019-04-11 Room4 Group Limited Histopathological image analysis
US11232354B2 (en) 2017-10-02 2022-01-25 Room4 Group Limited Histopathological image analysis
CN110070547A (en) * 2019-04-18 2019-07-30 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN110057743A (en) * 2019-05-06 2019-07-26 西安交通大学 The label-free cell three-dimensional Morphology observation method of blood film based on optic virtual dyeing
CN113256617A (en) * 2021-06-23 2021-08-13 重庆点检生物科技有限公司 Pathological section virtual immunohistochemical staining method and system
CN113256617B (en) * 2021-06-23 2024-02-20 重庆点检生物科技有限公司 Pathological section virtual immunohistochemical staining method and system

Also Published As

Publication number Publication date
CN108319815B (en) 2020-07-24

Similar Documents

Publication Publication Date Title
EP3469548B1 (en) System for bright field image simulation
CN108319815A (en) A kind of method and its system virtually dyed for cell
Kinkhabwala et al. MACSima imaging cyclic staining (MICS) technology reveals combinatorial target pairs for CAR T cell treatment of solid tumors
JP4376058B2 (en) Quantitative video microscopy and related systems and computer software program products
Wang et al. Automated quantitative RNA in situ hybridization for resolution of equivocal and heterogeneous ERBB2 (HER2) status in invasive breast carcinoma
Hoyt Multiplex immunofluorescence and multispectral imaging: forming the basis of a clinical test platform for immuno-oncology
US4615878A (en) Metachromatic dye sorption means for differential determination of sub-populations of lymphocytes
CN103454204A (en) Information processing apparatus, information processing method, and program
Uhr et al. Molecular profiling of individual tumor cells by hyperspectral microscopic imaging
CN108074243A (en) A kind of cellular localization method and cell segmentation method
CN106462767A (en) Examining device for processing and analyzing an image
CN109061131A (en) Dye picture processing method and processing device
Crothers et al. Proceedings of the American Society of Cytopathology companion session at the 2019 United States and Canadian Academy of Pathology meeting part 1: towards an international system for reporting serous fluid cytopathology
Surace et al. Characterization of the immune microenvironment of NSCLC by multispectral analysis of multiplex immunofluorescence images
Solorzano et al. Towards automatic protein co-expression quantification in immunohistochemical TMA slides
Bhaumik et al. Fluorescent multiplexing of 3D spheroids: Analysis of biomarkers using automated immunohistochemistry staining platform and multispectral imaging
Pacheco et al. Concordance between original screening and final diagnosis using imager vs. manual screen of cervical liquid-based cytology slides
US11893733B2 (en) Treatment efficacy prediction systems and methods
US9122904B2 (en) Method for optimization of quantitative video-microscopy and associated system
Quesada et al. Assessment of the murine tumor microenvironment by multiplex immunofluorescence
CN111596053B (en) Application of TPN molecules in preparation of circulating tumor cell detection reagent, detection reagent and kit
CN116539396B (en) Multi-cancer seed staining method based on direct standard fluorescence and multiple immunofluorescence
CN116087499B (en) Staining method and kit for cancer samples
CN116086919B (en) Staining method and kit for lung cancer and/or pancreatic cancer samples
US11908130B2 (en) Apparatuses and methods for digital pathology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant