CN110110634A - Multi-stain separation method for pathological images based on deep learning - Google Patents

Multi-stain separation method for pathological images based on deep learning

Info

Publication number
CN110110634A
CN110110634A (application CN201910347578.7A)
Authority
CN
China
Prior art keywords
matrix
image
model
staining
prediction
Prior art date
Legal status
Granted
Application number
CN201910347578.7A
Other languages
Chinese (zh)
Other versions
CN110110634B (en)
Inventor
张堃
付君红
李子杰
姜朋朋
吴建国
张培建
陆平
Current Assignee
Nantong University
Original Assignee
Nantong University
Priority date
Filing date
Publication date
Application filed by Nantong University
Priority to CN201910347578.7A
Publication of CN110110634A
Application granted
Publication of CN110110634B
Active legal status
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/69 - Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698 - Matching; Classification

Abstract

The invention discloses a deep-learning-based multi-stain separation method for pathological images, comprising the following steps: (1) apply an optical density transform to the stained pathological image to obtain the optical density matrix of the original stained image; (2) construct a ResU-Net model using the optical density matrix obtained in step (1); (3) train the ResU-Net model constructed in step (2); (4) perform image stain separation with the ResU-Net model trained in step (3). The method performs pixel-level analysis of the original image and better separates substances of the same class in the image, thereby improving stain separation performance.

Description

Multi-stain separation method for pathological images based on deep learning
Technical field
The present invention relates to the technical field of image information processing, and in particular to a deep-learning-based multi-stain separation method for pathological images.
Background technique
Stain separation is a preprocessing technique that supports the automated analysis of histopathological images. Chemical reagents that bind to specific tissue structures enhance the identification of different tissue types. In histopathology, the most widely used stains are hematoxylin and eosin (H&E). Hematoxylin binds to nucleic acids and renders cell nuclei dark blue or purple, while eosin attaches to the proteins in tissue and renders the cellular matrix pink. Traditional stain separation methods, such as color deconvolution (CD) and independent component analysis (ICA), attempt to find an optimal stain matrix and stain concentration matrix, but the predefined values of the stain matrix are inaccurate. Moreover, in practical applications the observed H&E staining appearance is affected by many factors, for example the type of digital scanner used and the staining protocol. In recent years, deep learning has significantly improved the achievable results in many histopathology tasks compared with traditional computer vision methods, but research has concentrated mainly on classification and segmentation and has not yet been dedicated to the stain separation task. Convolutional neural networks (CNNs) extract color and texture information, and can obtain higher-level features by stacking convolutional layers. It has been shown that texture information, when combined with ICA, can improve stain separation results.
The color deconvolution (CD) algorithm of Ruifrok and Johnston was the first application in the field of stain separation. In the color deconvolution framework, the image must be transferred into optical density space, and a stain matrix is required for stain separation. Ruifrok and Johnston provide a small number of example stain matrices, but if the staining conditions change these matrices are no longer applicable. To meet the need of optimizing the stain matrix for a specific image, several methods based on color deconvolution have been developed. Early on, Macenko proposed an automatic stain matrix estimation method as part of a stain normalization pipeline, estimating the stain vectors by singular value decomposition (SVD). Gavrilovic assumed that perceptually similar colors lie close to each other, and found that pixels are expected to appear in groups corresponding to each stain in the Maxwell color plane; each stain vector is then estimated as the mean of its corresponding Gaussian distribution. Kather suggested obtaining the best representation of the stains using PCA, achieved by projecting the first two PCA components onto the plane created by the stain vectors of the estimated stain matrix. However, PCA assumes orthogonality between principal components, which is not always the case, especially for correlated stains such as H&E. Trahearn recently proposed a stain deconvolution method based on an ICA variant. The method rests on the assumption that, under the ICA model, the stain vectors can be modeled as independent components: when ICA is applied, pixels of the same stain are expected to be distributed approximately along the principal axis of one independent component, and pixels of different stains along different principal axes. However, Trahearn showed that in some cases the raw independent components cannot provide a sufficient deconvolution, so a correction step is used to adjust the estimated independent components: a set of optimal stain vectors is found by minimizing the mean distance between each pixel and its nearest vector, stopping when convergence is reached. Rabinovic compared two color deconvolution approaches, non-negative matrix factorization (NMF) and independent component analysis (ICA). Although they showed that NMF performs better, neither method was sufficient to fully deconvolve the images.
Summary of the invention
The object of the present invention is to provide a deep-learning-based multi-stain separation method for pathological images that exploits the diverse features a CNN can extract, preserves the structure of tissue objects, and also separates the stain spots that other methods often fail to distinguish.
To achieve the above object, the technical solution adopted by the present invention is as follows:
A deep-learning-based multi-stain separation method for pathological images, comprising the following steps:
(1) apply an optical density transform to the stained pathological image to obtain the optical density matrix of the original stained image;
(2) construct a ResU-Net model using the optical density matrix obtained in step (1);
(3) train the ResU-Net model constructed in step (2);
(4) perform image stain separation with the ResU-Net model trained in step (3).
The specific steps of step (1) are as follows: first, for each pixel of the original stained image matrix, compute the optical density of each of the R, G and B channels to obtain the optical density matrix of the original stained image. The optical density OD of each of the R, G and B channels is expressed as:

OD = -log10(I / I0)

where D is the image in optical density (OD) space, obtained by applying this transform to each channel, I0 is the incident light intensity, and I is the transmitted light intensity.
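The optical density transform of step (1) can be sketched as follows; the 8-bit intensity range (I0 = 255) and the clamp that avoids log(0) are assumptions not stated in the patent:

```python
import math

def to_optical_density(rgb, i0=255.0, eps=1.0):
    """Convert one RGB pixel (transmitted intensities) to optical density
    per channel via the Beer-Lambert relation OD = -log10(I / I0).
    i0 is the incident (white) intensity; eps guards against log(0)."""
    return [-math.log10(max(c, eps) / i0) for c in rgb]

# A pure-white pixel has (near-)zero optical density in every channel;
# darker pixels, where more stain has absorbed light, have higher OD.
print(to_optical_density([255, 255, 255]))  # [0.0, 0.0, 0.0] up to sign of zero
print(to_optical_density([64, 128, 192]))
```

Applying this per pixel and stacking the channels row-wise yields the three-row matrix D = [dr, dg, db]^T used as model input in step (2).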
The specific steps of step (2) are as follows: the optical density matrix is a three-row matrix D = [dr, dg, db]^T, in which each row corresponds to one color channel. With the optical density matrix as the model input, a ResU-Net architecture is constructed to perform multi-task stain separation. The ResU-Net architecture consists of three parts, a contracting path, a bridge and an expansive path, which together complete the linear stain color matrix prediction, the nonlinear stain color matrix prediction and the stain concentration prediction. The contracting path reduces the spatial dimension of the feature maps while progressively increasing their number, extracting the input image into compact features. The bridge connects the contracting and expansive paths and realizes the linear stain color matrix prediction. The expansive path gradually recovers the details of the target and the corresponding spatial dimensions: on the right side it upsamples, and after each upsampling it fuses the result at the same scale with the feature maps of the corresponding channel count from the contracting path; its outputs are used for the stain color matrix and stain concentration matrix predictions, respectively.
In step (2), the contracting path consists of several residual blocks; in each residual block the feature maps are reduced by half through convolution. Correspondingly, the expansive path is also composed of matching residual blocks. Before each residual block, the upsampled feature maps from the lower level are concatenated with the feature maps from the corresponding encoding path.
In step (2), the residual block is explained as follows: suppose the input of a neural network unit is x and the desired output is H(x), and define a residual mapping F(x) = H(x) - x. If x is passed directly to the output, the target the unit has to learn is exactly the residual mapping F(x) = H(x) - x. A residual unit is composed of a series of convolutional layers and a shortcut; the input x is passed to the output of the residual unit through this shortcut, so the output of the residual unit is z = F(x) + x, and the partial derivative of z with respect to x is:

∂z/∂x = ∂F(x)/∂x + 1

In this formula the partial derivative of z with respect to x is greater than 1. In addition, each residual block contains batch normalization (BN, Batch Normalization) and the rectified linear unit (ReLU, Rectified Linear Unit).
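The shortcut's effect on the gradient can be checked numerically with a minimal scalar stand-in for the residual branch (the lambda functions below are illustrative, not the patent's convolutional layers):

```python
def residual_unit(x, f):
    """Residual unit: output z = F(x) + x, where f plays the role of the
    learned residual mapping F (here a scalar stand-in for the conv layers)."""
    return f(x) + x

def numeric_grad(g, x, h=1e-6):
    """Central-difference estimate of dg/dx."""
    return (g(x + h) - g(x - h)) / (2 * h)

# Even when the residual branch F has near-zero gradient, the shortcut keeps
# dz/dx = dF/dx + 1 close to 1, so the gradient cannot vanish in backprop.
flat = lambda x: residual_unit(x, lambda v: 1e-8 * v)   # F with ~zero slope
print(round(numeric_grad(flat, 3.0), 4))                # ~1.0
```

This is the property the description appeals to: the "+1" contributed by the identity shortcut dominates even when ∂F/∂x is tiny.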
In step (3), the ResU-Net model comprises a prior model, a posterior model and a concentration model. The prior model predicts a single stain color matrix for each image and, combined with the stain concentration matrix, solves a non-negative matrix factorization (NMF) problem; the posterior model predicts the stain color matrix of each pixel at pixel level and, combined with the concentration prediction model, can accurately learn the independent characteristics of every stain. Given the prior stain matrix, the posterior stain matrix and a shared concentration matrix, two reconstructions are formed by combining the concentration matrix with the prior stain matrix and with the posterior stain matrix, respectively. The model is trained by minimizing the reconstruction loss between the input image and each reconstruction. A Kullback-Leibler (KL) constraint term is added between the distributions predicted by the prior model and the posterior model, ensuring that the posterior prediction deviates only slightly from the prior. The KL divergence between two Gaussian variables is:

KL(N(μ1, σ1²) || N(μ2, σ2²)) = log(σ2/σ1) + (σ1² + (μ1 - μ2)²) / (2σ2²) - 1/2

where σ1, μ1 and σ2, μ2 denote the standard deviations and means of the normal distributions predicted by the prior model and the posterior model, respectively.

Let N be the number of pixels per image, M the number of images per batch, K the number of stain types and C the number of image channels; the constraint term is then:

L_KL = Σ_{m=1}^{M} Σ_{n=1}^{N} Σ_{k=1}^{K} Σ_{c=1}^{C} KL(N(μ^prior_{m,n,k,c}, (σ^prior_{m,n,k,c})²) || N(μ^post_{m,n,k,c}, (σ^post_{m,n,k,c})²))

where μ_{m,n,k,c} and σ_{m,n,k,c} denote the means and standard deviations of the normal distributions predicted by the prior and posterior models, and the subscripts m, n, k, c index the image within the batch, the pixel within the image, the stain type and the image channel;
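The closed-form Gaussian KL used in the constraint term can be sketched directly; the numeric values below are illustrative, not taken from the patent:

```python
import math

def kl_gauss(mu1, sigma1, mu2, sigma2):
    """Closed-form KL divergence KL(N(mu1, sigma1^2) || N(mu2, sigma2^2))."""
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2 * sigma2 ** 2)
            - 0.5)

# Identical distributions give zero divergence; the constraint term grows
# as the posterior prediction drifts away from the prior prediction.
print(kl_gauss(0.5, 0.1, 0.5, 0.1))   # 0.0
print(kl_gauss(0.8, 0.1, 0.5, 0.1))   # positive: means disagree
```

Summing this quantity over all (image, pixel, stain, channel) indices gives the constraint term described above.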
To examine the separation performance on the stain separation task, the loss function is defined as follows:

L = Σ_{m=1}^{M} Σ_{n=1}^{N} (x_{m,n} - x̂_{m,n})²

where x_{m,n} denotes the n-th pixel of the m-th image and x̂_{m,n} the corresponding predicted image pixel.
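A minimal sketch of this reconstruction loss (mean squared error over M images of N pixels; the averaging by M·N matches the standard MSE penalty named in the embodiment, and the toy data is made up):

```python
def reconstruction_loss(images, reconstructions):
    """Mean squared reconstruction error over M images of N pixels each:
    L = (1/(M*N)) * sum over m, n of (x[m][n] - xhat[m][n])**2."""
    m = len(images)
    n = len(images[0])
    total = sum((x - xh) ** 2
                for img, rec in zip(images, reconstructions)
                for x, xh in zip(img, rec))
    return total / (m * n)

x = [[0.2, 0.4], [0.6, 0.8]]
print(reconstruction_loss(x, x))                         # 0.0, perfect reconstruction
print(reconstruction_loss(x, [[0.2, 0.4], [0.6, 0.9]]))  # small positive loss
```

In training, this loss is evaluated twice, once against the prior reconstruction and once against the posterior reconstruction, and both are minimized.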
In step (4), the ResU-Net model extracts features from the image optical densities and passes them to each sub-module. These sub-modules predict the stain concentration of each pixel and the parameters of a series of Gaussian distributions sampled from it; the parameters comprise means and variances. For each pixel in the image, a prediction is made in each RGB color channel for each of the three cases: hematoxylin-stained area, eosin-stained area and background. The region each pixel belongs to is thus expressed intuitively through probability distributions, achieving the goal of stain separation.
In step (4), while the three staining cases are being separated, any spotted region appearing in the image manifests itself as a low-probability event under all three cases, so spots can be accurately recognized.
Beneficial effects: the present invention proposes a new unsupervised deep learning method for multi-stain separation (hematoxylin, eosin, background and spots). The method is inspired by non-negative matrix factorization (NMF) and decomposes the input image into a stain color matrix and a stain concentration matrix. The method of the invention predicts Gaussian distributions over the stain colors; from these distributions the probability that each pixel belongs to each stained area can be obtained, and statistical analysis makes it possible to fully learn the independent characteristics of the various stains and to classify the staining in the image accurately. By analyzing the Gaussian distributions of the stain colors, the model converges to its optimum as early as possible and acquires good generalization ability. The model can precisely identify the three staining cases of hematoxylin, eosin and background; when a spotted region appears in the image it manifests as a low-probability event under all three cases, so spots can be accurately recognized. In a deep learning network the image-level stain matrix is fixed, but in a real image the same stained region (i.e. similar tissue) may, owing to external factors such as manual staining, end up with a different stain matrix. The present method performs pixel-level analysis of the original image and better separates substances of the same class in the image, thereby improving stain separation performance.
Compared with the prior art, the present invention has the following advantages:
1. Gaussian distributions over the stain colors are predicted; from them the probability that each pixel belongs to each stained area can be obtained, and statistical analysis allows the independent characteristics of the various stains to be fully learned, so the staining in the image can be classified accurately.
2. By analyzing the Gaussian distributions of the stain colors, the model converges to its optimum as early as possible and acquires good generalization ability. The model precisely identifies the three staining cases of hematoxylin, eosin and background; a spotted region appearing in the image manifests as a low-probability event under all three cases, so spots can be accurately recognized.
3. Conventional methods are limited by the features learned from a single image; a deep learning model trained on features learned from massive data shows good robustness in stain separation.
Brief description of the drawings
Fig. 1 is the overall flow chart of image stain separation;
Fig. 2 is the detailed structure of the ResU-Net model;
Fig. 3 is a schematic of the prior model;
Fig. 4 is a schematic of the overall model;
Fig. 5 shows the color Gaussian distributions obtained by sampling a certain region;
Fig. 6 shows example color distributions before and after training;
Fig. 7 shows the stain separation results;
Fig. 8 compares the separation results with existing methods;
Fig. 9 shows the spot detection results of the method of the invention.
Specific embodiment
The present invention is further explained below with reference to the accompanying drawings.
A deep-learning-based multi-stain separation method for pathological images according to the invention comprises the following steps:
(1) apply an optical density transform to the stained pathological image to obtain the optical density matrix of the original stained image;
The stain separation framework is shown in Fig. 1. First, for each pixel of the original stained image matrix, the optical density of each of the R, G and B channels is computed to obtain the optical density matrix of the original stained image. The optical density OD of each of the R, G and B channels is expressed as:

OD = -log10(I / I0)

where D is the image in optical density (OD) space, obtained by applying this transform to each channel, I0 is the incident light intensity, and I is the transmitted light intensity.
(2) construct a ResU-Net model using the optical density matrix obtained in step (1);
The optical density matrix is a three-row matrix D = [dr, dg, db]^T, in which each row corresponds to one color channel. With the optical density matrix as the model input, a ResU-Net architecture is constructed to perform multi-task stain separation, as shown in Fig. 2. The ResU-Net architecture consists of a contracting path, a bridge and an expansive path, which complete the linear stain color matrix prediction, the nonlinear stain color matrix prediction and the stain concentration prediction. The contracting path reduces the spatial dimension of the feature maps while progressively increasing their number, extracting the input image into compact features. The bridge connects the contracting and expansive paths and realizes the linear stain color matrix prediction. The expansive path gradually recovers the details of the target and the corresponding spatial dimensions: on the right side it upsamples, and after each upsampling it fuses the result at the same scale with the feature maps of the corresponding channel count from the contracting path; its outputs are used for the stain color matrix and stain concentration matrix predictions, respectively.
The contracting path consists of several residual blocks; in each residual block the feature maps are reduced by half through convolution. Correspondingly, the expansive path is also composed of matching residual blocks. Before each residual block, the upsampled feature maps from the lower level are concatenated with the feature maps from the corresponding encoding path. The residual block is explained as follows: suppose the input of a neural network unit is x and the desired output is H(x), and define a residual mapping F(x) = H(x) - x. If x is passed directly to the output, the target the neural network unit has to learn is exactly the residual mapping F(x) = H(x) - x. A residual unit is composed of a series of convolutional layers and a shortcut; the input x is passed to the output of the residual unit through this shortcut, so the output of the residual unit is z = F(x) + x, and the partial derivative of z with respect to x is:

∂z/∂x = ∂F(x)/∂x + 1

In this formula the partial derivative of z with respect to x is greater than 1, which effectively prevents the vanishing gradient problem during backpropagation. In addition, each residual block contains batch normalization (BN) and the rectified linear unit (ReLU), which effectively accelerates convergence.
(3) train the ResU-Net model constructed in step (2);
The ResU-Net model formed is shown in Fig. 3 and comprises a prior model, a posterior model and a concentration model. The prior model predicts a single stain color matrix for each image and, combined with the stain concentration matrix, solves a non-negative matrix factorization (NMF) problem; the posterior model predicts the stain color matrix of each pixel at pixel level and, combined with the concentration prediction model, can accurately learn the independent characteristics of every stain. Given the prior stain matrix, the posterior stain matrix and a shared concentration matrix, two reconstructions are formed by combining the concentration matrix with the prior stain matrix and with the posterior stain matrix, respectively. The model is trained by minimizing the reconstruction loss between the input image and each reconstruction. A Kullback-Leibler (KL) constraint term is added between the distributions predicted by the prior model and the posterior model, ensuring that the posterior prediction deviates only slightly from the prior. The KL divergence between two Gaussian variables is:

KL(N(μ1, σ1²) || N(μ2, σ2²)) = log(σ2/σ1) + (σ1² + (μ1 - μ2)²) / (2σ2²) - 1/2

where σ1, μ1 and σ2, μ2 denote the standard deviations and means of the normal distributions predicted by the prior model and the posterior model, respectively.

Let N be the number of pixels per image, M the number of images per batch, K the number of stain types and C the number of image channels; the constraint term is then:

L_KL = Σ_{m=1}^{M} Σ_{n=1}^{N} Σ_{k=1}^{K} Σ_{c=1}^{C} KL(N(μ^prior_{m,n,k,c}, (σ^prior_{m,n,k,c})²) || N(μ^post_{m,n,k,c}, (σ^post_{m,n,k,c})²))

where μ_{m,n,k,c} and σ_{m,n,k,c} denote the means and standard deviations of the normal distributions predicted by the prior and posterior models, and the subscripts m, n, k, c index the image within the batch, the pixel within the image, the stain type and the image channel;
To examine the separation performance on the stain separation task, the loss function is defined as follows:

L = Σ_{m=1}^{M} Σ_{n=1}^{N} (x_{m,n} - x̂_{m,n})²

where x_{m,n} denotes the n-th pixel of the m-th image and x̂_{m,n} the corresponding predicted image pixel.
(4) perform image stain separation with the ResU-Net model trained in step (3);
The ResU-Net model extracts features from the image optical densities and passes them to each sub-module. These sub-modules predict the stain concentration of each pixel and the parameters of a series of Gaussian distributions sampled from it; the parameters comprise means and variances. For each pixel in the image, a prediction is made in each RGB color channel for each of the three cases: hematoxylin-stained area, eosin-stained area and background. The region each pixel belongs to is thus expressed intuitively through probability distributions, achieving the goal of stain separation. While the three staining cases are being separated, any spotted region appearing in the image manifests as a low-probability event under all three cases, so spots can be accurately recognized.
The present invention is further described below with reference to an embodiment.
Embodiment
The original image is converted into light intensity (optical density) space and fed into a 12-level ResU-Net model for multi-task stain separation. The ResU-Net model consists of a contracting path, a bridge and an expansive path, which complete the linear stain color matrix prediction, the nonlinear stain color matrix prediction and the stain concentration prediction. The contracting path is formed by network levels 1-4; it reduces the spatial dimension of the feature maps while progressively increasing their number, extracting the input image into compact features. Level 5 is the bridge, which connects the contracting and expansive paths and realizes the linear stain color matrix prediction. The expansive path is formed by network levels 6-9 and gradually recovers the details of the target and the corresponding spatial dimensions; it upsamples, and after each upsampling fuses the result at the same scale with the feature maps of the corresponding channel count from the contracting path, its outputs being used for the stain color matrix and stain concentration matrix predictions. During linear prediction, the Z5 features are first flattened into a vector and passed through two fully connected layers with 500 intermediate nodes and 9 output nodes, representing the stain color matrix of the R, G and B channels for the three staining cases. The level-9 features Z9 serve two subtasks: nonlinear stain color prediction and stain concentration prediction. Levels 10 and 11 predict the nonlinear stain color matrix, and level 12 realizes the pixel-wise stain concentration prediction. All levels are built from residual units, each level containing two 3×3 convolution blocks and an identity mapping; the identity mapping connects the unit's input and output. Each residual block contains BN and ReLU to prevent vanishing gradients and accelerate convergence. The specific parameters and output sizes of the network are shown in Table 1.
Table 1
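Since Table 1 is not reproduced here, the shape bookkeeping of the contracting path can be sketched as follows; the base channel width of 64 is an assumption, only the halving/doubling pattern and the 128×128 input are taken from the description:

```python
def contracting_path_shapes(size=128, base_channels=64, levels=4):
    """Trace the encoder of the 12-level model: each of levels 1-4 halves the
    spatial size of the feature maps and doubles their count, and the level-5
    bridge receives the final compact features. Channel widths are assumed."""
    shapes = []
    s, c = size, base_channels
    for level in range(1, levels + 1):
        shapes.append((level, s, c))
        s, c = s // 2, c * 2
    return shapes, (s, c)  # (s, c) is what reaches the level-5 bridge

shapes, bridge = contracting_path_shapes()
for level, s, c in shapes:
    print(f"level {level}: {s}x{s} feature maps, {c} channels")
print("bridge input:", bridge)  # (8, 1024) for a 128x128 patch
```

The expansive path (levels 6-9) mirrors this sequence back up, fusing each upsampled map with the same-scale encoder features.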
During training, for the linear model, points are randomly selected from the image and sampled from a Gaussian distribution to form an estimate of the color distribution of a certain region of the image; for the nonlinear model, per-pixel Gaussian distributions are randomly sampled to predict the pixel-level stain color matrix, as shown in Fig. 5. This process is repeated for each case, and the distributions are combined to form the estimated stain matrix. The mean of each distribution indicates the value the model considers most probable, while the standard deviation describes the model's certainty. To illustrate further: suppose a distribution represents the red value in eosin; if the standard deviation is very low, the sampled value will probably be close to 0.5. If the true red value of eosin is close to 0.5, the sampled value should lead to a minimal reconstruction loss; conversely, if the true red value is far from 0.5, the sampled value will lead to a very high reconstruction loss. If the model predicts a large standard deviation, the sampled values will vary greatly, so even a correct mean can produce a large reconstruction loss. Therefore, to find the optimal values, the mean of each distribution must be close to the true value and the standard deviation must be as low as possible. The network weights W are randomly initialized from a Gaussian distribution. The learning rate is initially set to 0.001, and training stops after 250 epochs. Fig. 6 gives examples of these distributions before and after training.
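The sampling step described above can be sketched with a reparameterization-style draw from the predicted Gaussian (the fixed seed and the specific means/sigmas are illustrative assumptions):

```python
import random

def sample_stain_value(mu, sigma, rng=random.Random(0)):
    """Draw one stain-color value from the predicted Gaussian N(mu, sigma^2)
    as mu + sigma * standard-normal noise. A fixed-seed generator is shared
    across calls so the demo is reproducible."""
    return mu + sigma * rng.gauss(0.0, 1.0)

# With a low standard deviation the samples stay near the mean, so only a
# correct mean keeps the reconstruction loss small; a large sigma spreads
# the samples and inflates the loss even for a correct mean.
tight = [sample_stain_value(0.5, 0.01) for _ in range(200)]
loose = [sample_stain_value(0.5, 0.3) for _ in range(200)]
print(max(abs(v - 0.5) for v in tight) < max(abs(v - 0.5) for v in loose))
```

This is why minimizing the reconstruction loss pushes each predicted mean toward the true stain color and each standard deviation as low as possible.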
Samples were taken from a variety of tissue types and two scanners (Philips and Aperio), which means the contrast and staining intensity differ from image to image. The proposed model is implemented on the PyTorch deep learning framework and trained on an NVidia GTX 1080Ti GPU. The dataset consists of 22000 RGB tissue patches of size 128x128 pixels. The stain separation model is formed by 8 convolutional feature extraction layers arranged in the ResU-Net structure and 3 smaller convolutional layers, which output the image-level and pixel-wise stain matrices and the stain concentration matrix. Training the model with a batch size of 64 takes about 30 minutes to obtain good results, although good results can be achieved with less training. The ADAM optimizer is used with an initial learning rate of 1e-3, which is gradually decreased at the end of each epoch. A standard mean squared error loss is used as the reconstruction penalty. The results show that the hematoxylin and eosin stains and the background of the mixed RGB image are successfully separated while the tissue structure is retained. The second row of Fig. 7 shows the colored image and the separated hematoxylin stain.
The method of the invention is compared with many traditional and state-of-the-art methods; Fig. 8 illustrates the comparison results. The method of Mikto is designed for cell segmentation and can therefore only be used to separate the H stain. The NMF method shown next presents the result of solving NMF with the conventional approach; although the H stain is quite distinct, the cytoplasmic structure is severely degraded, and any spot can lead to poor separation. Color deconvolution (CD) is the classical stain separation method, but manual intervention is required to compute the optimal stain matrix. The CD method preserves structure well, but it cannot reasonably separate the background color. SDSA is a recent strategy that separates spots using a statistical analysis of multi-resolution staining data. It can be seen that SDSA successfully separates the H stain, but the method fails when more than two stains are present in the image. Fig. 9 shows the spot detection results of the method of the invention. Pixels belonging to spots have low probability in the obtained probability distributions; by selecting the pixels whose probability is below a predefined threshold λ, these regions can be identified and segmented.
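The thresholding step for spot detection can be sketched as follows; the probability values and the threshold 0.05 are illustrative stand-ins for the predefined λ:

```python
def detect_spots(pixel_probs, threshold=0.05):
    """Flag pixels whose best staining-case probability falls below the
    predefined threshold (lambda): such pixels fit none of the hematoxylin /
    eosin / background distributions and are treated as spots."""
    return [i for i, probs in enumerate(pixel_probs)
            if max(probs.values()) < threshold]

pixels = [
    {"hematoxylin": 0.90, "eosin": 0.05, "background": 0.05},  # stained tissue
    {"hematoxylin": 0.02, "eosin": 0.01, "background": 0.03},  # fits nothing: spot
    {"hematoxylin": 0.10, "eosin": 0.10, "background": 0.80},  # background
]
print(detect_spots(pixels))  # [1]
```

Collecting the flagged indices into a mask gives the segmented spot regions shown in Fig. 9.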
The present invention proposes an unsupervised deep learning method for multi-stain separation. In this framework, the separation of the image stains is deployed as different subtasks. To cope with staining variation, a linear-nonlinear model is constructed that can provide pixel-level stain sampling. To minimize the loss, the model is trained with Gaussian distributions, which are also used for stain matrix prediction and spot separation.
The above is only a preferred embodiment of the present invention. It should be noted that, for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications shall also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A deep-learning-based multi-stain separation method for pathological images, characterized by comprising the following steps:
(1) apply an optical density transform to the stained pathological image to obtain the optical density matrix of the original stained image;
(2) construct a ResU-Net model using the optical density matrix obtained in step (1);
(3) train the ResU-Net model constructed in step (2);
(4) perform image stain separation with the ResU-Net model trained in step (3).
2. The deep-learning-based multi-stain separation method for pathological images according to claim 1, characterized in that the specific steps of step (1) are as follows: first, for each pixel of the original stained image matrix, compute the optical density of each of the R, G and B channels to obtain the optical density matrix of the original stained image; the optical density OD of each of the R, G and B channels is expressed as:

OD = -log10(I / I0)

where D is the image in optical density (OD) space, obtained by applying this transform to each channel, I0 is the incident light intensity, and I is the transmitted light intensity;
the specific steps of step (2) are as follows: the optical density matrix is a three-row matrix D = [dr, dg, db]^T, in which each row corresponds to one color channel; with the optical density matrix as the model input, a ResU-Net architecture is constructed to perform multi-task stain separation; the ResU-Net architecture consists of a contracting path, a bridge and an expansive path, which complete the linear stain color matrix prediction, the nonlinear stain color matrix prediction and the stain concentration prediction; the contracting path reduces the spatial dimension of the feature maps while progressively increasing their number, extracting the input image into compact features; the bridge connects the contracting and expansive paths and realizes the linear stain color matrix prediction; the expansive path gradually recovers the details of the target and the corresponding spatial dimensions: on the right side it upsamples, and after each upsampling fuses the result at the same scale with the feature maps of the corresponding channel count from the contracting path, its outputs being used for the stain color matrix and stain concentration matrix predictions, respectively.
3. The pathological image multi-staining separation method based on deep learning according to claim 2, characterized in that: in step (2), the contracting path contains several residual blocks, and within each residual block the feature maps are reduced by half through convolution; correspondingly, the expanding path is likewise composed of corresponding residual blocks. Before each of its residual blocks, there is an upsampling of the feature maps from the lower level and a concatenation with the feature maps from the corresponding encoding path.
4. The pathological image multi-staining separation method based on deep learning according to claim 2, characterized in that: in step (2), the residual block is explained as follows: suppose the input of a neural network unit is x and the desired output is H(x); additionally define a residual mapping F(x) = H(x) - x, so that if x is passed directly to the output, what the unit needs to learn is exactly the residual mapping F(x) = H(x) - x. A residual unit is composed of a series of convolutional layers and a shortcut; the input x is passed through this shortcut to the output of the residual unit, so the output of the residual unit is z = F(x) + x, and the partial derivative of z with respect to x is:

∂z/∂x = ∂F(x)/∂x + 1

In this formula, the partial derivative of z with respect to x is greater than 1; in addition, each residual block contains batch normalization and a rectified linear unit (ReLU) activation.
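The identity z = F(x) + x and its derivative ∂z/∂x = ∂F(x)/∂x + 1 can be checked numerically; the quadratic F below is a toy stand-in for the convolutional layers, not the patent's network:

```python
def residual_unit(x, F):
    """Residual unit: the shortcut carries x to the output, z = F(x) + x."""
    return F(x) + x

F = lambda x: 0.5 * x**2   # toy residual mapping with known derivative F'(x) = x

x = 3.0
z = residual_unit(x, F)    # 0.5 * 9 + 3 = 7.5

# Central-difference estimate of dz/dx; should equal F'(x) + 1 = 4.
h = 1e-6
dz_dx = (residual_unit(x + h, F) - residual_unit(x - h, F)) / (2 * h)
```

The constant +1 term is what keeps gradients from vanishing through deep stacks of residual blocks.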
5. The pathological image multi-staining separation method based on deep learning according to claim 2, characterized in that: in step (2), the contracting path is composed of the first four network levels (levels 1-4); it reduces the spatial dimensions of the feature maps while progressively increasing their number, compressing the input image into compact features. The fifth level is the bridge, which connects the contracting and expanding paths and performs the linear stain color matrix prediction. The expanding path is composed of levels 6-9 and progressively restores the details of the target and the corresponding spatial dimensions; it performs upsampling, and after each upsampling step it is fused with the same-scale feature maps of matching channel count from the contracting path, and its outputs are used for the stain color matrix prediction and the stain concentration matrix prediction, respectively. During linear prediction, the level-5 feature Z5 is first flattened into a vector and passed through two fully connected layers, with 500 intermediate nodes and 9 output nodes representing the stain color matrix of each of the R, G, and B channels for the three staining conditions. The level-9 feature Z9 serves two subtasks: nonlinear stain color prediction and stain concentration prediction. The nonlinear stain color matrix is predicted by levels 10 and 11, and level 12 performs pixel-wise stain concentration prediction. All levels are built from residual units, each level containing two 3×3 convolution blocks and one identity mapping; the identity mapping connects the input and output of the unit.
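The shape bookkeeping of the contracting and expanding paths can be illustrated with plain arrays: each contraction level halves the spatial size and doubles the channel count, and each expansion level upsamples and concatenates the same-scale encoder feature. This is a schematic of the skip-fusion rule only; the channel counts and four-level depth are assumptions for illustration, not the trained network:

```python
import numpy as np

def down(x):
    """One contraction level (schematic): halve H and W, double channels."""
    h, w, c = x.shape
    return np.zeros((h // 2, w // 2, c * 2))

def up_and_fuse(x, skip):
    """One expansion level (schematic): 2x nearest-neighbour upsample,
    then concatenate the matching-scale feature map from the contracting path."""
    up = x.repeat(2, axis=0).repeat(2, axis=1)
    return np.concatenate([up, skip], axis=2)

x = np.zeros((64, 64, 3))          # input optical-density image
skips = []
for _ in range(4):                  # levels 1-4: contracting path
    skips.append(x)
    x = down(x)                     # bottom of the U: (4, 4, 48)
for skip in reversed(skips):        # levels 6-9: expanding path
    x = up_and_fuse(x, skip)        # spatial size restored step by step
```

After the loop the spatial size is back to 64×64, ready for the pixel-wise prediction heads.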
6. The pathological image multi-staining separation method based on deep learning according to claim 1, characterized in that: in step (3), the ResU-Net model comprises a prior model, a posterior model, and a concentration model. The prior model predicts a single stain color matrix for each image and, combined with the stain concentration matrix, solves the non-negative matrix factorization (NMF) problem; the posterior model then predicts a stain color matrix for each pixel at the pixel level and, combined with the concentration prediction model, can accurately learn the independent characteristics of each stain. Given the prior stain matrix, the posterior stain matrix, and the shared concentration matrix, two reconstructions are formed by combining the concentration matrix with the prior stain matrix and with the posterior stain matrix, respectively; the model is trained by minimizing the reconstruction loss between the input image and each reconstruction. A Kullback-Leibler (KL) divergence constraint term is added between the distributions predicted by the prior model and the posterior model, ensuring that the prediction of the posterior model deviates only slightly from the prior model. The KL divergence between two Gaussian variables is:

KL( N(μ1, σ1²) ‖ N(μ2, σ2²) ) = log(σ2/σ1) + (σ1² + (μ1 − μ2)²) / (2σ2²) − 1/2

where σ1, μ1 and σ2, μ2 denote the standard deviation and mean of the normal distributions predicted by the prior model and the posterior model, respectively.

Let N be the number of pixels in each image, M the number of images per batch, K the number of stain types, and C the number of image channels; the constraint term is then:

L_KL = Σ_{m=1..M} Σ_{n=1..N} Σ_{k=1..K} Σ_{c=1..C} KL( N(μ^post_{m,n,k,c}, (σ^post_{m,n,k,c})²) ‖ N(μ^prior_{m,n,k,c}, (σ^prior_{m,n,k,c})²) )

where μ_{m,n,k,c} and σ_{m,n,k,c} denote the mean and standard deviation of the normal distributions predicted by the prior model and the posterior model, and the subscripts m, n, k, c index the image within the batch, the pixel within the image, the stain type, and the image channel, respectively.

To evaluate the separation performance of the stain separation task, the loss function is defined as:

L_rec = (1/(M·N)) Σ_{m=1..M} Σ_{n=1..N} ‖ x_{m,n} − x̂_{m,n} ‖²

where x_{m,n} denotes the nth pixel of the mth image and x̂_{m,n} the corresponding predicted image pixel.
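The KL constraint and the reconstruction loss above can be written out directly; the sketch below assumes the predictions are supplied as mean/standard-deviation arrays already broadcast to a common shape (M, N, K, C), which is an implementation assumption not stated in the patent:

```python
import numpy as np

def kl_gauss(mu1, sigma1, mu2, sigma2):
    """KL( N(mu1, sigma1^2) || N(mu2, sigma2^2) ) between two Gaussians."""
    return (np.log(sigma2 / sigma1)
            + (sigma1**2 + (mu1 - mu2)**2) / (2 * sigma2**2) - 0.5)

def kl_constraint(mu_post, sd_post, mu_prior, sd_prior):
    """Constraint term: sum of per-element KL divergences over the
    batch (m), pixel (n), stain type (k) and channel (c) axes."""
    return kl_gauss(mu_post, sd_post, mu_prior, sd_prior).sum()

def reconstruction_loss(x, x_hat):
    """Mean squared error between input pixels and reconstructed pixels."""
    return np.mean((x - x_hat) ** 2)
```

When the posterior matches the prior exactly the constraint is zero, which is the behaviour the patent relies on to keep the posterior close to the prior.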
7. The pathological image multi-staining separation method based on deep learning according to claim 1, characterized in that: in step (3), during training, for the linear model, points are randomly sampled from the Gaussian distribution over the image to form an estimate of the distribution of a given stain color in the image; for the nonlinear model, the per-pixel Gaussian distribution is randomly sampled to predict the pixel-level stain color matrix. This process is repeated for each case, and the resulting distributions are combined to form the estimated stain matrix. The mean of each distribution indicates the value with the highest probability under the model's distribution, and the standard deviation describes the model's accuracy.
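The sampling step can be sketched as follows: draw repeatedly from each predicted Gaussian and combine the draws into the estimated stain matrix. The 3-stain × 3-channel shape and the numeric values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative predicted parameters for a 3-stain x 3-channel color matrix.
mu = np.array([[0.65, 0.70, 0.29],   # stain 1 color direction (placeholder)
               [0.07, 0.99, 0.11],   # stain 2 color direction (placeholder)
               [0.27, 0.57, 0.78]])  # stain 3 color direction (placeholder)
sigma = np.full_like(mu, 0.05)       # low sigma = a confident model

# Repeat the sampling and combine the draws into the estimate.
samples = rng.normal(mu, sigma, size=(1000,) + mu.shape)
estimated = samples.mean(axis=0)     # concentrates on mu as samples grow
```

The mean of each distribution is the model's most probable value, and a small sigma means the draws, and hence the estimate, scatter only slightly around it.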
8. The pathological image multi-staining separation method based on deep learning according to claim 7, characterized in that: in step (3), to find the optimal value, the mean of each distribution must be close to the true value, and the standard deviation must be as low as possible.
9. The pathological image multi-staining separation method based on deep learning according to claim 1, characterized in that: in step (4), features are extracted from the image optical density by the ResU-Net model and passed to each submodule; these submodules predict, for each pixel's stain concentration, the parameters of a series of Gaussian distributions from which samples are drawn, the parameters comprising the mean and the variance. For each pixel in the image, a prediction is made in each RGB color channel for three cases: the hematoxylin-stained portion, the eosin-stained portion, and the background. The region to which each pixel belongs is thereby expressed intuitively by a probability distribution, achieving the purpose of stain separation.
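The per-pixel assignment can be sketched by scoring each pixel's optical density under the Gaussian predicted for each of the three cases in each RGB channel and picking the most probable case; the Gaussian parameters below are placeholders, not the trained model's output:

```python
import numpy as np

def gauss_logpdf(x, mu, sigma):
    """Element-wise log-density of a univariate Gaussian."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def assign_regions(od, mu, sigma):
    """od: (H, W, 3) optical densities; mu, sigma: (3 cases, 3 channels).
    Returns (H, W) labels 0/1/2 = hematoxylin / eosin / background."""
    logp = np.stack([gauss_logpdf(od, mu[k], sigma[k]).sum(axis=-1)
                     for k in range(3)], axis=-1)  # channels treated as independent
    return logp.argmax(axis=-1)

mu = np.array([[0.6, 0.7, 0.3],    # hematoxylin case (placeholder)
               [0.1, 0.9, 0.1],    # eosin case (placeholder)
               [0.0, 0.0, 0.0]])   # background case (placeholder)
sigma = np.full((3, 3), 0.1)

od = np.zeros((2, 2, 3))
od[0, 0] = [0.6, 0.7, 0.3]         # matches the hematoxylin mean exactly
labels = assign_regions(od, mu, sigma)
```

Each pixel is assigned to whichever case explains its densities best, which is the probability-distribution expression of region membership described above.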
10. The pathological image multi-staining separation method based on deep learning according to claim 9, characterized in that: in step (4), while the three staining conditions are separated, a contaminated region appearing in the image manifests as a low-probability event under all three cases, so that such stain artifacts can be accurately identified.
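The same probabilities yield the contamination test: a pixel whose best log-probability is still very low under every case fits none of the three distributions. The threshold here is an illustrative assumption; in practice it would be calibrated:

```python
import numpy as np

def flag_contamination(best_logp, threshold=-10.0):
    """Flag pixels that are a low-probability event under all three cases.
    best_logp: (H, W) maximum log-probability over the three staining cases."""
    return best_logp < threshold

best_logp = np.array([[1.2, -0.5],
                      [-25.0, 0.8]])   # one pixel fits none of the cases
mask = flag_contamination(best_logp)   # True only at the outlier pixel
```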
CN201910347578.7A 2019-04-28 2019-04-28 Pathological image multi-staining separation method based on deep learning Active CN110110634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910347578.7A CN110110634B (en) 2019-04-28 2019-04-28 Pathological image multi-staining separation method based on deep learning

Publications (2)

Publication Number Publication Date
CN110110634A true CN110110634A (en) 2019-08-09
CN110110634B CN110110634B (en) 2023-04-07

Family

ID=67487107

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910347578.7A Active CN110110634B (en) 2019-04-28 2019-04-28 Pathological image multi-staining separation method based on deep learning

Country Status (1)

Country Link
CN (1) CN110110634B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200428A * 2014-08-18 2014-12-10 南京信息工程大学 Microscopic image color deconvolution and segmentation method based on non-negative matrix factorization (NMF)
CN106780498A * 2016-11-30 2017-05-31 南京信息工程大学 Automatic pixel-wise segmentation method for epithelium and stroma tissue based on deep convolutional network
CN108470337A * 2018-04-02 2018-08-31 江门市中心医院 Quantitative analysis method and system for subsolid lung nodules based on image depth features
CN108830813A * 2018-06-12 2018-11-16 福建帝视信息科技有限公司 Image super-resolution enhancement method based on knowledge distillation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HARIYANTI MOHD SALEH et al.: "Overlapping Chromosome Segmentation using U-Net: Convolutional Networks with Test Time Augmentation", ResearchGate *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539883A (en) * 2020-04-20 2020-08-14 福建帝视信息科技有限公司 Digital pathological image H & E dyeing restoration method based on strong reversible countermeasure network
CN111539883B (en) * 2020-04-20 2023-04-14 福建帝视信息科技有限公司 Digital pathological image H & E dyeing restoration method based on strong reversible countermeasure network
CN111507992A (en) * 2020-04-21 2020-08-07 南通大学 Low-differentiation gland segmentation method based on internal and external stresses
CN113538422A (en) * 2021-09-13 2021-10-22 之江实验室 Pathological image automatic classification method based on dyeing intensity matrix
CN114627010A (en) * 2022-03-04 2022-06-14 透彻影像(北京)科技有限公司 Dyeing space migration method based on dyeing density map

Also Published As

Publication number Publication date
CN110110634B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN110110634A (en) Pathological image polychromatophilia color separation method based on deep learning
CN109740413A (en) Pedestrian recognition methods, device, computer equipment and computer storage medium again
CN109784386A (en) A method of it is detected with semantic segmentation helpers
CN110059586B (en) Iris positioning and segmenting system based on cavity residual error attention structure
CN107563999A (en) A kind of chip defect recognition methods based on convolutional neural networks
US11593607B2 (en) Method and system for predicting content of multiple components in rare earth extraction process
US11605163B2 (en) Automatic abnormal cell recognition method based on image splicing
CN106156781A (en) Sequence convolutional neural networks construction method and image processing method and device
CN108846835A (en) The image change detection method of convolutional network is separated based on depth
CN111523521A (en) Remote sensing image classification method for double-branch fusion multi-scale attention neural network
CN115063796B (en) Cell classification method and device based on signal point content constraint
CN115705637A (en) Improved YOLOv5 model-based spinning cake defect detection method
CN110826411B (en) Vehicle target rapid identification method based on unmanned aerial vehicle image
CN109903339B (en) Video group figure positioning detection method based on multi-dimensional fusion features
CN106023121A (en) BGA position back bore manufacture method
Abousamra et al. Weakly-supervised deep stain decomposition for multiplex IHC images
CN109376753A (en) A kind of the three-dimensional space spectrum separation convolution depth network and construction method of dense connection
CN109410171A (en) A kind of target conspicuousness detection method for rainy day image
Liang et al. An improved DualGAN for near-infrared image colorization
CN115170427A (en) Image mirror surface highlight removal method based on weak supervised learning
CN112633257A (en) Potato disease identification method based on improved convolutional neural network
CN114511710A (en) Image target detection method based on convolutional neural network
CN111179272B (en) Rapid semantic segmentation method for road scene
CN115546620A (en) Lightweight target detection network and method based on YOLO (YOLO) and electronic equipment
CN113837154B (en) Open set filtering system and method based on multitask assistance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant