CN106780498A - Automatic pixel-wise epithelium and stroma segmentation method based on a deep convolutional network - Google Patents


Info

Publication number
CN106780498A
CN106780498A (application CN201611085781.4A)
Authority
CN
China
Prior art keywords
pixel
image
epithelium
depth convolutional
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611085781.4A
Other languages
Chinese (zh)
Inventor
徐军
骆小飞
周超
刘利卉
郎彬
季卫萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN201611085781.4A priority Critical patent/CN106780498A/en
Publication of CN106780498A publication Critical patent/CN106780498A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30068Mammography; Breast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30088Skin; Dermal

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for automatically segmenting epithelium and stroma based on a pixel-wise deep convolutional network, comprising the following steps: preprocessing the pathological images; constructing a training set and a test set; building a deep convolutional neural network (DCNN) model; and predicting the class of each pixel in the test images to obtain classification results, which are then rendered in pseudo-color. Taking individual image pixels as the unit of analysis, the present method is compared with traditional block-based automatic epithelium/stroma segmentation algorithms; under identical experimental conditions it is more accurate and produces better results. The method also overlays the segmentation result on the original image, so that clinicians can view it directly and make follow-up diagnoses on that basis.

Description

Automatic pixel-wise epithelium and stroma segmentation method based on a deep convolutional network
Technical field
The present invention relates to the field of pathological image information processing, and in particular to an automatic segmentation method for epithelium and stroma based on a pixel-wise deep convolutional network.
Background technology
Epithelium and stroma are the two basic tissue types in breast tissue. About 80% of breast tumors originate in mammary epithelial tissue, so some scholars are now working to apply heterogeneity analysis of epithelium and stroma in pathological images to computer-aided diagnosis systems. Automatically distinguishing epithelium from stroma is a prerequisite for quantifying this heterogeneity, and it also makes separate analysis of epithelial nuclei possible. However, owing to the complexity of pathological tissue images, successfully separating the two tissue classes is a challenging problem.
1) Massive data:
A complete whole-slide scan of a pathological section is about 100000 × 700000 pixels and occupies about 1.43 GB of hard-drive space; such high-resolution, large-scale images are very challenging for both computer hardware and image analysis algorithms.
2) Complex tissue structures with large morphological differences:
A pathological section contains numerous tissue structure types of widely varying shapes. Even within the same tissue type, structure and morphology can differ greatly. It is therefore difficult to describe them with a single fixed model, which in turn raises the requirements on model robustness.
3) High tissue heterogeneity across pathological grades:
As the cancer grade increases, the boundaries of normal tissue are progressively eroded by cancer cells, and the boundary between epithelium and stroma becomes increasingly blurred. Blurred boundaries raise the accuracy requirements on the segmentation model.
4) Other challenges:
The background of tissue images is complex and noisy, and there are problems of staining inhomogeneity and imaging quality. Because H&E-stained (hematoxylin and eosin) pathological images capture the complex morphological features of pathological tissue, they are widely used in the clinic. In H&E images, however, the background is complex and the noise is large, and the slide-staining process introduces uneven or incorrect staining; different scanners also differ in imaging quality. All of these aspects pose great challenges to image processing and analysis algorithms.
Despite the above challenges, many scholars have contributed to the automatic segmentation of epithelium and stroma in pathological images and advanced this line of research.
Unlike traditional methods, deep learning forms more abstract high-level features by combining low-level features on the basis of massive data. As research into deep learning and big-data analysis deepens, research targets have shifted from simple images to complex large-scale images, and the complexity of histopathology images fits this trend exactly.
Summary of the invention
The technical problem to be solved by the invention is to overcome the deficiencies of the prior art and provide an automatic segmentation method for epithelium and stroma based on a pixel-wise deep convolutional network. Compared with block-based epithelium/stroma segmentation methods, the classification accuracy is considerably improved in both qualitative and quantitative terms.
The present invention adopts the following technical scheme to solve the above technical problem: an automatic segmentation method for epithelium and stroma based on a pixel-wise deep convolutional network, comprising the following steps:
Step 1: preprocess all pathological images to remove color and luminance differences between images;
Step 2: randomly select part of the preprocessed pathological images as training samples and keep the rest as test samples;
Step 3: according to the manually annotated tissue-region maps, select blocks from the interior of epithelium and stroma regions in the training samples;
Step 4: according to the manually annotated tissue-region maps, select blocks from the epithelium/stroma boundary in the training samples;
Step 5: merge the blocks obtained in steps 3 and 4 and randomly split them into a training set and a test set;
Step 6: build a deep convolutional neural network model (DCNN) containing convolutional layers, pooling layers, rectified linear unit (ReLU) activations, local response normalization layers, and a classifier; train the model with the training set and test set from step 5;
Step 7: take a pathological image from the test samples of step 2 and, centered on each pixel in the image, construct a block of size Q × Q, where Q is the input size of the deep convolutional neural network;
Step 8: feed the blocks constructed in step 7 into the DCNN model trained in step 6 to obtain the classification results.
As a further refinement of the above method, pseudo-color rendering is performed according to the classification results obtained in step 8.
As a further refinement of the above method, Q is 32.
As a further refinement of the above method, step 4 is as follows: according to the manually annotated tissue-region map, find the boundary line between epithelium and stroma in the training samples, dilate the boundary line to obtain the coordinates of points near it, and build 32 × 32 blocks centered on these points; if the center point falls in epithelium, the block is regarded as an epithelium block, otherwise as a stroma block.
As a further refinement of the above method, the deep convolutional neural network model DCNN of step 6 is built as follows:
The deep convolutional neural network is initialized with the weight matrices of the model used by Alex [Krizhevsky] to successfully classify the CIFAR-10 dataset.
The concrete structure of the deep convolutional neural network:
1) Convolutional layer
Let the filter bank of layer l be W^l = {W_1^l, …, W_{n_l}^l}, where each W_i^l is an m_l × m_l filter, m_l is the filter size at layer l, and n_l is the number of filters in W^l. Each input block x^{l-1} has size w_{l-1} × w_{l-1}. Each m_l × m_l filter slides over all local receptive fields of the image and is convolved with each of them; the n_l filters together generate n_l feature maps, each of size (w_{l-1} − m_l + 1) × (w_{l-1} − m_l + 1). This linear filtering is expressed as y_i^l = W_i^l * x^{l-1}.
2) The rectified linear unit (ReLU) activation function is f(x) = max(0, x).
3) Pooling layer
The pooling layer performs a pyramid down-sampling operation on the feature maps produced by the previous convolutional layer: within each local receptive field, the maximum or average value is extracted as the feature value for the next layer. After this nonlinear operation, the feature map size of the image becomes w_l = w_{l-1} / s, where s is the size of the pooling operation.
4) Local response normalization layer
This layer performs local subtractive and divisive normalization.
5) Output layer
The last layer of the whole network is the output layer, which is a classifier: its input is the last feature layer of the network and its output is the class label. In the deep convolutional neural network, the logistic-regression model of the two-class Softmax classifier is
P(y = j | x; θ) = exp(θ_j^T x) / Σ_{i=1}^{k} exp(θ_i^T x),
where x is the feature vector of a sample, T denotes transposition, and θ is the parameter.
The input of the Softmax classifier is the output of the last layer of the DCNN; the parameter θ of the Softmax classifier is obtained by minimizing the loss function
J(θ) = −(1/m) Σ_{i=1}^{m} Σ_{j=1}^{k} 1{y^(i) = j} log P(y^(i) = j | x^(i); θ),
where m is the number of samples, y^(i) is the label of the i-th sample, x^(i) is the feature vector of the i-th sample, and k is the number of classes.
θ denotes all model parameters:
θ = [θ_1^T; θ_2^T; …; θ_k^T],
where θ_j is the parameter used when classifying into class j and is also the j-th row of θ, with 0 < j < k + 1 and j an integer.
Given the learned parameter θ of the Softmax classifier, each image block obtained by the sliding window is first propagated forward through the DCNN to obtain its feature vector x^(i), which is then fed to the logistic-regression model to obtain a probability between 0 and 1; the final class of the image block is
ŷ = argmax_{l} e^{θ_l^T x^(i)} / Σ_{j=1}^{k} e^{θ_j^T x^(i)},
where e is the natural base, k = 2, and θ_l is the parameter used when classifying into class l, i.e. the l-th row of θ.
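The Softmax prediction rule above can be sketched in plain Python. The parameter values below are hypothetical illustrations, not the patent's trained weights; in the method, θ is obtained by minimizing J(θ) over DCNN features.

```python
import math

def softmax_predict(theta, x):
    """Two-class softmax (logistic regression) prediction.

    theta: list of k parameter rows (theta_j); x: feature vector from the
    network's last layer. Returns (class probabilities, predicted class).
    """
    scores = [sum(t * xi for t, xi in zip(theta_j, x)) for theta_j in theta]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    probs = [e / z for e in exps]         # P(y = j | x; theta)
    return probs, probs.index(max(probs))

# k = 2 classes: 0 = epithelium, 1 = stroma (labels as in step 8)
theta = [[0.5, -1.0], [-0.5, 1.0]]        # hypothetical 2x2 parameter matrix
probs, label = softmax_predict(theta, [2.0, 0.5])
# the probabilities sum to 1 and the larger score decides the class
```
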
Compared with the prior art, the above technical scheme has the following technical effects:
(1) under the same experimental conditions, the detection accuracy of the method is higher than that of segmentation methods based on pixel blocks;
(2) the invention classifies each pixel individually, avoiding the problem in block-based segmentation of assigning pixels of different classes to the same block;
(3) for edge tissue, the method mirrors the edge pixels to extend the border, so that edge pixels can also be classified;
(4) the method overlays the segmentation result on the original image, so that clinicians can view it directly and make follow-up diagnoses on that basis.
Brief description of the drawings
Fig. 1 is the structure of the deep convolutional neural network.
Fig. 2 is the overall experimental flow of the deep convolutional neural network; (a) is the original H&E pathological image; (b) shows the 32×32 blocks taken from (a) by a sliding window; (c) shows a block being fed into the whole deep convolutional neural network (schematic) to obtain the classification result; (d) shows the center pixel of the block in (b) being pseudo-colored according to the classification result in (c); (e) is the result obtained after all sliding blocks of the whole image have been colored, which serves as the segmentation result.
Fig. 3 illustrates the extraction of tissue-edge blocks in the present invention; (a) is the original H&E pathological image; (b) is the manual annotation by a pathologist (dark gray is epithelium, light gray is stroma, black is a region of no interest); (c) shows the epithelium/stroma dividing line obtained from the manual annotation after dilation; (d) shows points randomly sampled in the dividing-line region, with blocks built centered on those points; (e) is a stroma block; (f) is an epithelium block.
Fig. 4 shows the pseudo-color results of different models for epithelium/stroma segmentation of pathological images; (a) is the original pathological image, (b) is the accurate manual annotation by a pathologist, (c) is the proposed pixel-wise deep convolutional neural network method; (d)-(i) are the pseudo-color segmentation results of SW-SVM, SW-SMC, Ncut-SVM, Ncut-SMC, SLIC-SVM, and SLIC-SMC, respectively.
Fig. 5a compares the ROC curves of the method of the invention and existing pixel-block-based segmentation methods on the NKI dataset.
Fig. 5b compares the ROC curves of the method of the invention and existing pixel-block-based segmentation methods on the VGH dataset.
Specific embodiment
In order to make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
Step 1: pathological image preprocessing, removing color and luminance differences between images.
A pathological image is chosen in advance as the target image; after color normalization, all other pathological images will have the same color distribution as the target image. Specifically, the target image and the pathological image to be normalized are converted from RGB color space to LAB color space, a linear transformation is applied to the gray values of each pixel in the three channels, and the linearly transformed LAB image is converted back to RGB color space; the normalized pathological image then has a color distribution similar to that of the target image.
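The per-channel linear transformation can be sketched as follows, in pure Python on one channel of gray values. The RGB↔LAB conversions the patent performs before and after this step are omitted here for brevity; the mapping statistics are illustrative.

```python
def normalize_channel(original, target_mean, target_std):
    """Map one LAB channel to the target image's statistics:
    mapped = (original - mu_orig) * (sigma_target / sigma_orig) + mu_target.
    """
    n = len(original)
    mu = sum(original) / n
    sigma = (sum((v - mu) ** 2 for v in original) / n) ** 0.5
    return [(v - mu) * (target_std / sigma) + target_mean for v in original]

channel = [10.0, 20.0, 30.0]              # toy gray values of one channel
mapped = normalize_channel(channel, target_mean=50.0, target_std=5.0)
# mapped now has mean 50 and standard deviation 5, matching the target image
```
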
Step 2: take part of the pathological images as training samples and keep the rest as test samples.
Images are selected at random while ensuring that training samples and test samples are completely disjoint.
Step 3: according to the expert annotation, select image blocks from the interior of epithelium and stroma regions.
Image blocks in which all pixels belong to epithelium or all to stroma are selected from the pathological images. Tissue-region annotation on the slide images is carried out by clinicians with professional pathology knowledge, and a program then selects square image blocks with a side length of 32 pixels from the annotated regions. Blocks taken from epithelium serve as positive samples, and blocks taken from stroma serve as negative samples.
Step 4: according to the expert annotation, select image blocks from the epithelium/stroma boundary.
According to the expert annotation, find the boundary between epithelium and stroma in the training samples and dilate the boundary line to obtain the coordinates of the points near it. Build 32 × 32 blocks centered on these points; if the center point falls in epithelium, the block is regarded as an epithelium block, otherwise as a stroma block.
Step 5: merge the blocks obtained in steps 3 and 4 and randomly split them into a training set and a test set.
The data obtained in steps 3 and 4 are integrated by random screening, with a ratio of interior blocks to edge blocks of approximately 1:4.
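The boundary sampling of step 4 can be sketched on a toy binary mask (1 = epithelium, 0 = stroma). The 3×3-window dilation below is a simple stand-in for the morphological dilation the patent applies to the boundary line; the mask values are invented for illustration.

```python
def boundary_points(mask, radius=1):
    """Return the points within `radius` of the epithelium/stroma boundary.

    A pixel is a boundary pixel if any 4-neighbor has a different label;
    dilating that boundary set with a (2*radius+1)-wide square window gives
    the candidate block centers used in step 4.
    """
    h, w = len(mask), len(mask[0])
    boundary = set()
    for i in range(h):
        for j in range(w):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and mask[ni][nj] != mask[i][j]:
                    boundary.add((i, j))
    dilated = set()
    for (i, j) in boundary:
        for di in range(-radius, radius + 1):
            for dj in range(-radius, radius + 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    dilated.add((ni, nj))
    return dilated

mask = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [1, 1, 0, 0]]
pts = boundary_points(mask)
# each sampled point's own label decides whether its 32x32 block counts
# as an epithelium block or a stroma block
labels = {(i, j): mask[i][j] for (i, j) in pts}
```
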
Step 6: build a deep convolutional neural network model (DCNN) containing convolutional layers, ReLU activations, pooling layers, local response normalization layers, and a final classifier.
A deep convolutional neural network (DCNN) is a kind of artificial neural network whose weight-sharing structure makes it more similar to a biological neural network; it reduces the complexity of the network model and the number of weights. This advantage is especially apparent when the network input is a multi-dimensional image: the image can be fed into the network directly, avoiding the complicated feature extraction and data reconstruction of traditional recognition algorithms. A deep convolutional network is a multilayer perceptron specifically designed for recognizing two-dimensional shapes, and its structure is highly invariant to translation, scaling, tilting, and other common forms of deformation.
The performance of a DCNN model depends to some extent on the training samples and the initial network weights. Random initialization easily falls into local optima, so here the network of the invention is initialized with the weight matrices of the model used by the well-known scholar Alex [Krizhevsky] to successfully classify the CIFAR-10 dataset.
The concrete structure of the deep convolutional neural network is described below.
1) Convolutional layer
Let the filter bank of layer l be W^l = {W_1^l, …, W_{n_l}^l}, where each W_i^l is an m_l × m_l filter and n_l is the number of filters in W^l. Each input block x^{l-1} has size w_{l-1} × w_{l-1}. Each m_l × m_l filter slides over all local receptive fields of the image, is convolved with each of them, and produces an output; the n_l filters together generate n_l feature maps, each of size (w_{l-1} − m_l + 1) × (w_{l-1} − m_l + 1). This linear filtering can be expressed simply as y_i^l = W_i^l * x^{l-1}.
2) ReLU activation function
To imitate the working principle of neurons in the human brain, and to better fit and represent the data, the feature maps obtained after each layer of linear filtering are passed through a nonlinear activation function; here the ReLU activation f(x) = max(0, x) is used. Compared with the traditional sigmoid activation, ReLU does not saturate and converges faster under gradient descent, which speeds up training of the whole network.
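The ReLU nonlinearity is simply elementwise max(0, x); a one-line sketch over a flattened feature map:

```python
def relu(values):
    """Apply f(x) = max(0, x) elementwise to a (flattened) feature map."""
    return [max(0.0, v) for v in values]

activated = relu([-2.0, -0.5, 0.0, 1.5])
# negative responses are clamped to zero; positive responses pass unchanged
```
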
3) Pooling layer (S)
The pooling layer performs a pyramid down-sampling operation on the feature maps of the previous convolutional layer: within each local receptive field, the maximum (or average) value is extracted as the feature value for the next layer. The pooling layer therefore has no parameters; only a nonlinear operation is needed. The reason is that in a large meaningful image, the information of a local region is redundant, and what we want to extract is exactly the feature that represents and reflects its strongest response. After the pooling operation, the feature map size of the image becomes w_l = w_{l-1} / s, where s is the size of the pooling operation.
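The feature-map size bookkeeping of the convolution and pooling formulas above can be checked with simple arithmetic. The input size 32 matches the Q = 32 blocks; the filter and pooling sizes below are illustrative, as the patent does not list its exact layer dimensions.

```python
def conv_out(w, m):
    """Valid convolution with an m x m filter: output is (w - m + 1) wide."""
    return w - m + 1

def pool_out(w, s):
    """Non-overlapping pooling with window s: output is w / s wide."""
    assert w % s == 0, "pool size must divide the map size"
    return w // s

w = 32              # Q x Q input block
w = conv_out(w, 5)  # 5x5 filters -> 28x28 feature maps
w = pool_out(w, 2)  # 2x2 pooling -> 14x14
w = conv_out(w, 5)  # -> 10x10
w = pool_out(w, 2)  # -> 5x5
# w is now the spatial size entering the classifier stage
```
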
4) Local response normalization layer
This module essentially performs local subtractive and divisive normalizations. It forces adjacent features within a feature map, as well as features at the same spatial position across different feature maps, to compete locally. Subtractive normalization at a given position subtracts from the value at that position the weighted values of the neighboring pixels, with the weights determined by a Gaussian window so that the influence decreases with distance. Divisive normalization first computes, for each feature map, the weighted sum over the neighborhood of the same spatial position, then takes the mean of this value over all feature maps, and finally recomputes the value at that position in each feature map as the original value divided by max(mean, weighted sum over that map's neighborhood); the denominator represents the weighted standard deviation over the same spatial neighborhood of all feature maps. For a single image this amounts to mean subtraction and variance normalization, i.e. feature standardization. The scheme is inspired by computational-neuroscience models: local response normalization mimics the lateral inhibition mechanism of biological nervous systems, creating a competition mechanism among the activities of local neurons so that relatively large responses become relatively larger, which improves the generalization ability of the model.
5) Output layer
The last layer of the whole network is the output layer, which is a classifier: its input is the last feature layer of the network and its output is the class label. In the deep convolutional neural network, the logistic-regression model of the two-class Softmax classifier is
P(y = j | x; θ) = exp(θ_j^T x) / Σ_{i=1}^{k} exp(θ_i^T x),
where the training set consists of m labeled samples {(x^(1), y^(1)), …, (x^(m), y^(m))}, x is the feature vector of a sample, T is the transposition symbol, and θ is the parameter.
The input of the Softmax classifier is the output of the last layer of the DCNN; the parameter θ of the Softmax classifier is obtained by minimizing the loss function
J(θ) = −(1/m) Σ_{i=1}^{m} Σ_{j=1}^{k} 1{y^(i) = j} log P(y^(i) = j | x^(i); θ),
where m is the number of samples, y^(i) is the label of the i-th sample, and x^(i) is the feature vector of the i-th sample.
For convenience, the symbol θ denotes all model parameters:
θ = [θ_1^T; θ_2^T; …; θ_k^T],
where the subscript of θ indicates the class, the superscript T is the transposition symbol, and k is the total number of classes.
Given the learned parameter θ of the Softmax classifier, each image block obtained by the sliding window is first propagated forward through the DCNN to obtain its feature vector x^(i), which is then fed to the logistic-regression model to obtain a probability between 0 and 1; the final class of the image block is
ŷ = argmax_{l} e^{θ_l^T x^(i)} / Σ_{j=1}^{k} e^{θ_j^T x^(i)},
where e is the natural base, k = 2, and θ_l is the parameter used when classifying into class l, i.e. the l-th row of θ.
Step 7: take a pathological image from the test samples of step 2 and, centered on each pixel in the image, construct a 32 × 32 block.
Centered on each pixel, 15 pixels are taken on one side and 16 on the other (along both axes) to form a 32 × 32 block. For edge pixels, the image border is extended by mirroring the edge pixels, so that blocks can conveniently be taken there and the edge pixels can also be classified.
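The mirror extension of the image border can be sketched in one dimension (the same reflection is applied along rows and columns). Pad widths 15 and 16 match a 32-wide window with the center pixel at offset 15; whether the reflection excludes the edge pixel itself is an assumption of this sketch.

```python
def mirror_pad(row, before=15, after=16):
    """Extend a row of pixels by reflecting the border, so that a 32-wide
    window can be centered on every original pixel (including edge pixels)."""
    left = row[1:before + 1][::-1]        # reflect, excluding the edge pixel
    right = row[-after - 1:-1][::-1]
    return left + row + right

def block_1d(padded, center, before=15, after=16):
    """32 pixels around original pixel index `center` (15 left, 16 right)."""
    i = center + before                   # index shift caused by the padding
    return padded[i - before:i + after + 1]

row = list(range(100))                    # toy scanline of pixel values
padded = mirror_pad(row)
blk = block_1d(padded, 0)                 # a full block even around pixel 0
```
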
Step 8: feed the blocks from step 7 into the pre-trained deep convolutional neural network to obtain the classification results, and render them in pseudo-color.
The blocks taken in step 7 are fed into the DCNN model trained in step 6 to obtain the final output. If the result is 0, the center pixel of the block is considered an epithelium pixel and is colored dark gray; if the result is 1, the center pixel is considered a stroma pixel and is colored light gray. Meanwhile, the positions of the black regions in the expert annotation are found, and the same locations in the pseudo-color result are colored black.
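The pseudo-coloring of step 8 reduces to a per-pixel lookup. A minimal sketch follows; the RGB triples for the gray levels are illustrative, since the patent only specifies dark gray for epithelium, light gray for stroma, and black for regions of no interest.

```python
DARK_GRAY = (90, 90, 90)      # epithelium (DCNN output 0); illustrative RGB
LIGHT_GRAY = (190, 190, 190)  # stroma (DCNN output 1); illustrative RGB
BLACK = (0, 0, 0)             # regions marked black in the expert annotation

def pseudo_color(predictions, dont_care):
    """predictions: 2-D grid of 0/1 DCNN outputs; dont_care: same-shape grid
    of booleans copied from the black regions of the expert annotation."""
    palette = {0: DARK_GRAY, 1: LIGHT_GRAY}
    return [[BLACK if dont_care[i][j] else palette[p]
             for j, p in enumerate(prow)]
            for i, prow in enumerate(predictions)]

preds = [[0, 0, 1],
         [0, 1, 1]]
mask = [[False, False, False],
        [True, False, False]]
img = pseudo_color(preds, mask)
```
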
To facilitate public understanding of the technical scheme of the invention, a specific embodiment is given below.
In this embodiment, the technical scheme of the invention is applied to breast-cancer tissue image sets stained with hematoxylin and eosin (H&E). The method is tested on two databases, provided respectively by the Netherlands Cancer Institute (NKI) and the Vancouver General Hospital (VGH). They comprise 157 pathological images with epithelium and stroma hand-labeled by pathologists (NKI: 106; VGH: 51). Each image is cropped from an H&E-stained breast-cancer tissue microarray (TMA) at 20× optical resolution, with image size 1128 × 720.
In this embodiment, tissue features are extracted by the deep convolutional neural network and classified by a softmax classifier. To verify the effectiveness of the proposed pixel-wise deep convolutional network segmentation of epithelium and stroma, it is compared with several common epithelium/stroma segmentation methods that extract block features, including SW-SVM (sliding window + support vector machine classification), SW-SMC (sliding window + softmax classification), Ncut-SVM (normalized graph cut + SVM), Ncut-SMC (normalized graph cut + softmax), SLIC-SVM (simple linear iterative clustering + SVM), and SLIC-SMC (simple linear iterative clustering + softmax).
Step 1, pathological image pretreatment operation, get rid of the colour brightness difference between image and image;
The method chooses a width pathological image as target image in advance, and other pathological images are after color normalization All there will be identical distribution of color with target image.Specific method is by target image and pathological image to be normalized from RGB Color space conversion carries out a linear transformation, then to LAB color spaces to three gray values of each pixel of passage The pathological image to be normalized of the LAB color spaces after linear transformation is reduced to RGB color, just can be made to be normalized Pathological image have with target image as distribution of color.
The linear mapping of pixel gray values is, per LAB channel,

$\text{mapped} = \dfrac{\sigma_{\text{target}}}{\sigma_{\text{original}}}\,(\text{original} - \mu_{\text{original}}) + \mu_{\text{target}}$

where $\sigma$ and $\mu$ denote the standard deviation and the mean of the gray values of all pixels in the corresponding LAB channel; target denotes the target image, original the image before normalization, and mapped the image after normalization.
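The per-channel statistic matching described above can be sketched as follows (a minimal sketch operating on arrays assumed to already be in the LAB color space; the RGB↔LAB conversion itself, e.g. via skimage.color.rgb2lab, is outside this fragment, and `match_channel_stats` is a hypothetical helper name):

```python
import numpy as np

def match_channel_stats(original, target):
    """Linearly map each channel of `original` so that its per-channel mean
    and standard deviation match those of `target`.

    Both inputs are float arrays of shape (H, W, 3), assumed to already be
    in the LAB color space."""
    mapped = np.empty_like(original, dtype=np.float64)
    for c in range(original.shape[2]):
        mu_o, sd_o = original[..., c].mean(), original[..., c].std()
        mu_t, sd_t = target[..., c].mean(), target[..., c].std()
        # mapped = (original - mu_original) * sd_target / sd_original + mu_target
        mapped[..., c] = (original[..., c] - mu_o) * (sd_t / sd_o) + mu_t
    return mapped
```

After the mapping, every channel of the normalized image has exactly the mean and standard deviation of the corresponding target channel, which is what "the same color distribution as the target image" amounts to under this linear model.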
Step 2, take out part of the pathological images as training samples; the remainder serve as test samples;
Images are chosen at random from the data, while ensuring that the training samples and test samples are completely disjoint.
Step 3, according to the expert annotation, choose image blocks from the interior of the epithelium and stroma regions;
Image blocks in which every pixel belongs to epithelial tissue or to stromal tissue are chosen from the pathological images. For the selection of these tissue image blocks, clinicians with professional pathology knowledge mark tissue regions on the large section images, and the program then chooses square image blocks with a side length of 32 pixels from within the marked regions. The blocks chosen inside epithelial tissue serve as positive samples, and the blocks chosen inside stromal tissue serve as negative samples.
Step 4, according to the expert annotation, choose image blocks from the edges between epithelium and stroma;
According to the expert annotation ((b) in Fig. 3), the boundary between epithelium and stroma in the training sample is found, and a morphological dilation is applied to the boundary line to obtain a dilated border ((c) in Fig. 3). The coordinates of the points belonging to the dilated boundary are then obtained. A 32 × 32 block is built centered on each of these points; if the center point falls in epithelial tissue, the block is considered an epithelial tissue block ((f) in Fig. 3), otherwise a stromal tissue block ((e) in Fig. 3). For better visualization, the original image ((a) in Fig. 3) is fused with the dilated border image ((c) in Fig. 3) to obtain the border schematic ((d) in Fig. 3).
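Step 4 can be sketched as below. This is a sketch under assumptions: a 3×3-cross dilation repeated 3 times stands in for the unspecified structuring element and dilation amount, `dilate` and `edge_patch_labels` are hypothetical helper names, and the 0/1 label convention follows step 8 (0 = epithelium, 1 = stroma):

```python
import numpy as np

def dilate(mask, iterations=1):
    """Binary dilation with a 3x3 cross structuring element, via shifts."""
    out = mask.astype(bool)
    for _ in range(iterations):
        m = out
        out = m.copy()
        out[1:, :] |= m[:-1, :]
        out[:-1, :] |= m[1:, :]
        out[:, 1:] |= m[:, :-1]
        out[:, :-1] |= m[:, 1:]
    return out

def edge_patch_labels(boundary, epithelium_mask, patch=32):
    """Return (row, col, label) for every point of the dilated boundary that
    admits a full patch x patch block inside the image."""
    h, w = boundary.shape
    half = patch // 2
    band = dilate(boundary, iterations=3)   # assumed dilation amount
    samples = []
    for r, c in zip(*np.nonzero(band)):
        if half - 1 <= r < h - half and half - 1 <= c < w - half:
            samples.append((r, c, 0 if epithelium_mask[r, c] else 1))
    return samples
```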
Step 5, integrate the blocks obtained in steps 3 and 4 and randomly divide them into a training set and a test set;
The data obtained in steps 3 and 4 are integrated by random screening, with a ratio of tissue-interior blocks to tissue-edge blocks of approximately 1:4. The sample sizes are shown in Table 1.
Table 1. Numbers of training samples
Step 6, build a deep convolutional neural network model (DCNN); the model contains convolutional layers, rectified linear unit (ReLU) activation functions, pooling layers, local response normalization layers and a final classifier;
For the convolutional neural network, the framework used in the present invention is the currently very popular Caffe framework. The network structure is shown in Fig. 1:
The first layer performs a convolution operation on the image using 32 convolution kernels (conv) (kernel size = 5; stride = 1; image-edge mirror padding pad = 2).
The second layer down-samples the convolution result by max pooling (pool) (pooling kernel size = 3; stride = 2; pad = 0).
Then a ReLU activation function and local response normalization (LRN) are applied.
The third layer performs a convolution operation using 32 convolution kernels (kernel size = 5; stride = 1; pad = 2).
Then a ReLU activation function is applied.
The fourth layer down-samples the convolution result by max pooling (pooling kernel size = 3; stride = 2; pad = 0).
Then local response normalization is applied.
The fifth layer performs a convolution operation using 64 convolution kernels (kernel size = 5; stride = 1; pad = 2).
Then a ReLU activation function is applied.
The sixth layer down-samples the convolution result by max pooling (pooling kernel size = 3; stride = 2; pad = 0).
The seventh layer is a fully connected layer (ip) of 64 units fully connected to the previous layer.
The eighth layer outputs the classification result and the loss value compared with the ground truth.
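Given the kernel sizes, strides and padding listed above, the spatial size of the feature maps can be traced through the three conv/pool stages with Caffe's output-size formulas (a sketch: `conv_out`, `pool_out` and the layer names are illustrative; the ceiling in the pooling formula follows Caffe's convention):

```python
import math

def conv_out(w, k, s, p):
    # Caffe convolution output size: floor((w + 2p - k) / s) + 1
    return (w + 2 * p - k) // s + 1

def pool_out(w, k, s, p):
    # Caffe pooling output size uses a ceiling instead of a floor
    return math.ceil((w + 2 * p - k) / s) + 1

def trace(size=32):
    """Trace the feature-map side length through conv(k5,s1,p2) + pool(k3,s2,p0)
    repeated three times, starting from a 32x32 input block."""
    sizes = [("input", size)]
    for name in ("conv1", "conv2", "conv3"):
        size = conv_out(size, k=5, s=1, p=2)   # padding 2 keeps the size
        sizes.append((name, size))
        size = pool_out(size, k=3, s=2, p=0)   # pooling roughly halves it
        sizes.append((name.replace("conv", "pool"), size))
    return sizes
```

Under these assumptions a 32 × 32 block shrinks to 16 × 16, 8 × 8 and finally 4 × 4 feature maps before the 64-unit fully connected layer.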
Step 7, take out a pathological image from the test samples of step 2 and, centered on each point in the image, construct a 32 × 32 block;
Centered on each pixel, 15 pixels are taken upward and 16 pixels downward (and likewise to the left and right), forming a 32 × 32 block. For pixels near the image edge, to make it convenient to take the blocks, the edge is expanded by mirroring the edge pixels, so that these pixels can be classified as well;
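The mirror-padded block extraction described above can be sketched with `numpy.pad` (a sketch; `patch_at` is a hypothetical helper name, and the example works on a single-channel image):

```python
import numpy as np

def patch_at(image, r, c, size=32):
    """Extract the size x size block centered at pixel (r, c): size//2 - 1
    pixels above/left and size//2 below/right of the center, mirroring the
    image edges so that border pixels also get a full block."""
    up = size // 2 - 1          # 15 for size 32
    down = size // 2            # 16 for size 32
    padded = np.pad(image, ((up, down), (up, down)), mode="reflect")
    # pixel (r, c) of the original image sits at (r + up, c + up) in `padded`
    return padded[r:r + size, c:c + size]
```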
Step 8, input the blocks from step 7 into the deep convolutional neural network trained in advance to obtain the classification results, and perform pseudo-coloring according to the classification results;
The blocks taken out in step 7 are input into the deep convolutional neural network model trained in step 6 to obtain the final output result. If the result is 0, the center pixel of the block is considered an epithelial tissue pixel and is dyed dark gray. If the result is 1, the center pixel of the block is considered a stromal tissue pixel and is dyed light gray. At the same time, the positions of the black regions in the expert annotation are found, and the same locations in the pseudo-color result are dyed black.
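The pseudo-coloring rule above can be sketched as follows (the concrete gray levels are illustrative assumptions, not values fixed by the text):

```python
import numpy as np

# Illustrative gray levels: dark gray, light gray, black.
EPITHELIUM, STROMA, BACKGROUND = 80, 200, 0

def pseudo_color(labels, background_mask):
    """Map per-pixel class labels (0 = epithelium, 1 = stroma) to gray levels,
    then overwrite the background positions taken from the expert annotation
    with black."""
    out = np.where(labels == 0, EPITHELIUM, STROMA).astype(np.uint8)
    out[background_mask] = BACKGROUND
    return out
```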
Fig. 2 is the overall experimental flow chart of the deep convolutional neural network; wherein, (a) is the original H&E pathological image; (b) shows the 32 × 32 blocks taken out of (a) by a sliding window; (c) shows a block being input into the entire deep convolutional neural network (schematic) to obtain a classification result; (d) shows the center pixel of the block in (b) being pseudo-colored according to the classification result of (c); (e) is the result obtained after all sliding blocks of the whole picture have been colored, which serves as the segmentation result.
To verify the validity of the pixel-wise epithelium and stroma segmentation based on deep convolutional networks of the invention, it is compared with several common block-based epithelium and stroma segmentation methods that use deep convolutional neural networks to extract patch features, including SW-SVM (sliding window + support vector machine classification), SW-SMC (sliding window + softmax classification), Ncut-SVM (normalized graph cut + support vector machine classification), Ncut-SMC (normalized graph cut + softmax classification), SLIC-SVM (simple linear iterative clustering + support vector machine classification), and SLIC-SMC (simple linear iterative clustering + softmax classification).
Fig. 4 illustrates the pseudo-color results of the different models for segmenting epithelium and stroma in pathological images. Wherein, (a) in Fig. 4 is the original pathological image; (b) in Fig. 4 is the accurate manual annotation by a pathologist, in which the dark gray part denotes epithelial tissue, the light gray part denotes stromal tissue, and the black part is the background region, i.e., the region of no concern; (c) in Fig. 4 is the method based on pixel-wise deep convolutional neural networks proposed in this section; (d-i) in Fig. 4 are the pseudo-color segmentation results obtained by SW-SVM, SW-SMC, Ncut-SVM, Ncut-SMC, SLIC-SVM and SLIC-SMC respectively, in which dark gray denotes the regions the classifier labels as epithelial tissue, light gray denotes the regions it labels as stromal tissue, and black is the background region of no concern.
It can be seen from the results that, leaving out the background region (i.e., the black region in the expert annotation), the result of the algorithm proposed by the present invention has very high similarity to the expert annotation, with an obvious advantage.
To express the experimental results quantitatively, parameters derived from the confusion matrix and the ROC curve are used to compare the experimental results.
TP denotes true positives, i.e., the number of pixels labeled epithelial tissue by the expert and considered epithelial tissue by the classifier;
FP denotes false positives, i.e., the number of pixels labeled stromal tissue by the expert but considered epithelial tissue by the classifier;
FN denotes false negatives, i.e., the number of pixels labeled epithelial tissue by the expert but considered stromal tissue by the classifier;
TN denotes true negatives, i.e., the number of pixels labeled stromal tissue by the expert and considered stromal tissue by the classifier.
The formulas of the parameters derived from the confusion matrix are shown in Table 2: the true positive rate (TPR), true negative rate (TNR), positive predictive value (PPV), negative predictive value (NPV), false positive rate (FPR), false negative rate (FNR), false discovery rate (FDR), accuracy (ACC), F1 score (F1) and Matthews correlation coefficient (MCC) are all evaluation indices derived from the four confusion matrix parameters above. Among them, ACC, F1 and MCC are indices that assess the overall capability of the model.
Table 2. Formulas of the parameters derived from the confusion matrix
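The Table 2 quantities can be computed directly from the four counts; a minimal sketch (`confusion_metrics` is a hypothetical helper name):

```python
import math

def confusion_metrics(tp, fp, fn, tn):
    """Standard metrics derived from a binary confusion matrix, matching the
    quantities listed in Table 2."""
    tpr = tp / (tp + fn)                     # true positive rate (recall)
    tnr = tn / (tn + fp)                     # true negative rate
    ppv = tp / (tp + fp)                     # positive predictive value
    npv = tn / (tn + fn)                     # negative predictive value
    acc = (tp + tn) / (tp + fp + fn + tn)    # accuracy
    f1 = 2 * ppv * tpr / (ppv + tpr)         # F1 score
    mcc = (tp * tn - fp * fn) / math.sqrt(   # Matthews correlation coefficient
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"TPR": tpr, "TNR": tnr, "PPV": ppv, "NPV": npv,
            "FPR": 1 - tnr, "FNR": 1 - tpr, "FDR": 1 - ppv,
            "ACC": acc, "F1": f1, "MCC": mcc}
```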
Table 3 below gives the quantitative assessment (%) of the segmentation results of the different models, where the bold numbers are the best values for each index.
Table 3
A graph showing ROC curves is called an "ROC graph". When comparing multiple learners, if the ROC curve of one learner is completely "enclosed" by the curve of another learner, it can be asserted that the latter performs better than the former; if the two ROC curves intersect, it is hard to assert which of the two performs better. A reasonable criterion of comparison is the area under the ROC curve, i.e., the AUC (Area Under ROC Curve). By definition, the AUC can be obtained by summing the areas of the parts under the ROC curve; the larger the AUC, the better the effect. Fig. 5a shows the ROC curves of the segmentation performance of the several methods on the NKI data set, and Fig. 5b shows the ROC curves on the VGH data set. From the AUC values it can be seen that the automatic epithelium and stroma segmentation based on pixel-wise deep convolutional networks proposed by the present invention is better than the block-based automatic epithelium and stroma segmentation.
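Summing the area under the ROC curve, as described above, can be sketched by sweeping the decision threshold over the sorted scores and accumulating trapezoids (a sketch that ignores tied scores; `roc_auc` is a hypothetical helper name):

```python
def roc_auc(scores, labels):
    """AUC by summing trapezoid areas under the ROC curve: sort predictions by
    decreasing score, lower the threshold one prediction at a time, and
    accumulate the area between successive (FPR, TPR) points."""
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    auc = 0.0
    prev_fpr = prev_tpr = 0.0
    for _, y in pairs:
        if y:
            tp += 1
        else:
            fp += 1
        fpr, tpr = fp / neg, tp / pos
        auc += (fpr - prev_fpr) * (tpr + prev_tpr) / 2   # trapezoid area
        prev_fpr, prev_tpr = fpr, tpr
    return auc
```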

Claims (5)

1. A pixel-wise deep convolutional network based method for the automatic segmentation of epithelium and stroma, characterized by comprising the following steps:
Step 1, carry out a preprocessing operation on all pathological images to remove the color and brightness differences between pathological images;
Step 2, randomly select part of the preprocessed pathological images as training samples, with the remainder as test samples;
Step 3, according to the manually annotated tissue region map, choose blocks from the interior of the epithelium and stroma regions in the training samples;
Step 4, according to the manually annotated tissue region map, choose blocks from the edges between the epithelium and stroma regions in the training samples;
Step 5, integrate the blocks obtained in steps 3 and 4 and randomly divide them into a training set and a test set;
Step 6, build a deep convolutional neural network model DCNN containing convolutional layers, pooling layers, rectified linear unit activation functions, local response normalization layers and a classifier; train the deep convolutional neural network model using the training set and test set of step 5;
Step 7, take out a pathological image from the test samples of step 2 and, centered on each point in the pathological image, construct a Q × Q block, where Q is the input size of the deep convolutional neural network;
Step 8, input the blocks constructed in step 7 into the deep convolutional neural network model trained in step 6 to obtain the classification results.
2. The pixel-wise deep convolutional network based epithelium and stroma automatic segmentation method according to claim 1, characterized in that pseudo-coloring is performed according to the classification results obtained in step 8.
3. The pixel-wise deep convolutional network based epithelium and stroma automatic segmentation method according to claim 1, characterized in that Q is 32.
4. The pixel-wise deep convolutional network based epithelium and stroma automatic segmentation method according to claim 3, characterized in that step 4 is specifically as follows: according to the manually annotated tissue region map, find the boundary line between epithelium and stroma in the training samples, apply a dilation operation to the boundary line to obtain the coordinates of the points near the boundary line, and construct 32 × 32 blocks centered on these points; if the center point falls in epithelial tissue, the block is considered an epithelial tissue block, otherwise a stromal tissue block.
5. The pixel-wise deep convolutional network based epithelium and stroma automatic segmentation method according to claim 4, characterized in that a deep convolutional neural network model DCNN is built in step 6, specifically as follows:
The deep convolutional neural network is initialized with the weight matrices of the model used by Alex when successfully classifying the CIFAR-10 data;
The concrete structure of the deep convolutional neural network is:
1) Convolutional layer
Assume the filter bank of layer $l$ is $W^l = \{W_1^l, \dots, W_{k_l}^l\}$ and each input is a block $x^{l-1}$ of size $w_{l-1} \times w_{l-1}$. Each $m_l \times m_l$ filter slides over the whole local receptive field of the image and performs a convolution operation with each local receptive field; the $k_l$ filters altogether generate $k_l$ feature maps, each of size $(w_{l-1}-m_l+1) \times (w_{l-1}-m_l+1)$. This linear filtering is expressed as $g_k^l = W_k^l * x^{l-1}$, where $W_k^l$ is an $m_l \times m_l$ filter of layer $l$, $m_l$ denotes the filter size of layer $l$ of the network structure, and $k_l$ is the number of filters in the filter bank $W^l$ of layer $l$;
2) The expression of the rectified linear unit activation function is as follows:
$x^l = f(g_k^l) = \max(0, g_k^l)$;
3) Pooling layer
The pooling layer performs a pyramid down-sampling operation after the convolutional feature mapping of the previous layer: within each local receptive field, the maximum or the mean value is extracted as the feature value of the next layer. After this nonlinear operation, the feature map size of the image becomes
$\dfrac{w_{l-1}-m_l+1}{s} \times \dfrac{w_{l-1}-m_l+1}{s}$
where $s$ is the size of the pooling operation;
4) Local response normalization layer
Performs local subtractive and divisive normalization;
5) Output layer
The last layer of the whole network is the output layer, which is a classifier; the input of the classifier is the last layer of the neural network, and the output of the classifier is the class number. In the deep convolutional neural network, the logistic regression model of the two-class Softmax classifier is:
$h_\theta(x) = \dfrac{1}{1+\exp(-\theta^{T}x)}$;
where $x$ is the feature vector of a sample, $T$ is the transpose symbol, and $\theta$ is the parameter;
The input of the Softmax classifier is the output of the last layer of the DCNN network; the parameter $\theta$ of the Softmax classifier is obtained by minimizing the following loss function $J(\theta)$:
$J(\theta) = -\dfrac{1}{m}\left[\sum_{i=1}^{m} y^{(i)} \log h_\theta(x^{(i)}) + (1-y^{(i)}) \log\left(1-h_\theta(x^{(i)})\right)\right]$
where $m$ is the number of samples, $y^{(i)}$ is the label of the $i$-th sample, $x^{(i)}$ is the feature vector of the $i$-th sample, and $k$ is the number of classes;
$\theta$ denotes all the model parameters, as follows:
$\theta = \begin{bmatrix} \theta_1^T \\ \theta_2^T \\ \vdots \\ \theta_k^T \end{bmatrix}$
where $\theta_j^T$ is the parameter used when classifying into class $j$, and is also the $j$-th row of the parameter matrix $\theta$, with $0 < j < k+1$ and $j$ an integer;
According to the obtained Softmax parameter $\theta$, each image block obtained by the sliding window first undergoes a forward propagation through the DCNN to obtain its feature vector $x^{(i)}$, which is then sent to the logistic regression model to obtain probability values between 0 and 1. The final class $\hat{i}$ of the image block is:
$\hat{i} = \arg\max_j\, p(y^{(i)} = j \mid x^{(i)}; \theta)$;
$p(y^{(i)} = j \mid x^{(i)}; \theta) = \dfrac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^{k} e^{\theta_l^T x^{(i)}}}$
where $e$ is the natural base, $k = 2$, and $\theta_l^T$ is the parameter used when classifying into class $l$, and is also the $l$-th row of the parameter matrix $\theta$.
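The class-probability expression and the argmax decision of claim 5 can be sketched in plain Python (a sketch; `softmax_probs` and `classify` are hypothetical names, and the max-score subtraction is a standard numerical-stability step not stated in the claim):

```python
import math

def softmax_probs(theta, x):
    """p(y = j | x; theta) for each class j: exponentiate the per-class scores
    theta_j . x and normalize, as in the expression above (k = 2 here)."""
    scores = [sum(t_i * x_i for t_i, x_i in zip(theta_j, x)) for theta_j in theta]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def classify(theta, x):
    """Final class of an image block: argmax_j p(y = j | x; theta)."""
    probs = softmax_probs(theta, x)
    return max(range(len(probs)), key=probs.__getitem__)
```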
CN201611085781.4A 2016-11-30 2016-11-30 Based on point depth convolutional network epithelium and matrix organization's automatic division method pixel-by-pixel Pending CN106780498A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611085781.4A CN106780498A (en) 2016-11-30 2016-11-30 Based on point depth convolutional network epithelium and matrix organization's automatic division method pixel-by-pixel

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611085781.4A CN106780498A (en) 2016-11-30 2016-11-30 Based on point depth convolutional network epithelium and matrix organization's automatic division method pixel-by-pixel

Publications (1)

Publication Number Publication Date
CN106780498A true CN106780498A (en) 2017-05-31

Family

ID=58914891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611085781.4A Pending CN106780498A (en) 2016-11-30 2016-11-30 Based on point depth convolutional network epithelium and matrix organization's automatic division method pixel-by-pixel

Country Status (1)

Country Link
CN (1) CN106780498A (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107293289A (en) * 2017-06-13 2017-10-24 南京医科大学 A kind of speech production method that confrontation network is generated based on depth convolution
CN108197606A (en) * 2018-01-31 2018-06-22 浙江大学 The recognition methods of abnormal cell in a kind of pathological section based on multiple dimensioned expansion convolution
CN108447062A (en) * 2018-02-01 2018-08-24 浙江大学 A kind of dividing method of the unconventional cell of pathological section based on multiple dimensioned mixing parted pattern
CN108492297A (en) * 2017-12-25 2018-09-04 重庆理工大学 The MRI brain tumors positioning for cascading convolutional network based on depth and dividing method in tumor
CN108510485A (en) * 2018-03-27 2018-09-07 福州大学 It is a kind of based on convolutional neural networks without reference image method for evaluating quality
CN108629768A (en) * 2018-04-29 2018-10-09 山东省计算中心(国家超级计算济南中心) The dividing method of epithelial tissue in a kind of oesophagus pathological image
CN108647732A (en) * 2018-05-14 2018-10-12 北京邮电大学 A kind of pathological image sorting technique and device based on deep neural network
CN108766555A (en) * 2018-04-08 2018-11-06 深圳大学 The computer diagnosis method and system of Pancreatic Neuroendocrine Tumors grade malignancy
CN109003274A (en) * 2018-07-27 2018-12-14 广州大学 A kind of diagnostic method, device and readable storage medium storing program for executing for distinguishing pulmonary tuberculosis and tumour
CN109325495A (en) * 2018-09-21 2019-02-12 南京邮电大学 A kind of crop image segmentation system and method based on deep neural network modeling
CN109781732A (en) * 2019-03-08 2019-05-21 江西憶源多媒体科技有限公司 A kind of small analyte detection and the method for differential counting
CN110110634A (en) * 2019-04-28 2019-08-09 南通大学 Pathological image polychromatophilia color separation method based on deep learning
CN110598781A (en) * 2019-09-05 2019-12-20 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN110659692A (en) * 2019-09-26 2020-01-07 重庆大学 Pathological image automatic labeling method based on reinforcement learning and deep neural network
CN110796661A (en) * 2018-08-01 2020-02-14 华中科技大学 Fungal microscopic image segmentation detection method and system based on convolutional neural network
CN111325103A (en) * 2020-01-21 2020-06-23 华南师范大学 Cell labeling system and method
CN111798428A (en) * 2020-07-03 2020-10-20 南京信息工程大学 Automatic segmentation method for multiple tissues of skin pathological image
CN112990214A (en) * 2021-02-20 2021-06-18 南京信息工程大学 Medical image feature recognition prediction model
CN113052124A (en) * 2021-04-09 2021-06-29 济南博观智能科技有限公司 Identification method and device for fogging scene and computer-readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150213302A1 (en) * 2014-01-30 2015-07-30 Case Western Reserve University Automatic Detection Of Mitosis Using Handcrafted And Convolutional Neural Network Features
CN106022384A (en) * 2016-05-27 2016-10-12 中国人民解放军信息工程大学 Image attention semantic target segmentation method based on fMRI visual function data DeconvNet

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150213302A1 (en) * 2014-01-30 2015-07-30 Case Western Reserve University Automatic Detection Of Mitosis Using Handcrafted And Convolutional Neural Network Features
CN106022384A (en) * 2016-05-27 2016-10-12 中国人民解放军信息工程大学 Image attention semantic target segmentation method based on fMRI visual function data DeconvNet

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HAI SU等: ""region segmentation in histopathological breast cancer images using deep convolutional neural network"", 《IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING》 *
JUN XU 等: ""A Deep Convolutional Neural Network for segmenting and classifying epithelial and stromal regions in histopathological images"", 《NEUROCOMPUTING》 *
龚磊等: ""基于多特征描述的乳腺癌肿瘤病理自动分级"", 《计算机应用》 *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107293289A (en) * 2017-06-13 2017-10-24 南京医科大学 A kind of speech production method that confrontation network is generated based on depth convolution
CN107293289B (en) * 2017-06-13 2020-05-29 南京医科大学 Speech generation method for generating confrontation network based on deep convolution
CN108492297A (en) * 2017-12-25 2018-09-04 重庆理工大学 The MRI brain tumors positioning for cascading convolutional network based on depth and dividing method in tumor
CN108197606A (en) * 2018-01-31 2018-06-22 浙江大学 The recognition methods of abnormal cell in a kind of pathological section based on multiple dimensioned expansion convolution
CN108447062A (en) * 2018-02-01 2018-08-24 浙江大学 A kind of dividing method of the unconventional cell of pathological section based on multiple dimensioned mixing parted pattern
CN108447062B (en) * 2018-02-01 2021-04-20 浙江大学 Pathological section unconventional cell segmentation method based on multi-scale mixed segmentation model
CN108510485B (en) * 2018-03-27 2022-04-05 福州大学 Non-reference image quality evaluation method based on convolutional neural network
CN108510485A (en) * 2018-03-27 2018-09-07 福州大学 It is a kind of based on convolutional neural networks without reference image method for evaluating quality
CN108766555A (en) * 2018-04-08 2018-11-06 深圳大学 The computer diagnosis method and system of Pancreatic Neuroendocrine Tumors grade malignancy
CN108629768B (en) * 2018-04-29 2022-01-21 山东省计算中心(国家超级计算济南中心) Method for segmenting epithelial tissue in esophageal pathology image
CN108629768A (en) * 2018-04-29 2018-10-09 山东省计算中心(国家超级计算济南中心) The dividing method of epithelial tissue in a kind of oesophagus pathological image
CN108647732A (en) * 2018-05-14 2018-10-12 北京邮电大学 A kind of pathological image sorting technique and device based on deep neural network
CN109003274A (en) * 2018-07-27 2018-12-14 广州大学 A kind of diagnostic method, device and readable storage medium storing program for executing for distinguishing pulmonary tuberculosis and tumour
CN110796661A (en) * 2018-08-01 2020-02-14 华中科技大学 Fungal microscopic image segmentation detection method and system based on convolutional neural network
CN110796661B (en) * 2018-08-01 2022-05-31 华中科技大学 Fungal microscopic image segmentation detection method and system based on convolutional neural network
CN109325495B (en) * 2018-09-21 2022-04-26 南京邮电大学 Crop image segmentation system and method based on deep neural network modeling
CN109325495A (en) * 2018-09-21 2019-02-12 南京邮电大学 A kind of crop image segmentation system and method based on deep neural network modeling
CN109781732A (en) * 2019-03-08 2019-05-21 江西憶源多媒体科技有限公司 A kind of small analyte detection and the method for differential counting
CN110110634A (en) * 2019-04-28 2019-08-09 南通大学 Pathological image polychromatophilia color separation method based on deep learning
CN110110634B (en) * 2019-04-28 2023-04-07 南通大学 Pathological image multi-staining separation method based on deep learning
CN110598781A (en) * 2019-09-05 2019-12-20 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN110659692A (en) * 2019-09-26 2020-01-07 重庆大学 Pathological image automatic labeling method based on reinforcement learning and deep neural network
CN111325103A (en) * 2020-01-21 2020-06-23 华南师范大学 Cell labeling system and method
CN111325103B (en) * 2020-01-21 2020-11-03 华南师范大学 Cell labeling system and method
CN111798428A (en) * 2020-07-03 2020-10-20 南京信息工程大学 Automatic segmentation method for multiple tissues of skin pathological image
CN111798428B (en) * 2020-07-03 2023-05-30 南京信息工程大学 Automatic segmentation method for multiple tissues of skin pathology image
CN112990214A (en) * 2021-02-20 2021-06-18 南京信息工程大学 Medical image feature recognition prediction model
CN113052124A (en) * 2021-04-09 2021-06-29 济南博观智能科技有限公司 Identification method and device for fogging scene and computer-readable storage medium

Similar Documents

Publication Publication Date Title
CN106780498A (en) Based on point depth convolutional network epithelium and matrix organization's automatic division method pixel-by-pixel
CN108765408B (en) Method for constructing cancer pathological image virtual disease case library and multi-scale cancer detection system based on convolutional neural network
Silva-Rodríguez et al. Going deeper through the Gleason scoring scale: An automatic end-to-end system for histology prostate grading and cribriform pattern detection
CN110428432B (en) Deep neural network algorithm for automatically segmenting colon gland image
Wan et al. Robust nuclei segmentation in histopathology using ASPPU-Net and boundary refinement
Pan et al. Mitosis detection techniques in H&E stained breast cancer pathological images: A comprehensive review
CN107977671A (en) A kind of tongue picture sorting technique based on multitask convolutional neural networks
CN109635875A (en) A kind of end-to-end network interface detection method based on deep learning
CN107680678A (en) Based on multiple dimensioned convolutional neural networks Thyroid ultrasound image tubercle auto-check system
CN104346617B (en) A kind of cell detection method based on sliding window and depth structure extraction feature
CN112215117A (en) Abnormal cell identification method and system based on cervical cytology image
CN110942446A (en) Pulmonary nodule automatic detection method based on CT image
CN107274386A (en) A kind of cervical cell liquid-based smear artificial intelligence aids in diagosis system
CN109389129A (en) A kind of image processing method, electronic equipment and storage medium
CN109086836A (en) A kind of automatic screening device of cancer of the esophagus pathological image and its discriminating method based on convolutional neural networks
CN106096654A (en) A kind of cell atypia automatic grading method tactful based on degree of depth study and combination
CN106778687A (en) Method for viewing points detecting based on local evaluation and global optimization
CN109635846A (en) A kind of multiclass medical image judgment method and system
CN107665492A (en) Colon and rectum panorama numeral pathological image tissue segmentation methods based on depth network
CN111415352B (en) Cancer metastasis panoramic pathological section analysis method based on deep cascade network
CN108053398A (en) A kind of melanoma automatic testing method of semi-supervised feature learning
CN106157279A (en) Eye fundus image lesion detection method based on morphological segment
CN112263217B (en) Improved convolutional neural network-based non-melanoma skin cancer pathological image lesion area detection method
CN110188767A (en) Keratonosus image sequence feature extraction and classifying method and device based on deep neural network
CN104299242A (en) Fluorescence angiography fundus image extraction method based on NGC-ACM

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170531