CA2948499C - System and method for classifying and segmenting microscopy images with deep multiple instance learning - Google Patents


Info

Publication number
CA2948499C
CA2948499C (application CA2948499A)
Authority
CA
Canada
Prior art keywords
cnn
microscopy images
neural network
layer
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CA2948499A
Other languages
French (fr)
Other versions
CA2948499A1 (en)
Inventor
Oren Kraus
Brendan Frey
Jimmy Ba
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dewpoint Therapeutics Inc
Original Assignee
Phenomic Ai Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Phenomic Ai Inc filed Critical Phenomic Ai Inc
Priority to CA2948499A
Publication of CA2948499A1
Application granted
Publication of CA2948499C
Legal status: Active (current)
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Systems and methods that receive microscopy images as input, extract features, and apply layers of processing units to compute one or more sets of cellular phenotype features, corresponding to cellular densities and/or fluorescence measured under different conditions. The system is a neural network architecture having a convolutional neural network followed by a multiple instance learning (MIL) pooling layer. The system does not necessarily require any segmentation steps or per-cell labels, as the convolutional neural network can be trained and tested directly on raw microscopy images in real time. The system computes class-specific feature maps for every phenotype variable using a fully convolutional neural network and uses multiple instance learning to aggregate across these class-specific feature maps. The system produces predictions for one or more reference cellular phenotype variables based on microscopy images with populations of cells.

Description

SYSTEM AND METHOD FOR CLASSIFYING AND SEGMENTING MICROSCOPY IMAGES WITH DEEP MULTIPLE INSTANCE LEARNING
TECHNICAL FIELD
[0001] The following relates generally to microscopy imaging and more specifically to the classification and segmentation of microscopy images utilizing deep learning with a multiple instance learning pooling layer.

BACKGROUND

[0002] High-content screening (HCS) technologies that combine automated fluorescence microscopy with high-throughput biotechnology have become powerful systems for studying cell biology and for drug screening. However, these systems can produce more than 10^5 images per day, making their success dependent on automated image analysis. Traditional analysis pipelines heavily rely on hand-tuning the segmentation, feature extraction and classification steps for each assay. Although comprehensive tools have become available, they are typically optimized for mammalian cells and not directly applicable to model organisms such as yeast and Caenorhabditis elegans. Researchers studying these organisms often manually classify cellular patterns by eye.
[0003] Recent advances in deep learning indicate that deep neural networks trained end-to-end can learn powerful feature representations and outperform classifiers built on top of extracted features. Although object recognition models, particularly convolutional networks, have been successfully trained using images with one or a few objects of interest at the center of the image, microscopy images often contain hundreds of cells with a phenotype of interest, as well as outliers.
[0004] Fully convolutional neural networks (FCNNs) have been applied to natural images for segmentation tasks using ground truth pixel-level labels. These networks perform segmentation for each output category instead of producing a single prediction vector. For microscopy data, convolutional sparse coding blocks have also been used to extract regions of interest from spiking neurons and slices of cortical tissue without supervision. Other approaches utilize FCNNs to perform segmentation using weak labels. However, while these techniques aim to segment or localize regions of interest within full resolution images, they do not classify populations of objects in images of arbitrary size based only on training with weak labels. These techniques suffer because dense pixel-level ground truth labels are expensive to generate and arbitrary, especially for niche datasets such as microscopy images.

[0005] Thus, there is a lack of automated cellular classification systems using full resolution images. Applying deep neural networks to microscopy screens has been challenging due to the lack of training data specific to cells; i.e., a lack of large datasets labeled at the single cell level.

SUMMARY

[0006] In one aspect, a neural network architecture for classifying microscopy images representing one or more cell classes is provided, the neural network architecture comprising: a convolutional neural network (CNN) comprising: an input layer for inputting the microscopy images; one or more hidden layers of processing nodes, each processing node comprising a processor configured to apply an activation function and a weight to its inputs, a first of the hidden convolutional layers receiving an output of the input layer and each subsequent hidden layer receiving an output of a prior hidden layer, each hidden layer comprising a convolutional layer; and a hidden layer to generate one or more class-specific feature maps for cellular features of one or more cell classes present in the microscopy images; and a global pooling layer configured to receive the feature maps for cellular features and to apply a multiple instance learning pooling function to produce a prediction for each cell class present in the microscopy images.
[0007] In another aspect, a method for classifying microscopy images representing one or more cell classes using a neural network is provided, the method comprising: applying a convolutional neural network (CNN) to the microscopy images, the CNN comprising: an input layer for inputting the microscopy images; one or more hidden layers of processing nodes, each processing node comprising a processor configured to apply an activation function and a weight to its inputs, a first of the hidden convolutional layers receiving an output of the input layer and each subsequent hidden layer receiving an output of a prior hidden layer, each hidden layer comprising a convolutional layer; and a hidden layer to generate one or more class-specific feature maps for cellular features of one or more cell classes present in the microscopy images; and applying a global pooling layer to the feature maps for cellular features to apply a multiple instance learning pooling function to produce a prediction for each cell class present in the microscopy images.
[0008] These and other aspects are contemplated and described herein. It will be appreciated that the foregoing summary sets out representative aspects of convolutional neural networks and microscopy imaging systems and methods for the classification and segmentation of microscopy images utilizing deep multiple instance learning to assist skilled readers in understanding the following detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] The features of the invention will become more apparent in the following detailed description in which reference is made to the appended drawings wherein:
[0010] Fig. 1 is a system for classifying and segmenting microscopy images;
[0011] Fig. 2 is an exemplary CNN and MIL pooling layer in accordance with the system for classifying and segmenting microscopy images;
[0012] Fig. 3 shows MIL pooling functions with class-specific feature map activations for a drug screen data sample;
[0013] Fig. 4 shows class feature map probabilities for a test sample labeled as "5" overlaid onto the input image;
[0014] Fig. 5 shows a benchmarking dataset of MCF-7 breast cancer cells;
[0015] Fig. 6 shows a yeast protein localization screen; and
[0016] Fig. 7 shows localizing cells with Jacobian maps generated for an image with transient, cell cycle dependent protein localizations.
DETAILED DESCRIPTION
[0017] Embodiments will now be described with reference to the figures. For simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the Figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practised without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. Also, the description is not to be considered as limiting the scope of the embodiments described herein.
[0018] Various terms used throughout the present description may be read and understood as follows, unless the context indicates otherwise: "or" as used throughout is inclusive, as though written "and/or"; singular articles and pronouns as used throughout include their plural forms, and vice versa; similarly, gendered pronouns include their counterpart pronouns so that pronouns should not be understood as limiting anything described herein to use, implementation, performance, etc. by a single gender; "exemplary" should be understood as "illustrative" or "exemplifying" and not necessarily as "preferred" over other embodiments. Further definitions for terms may be set out herein; these may apply to prior and subsequent instances of those terms, as will be understood from a reading of the present description.
[0019] Any module, unit, component, server, computer, terminal, engine or device exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto. Further, unless the context clearly indicates otherwise, any processor or controller set out herein may be implemented as a singular processor or as a plurality of processors. The plurality of processors may be arrayed or distributed, and any processing function referred to herein may be carried out by one or by a plurality of processors, even though a single processor may be exemplified. Any method, application or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer readable media and executed by the one or more processors.
[0020] The following provides a system and method for classifying microscopy images with deep multiple instance learning (MIL) without the prior need of segmentation. The system described herein is also capable of performing segmentation. The method described herein allows the provided system to learn instance- and bag-level classifiers for full resolution microscopy images without ever having to segment or label single cells.
[0021] In particular, the system comprises a convolutional neural network (CNN) having an output linked to a pooling layer configured using MIL (alternatively described herein as a "convolutional MIL network"). The convolutional MIL network described herein uses MIL to simultaneously classify and segment microscopy images with populations of cells. In an embodiment, the CNN outputs class-specific feature maps representing the probabilities of the classes for different locations in the input image and the MIL pooling layer is applied to these feature maps. The system can be trained using only whole microscopy images with image-level labels, without requiring any segmentation steps. Processed images can be of arbitrary size and contain varying numbers of cells. Individual cells can be classified by passing segmented cells through the trained CNN or by mapping the probabilities in class-specific feature maps back to the input space.
[0022] The systems and methods described herein relate, in part, to the problem of classifying and segmenting microscopy images using only whole image level annotations. This problem has implications in several industrial categories under the broad umbrellas of 'medicine' and 'imaging', including cellular microscopy, molecular diagnostics and pharmaceutical development.
[0023] MIL deals with problems for which labels only exist for sets of data points. In MIL, sets of data points are typically referred to as bags and specific data points are referred to as instances. A commonly used assumption for binary labels is that a bag is considered positive if at least one instance within the bag is positive. Representative functions for mapping the instance space to the bag space include Noisy-OR, log-sum-exponential (LSE), generalized mean (GM) and the integrated segmentation and recognition (ISR) model.
[0024] In an embodiment of the present system, the MIL pooling layer implements a pooling function defined herein as "Noisy-AND". Unlike the aforementioned mapping functions, the Noisy-AND pooling function is robust to outliers and large numbers of instances.
[0025] Referring now to Fig. 1, shown therein is a system 100 for classifying and segmenting microscopy images, comprising a convolutional neural network 101, which is alternatively referred to herein as a "convolutional network" or "CNN", an MIL pooling layer 109 and a memory 106 communicatively linked to the CNN 101. The CNN comprises an input layer 103 that takes as input a set of microscopy images potentially depicting one or more cell classes and exhibiting cellular densities and fluorescence related to protein localization, one or more hidden layers 105 for processing the images, and an output layer 107 that produces feature maps for every output category (i.e., cell class) to mimic a specific cell phenotype and/or localized protein. In embodiments, the CNN may provide other outputs in addition to feature maps. An MIL pooling layer 109 is applied to the feature maps to generate predictions of the cell classes present in the image.
[0026] In Fig. 1, an illustrated embodiment of the system 100 in which the CNN 101 has a plurality of hidden layers 105 (i.e., deep) is shown. Each hidden layer 105 of the CNN 101 generally comprises a convolutional layer 111 followed optionally by a pooling layer 113. The pooling layer 113 of a layer i will be connected to a convolutional layer 111 of a layer i+1. These pooling layers 113 should not be confused with the MIL pooling layer 109 of the system 100 as a whole.
[0027] Each of the convolutional layers 111 and pooling layers 113 comprises a plurality of processing units. Each processing unit may be considered a processing "node" of the network and one or more nodes may be implemented by processing hardware, such as a single or multi-core processor and/or graphics processing unit(s) (GPU(s)). Further, it will be understood that each processing unit may be considered to be associated with a hidden unit or an input unit of the neural network for a hidden layer or an input layer, respectively. The use of large (many hidden variables) and deep (multiple hidden layers) neural networks may improve the predictive performance of the CNN compared to other systems.
[0028] Each node is configured with an activation function (acting as a feature detector) and a weighting. The activation functions are fixed for each of the processing nodes and the weighting is stored in the memory 106, which is linked to each such node. The weights are determined during a training stage of the CNN 101 and stored in the memory 106.
[0029] In embodiments, inputs to the input layer 103 of the CNN 101 are microscopy images that are associated or associable with microscopy information, such as cellular densities, size, cellular division, features derived from fluorescence detection, and features providing extra information (e.g. ultrastructure, protein-protein interactions and cell cycle), while outputs at the output layer 107 of the CNN 101 are feature maps. The MIL pooling layer 109 generates predictions of cell classes present in the images based on the feature maps.
[0030] The memory 106 may comprise a database for storing activations and learned weights for each feature detector, as well as for storing datasets of microscopy information and extra information and optionally for storing outputs from the CNN 101 or MIL pooling layer 109. The microscopy information may provide a training set comprising training data. The training data may, for example, be used for training the CNN 101 to generate feature maps, in which visually assigning annotations from a known screen may be provided; specifically, optionally labelling proteins that are annotated to localize to more than one sub-cellular compartment. The memory 106 may further store a validation set comprising validation data.
[0031] Generally, during the training stage, the CNN 101 learns optimized weights for each processing unit. After learning, the optimized weight configuration can then be applied to test data (and to the validation data prior to utilizing the neural network for test data). Stochastic gradient descent can be used to train feedforward neural networks. The learning process (backpropagation) involves for the most part matrix multiplications, which makes it well suited to speed-up using GPUs. Furthermore, the dropout technique may be utilized to prevent overfitting.
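The training procedure referred to above (stochastic gradient descent with dropout) can be illustrated with a minimal sketch. This is not the patented implementation: the single logistic unit, learning rate, dropout rate, and toy data below are assumptions chosen only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_step_with_dropout(w, x, t, lr=0.1, drop_p=0.5):
    """One stochastic gradient descent step for a single logistic unit,
    with inverted dropout applied to the input features (illustrative)."""
    mask = (rng.random(x.shape) > drop_p) / (1.0 - drop_p)
    xd = x * mask                       # randomly silenced features
    p = sigmoid(xd @ w)                 # forward pass
    grad = (p - t) * xd                 # gradient of the cross-entropy loss
    return w - lr * grad                # descend along the gradient

# toy usage: learn to separate two 1-D clusters (bias feature + one input)
w = np.zeros(2)
for _ in range(200):
    t = int(rng.integers(0, 2))
    x = np.array([1.0, 2.0 if t else -2.0]) + 0.1 * rng.standard_normal(2)
    w = sgd_step_with_dropout(w, x, t)
```

Dropout is applied only during the training steps; at test time the full feature vector is used, which is what the inverted scaling of the mask accounts for.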
[0032] The system may further comprise a computing device 115 communicatively linked to the convolutional MIL network for controlling operations carried out in the convolutional MIL network. The computing device 115 may comprise further input and output devices, such as input peripherals (such as a computer mouse or keyboard), and/or a display. Cellular feature maps representing the probabilities of the classes for different locations in the input image, and/or predictions generated by the MIL pooling layer 109, may be visualized and displayed to a user via the display.
[0033] Referring now to Fig. 2, an exemplary convolutional MIL network is shown. Assuming that the total number of classes is N_c for a full resolution image I, each class i may be treated as a separate binary classification problem with label t_i ∈ {0,1}. Under this exemplary MIL formulation, one is given a bag of N instances denoted as x = {x_1, ..., x_N}, where x_j ∈ R^D is the feature vector for each instance. The class labels t_i are associated with the entire bag instead of each instance. A binary instance classifier p(t_i = 1 | x_j) is used to generate predictions p_ij across the instances in a bag. The instance predictions {p_ij} are combined through an aggregate function g(·), e.g. Noisy-OR, to map the set of instance predictions to the probability of the final bag label p(t_i = 1 | x_1, ..., x_N). In an exemplary CNN, each activation in the feature map is computed through the same set of filter weights convolved across the input image. The pooling layers then combine activations of feature maps in convolutional layers. If class-specific feature maps are treated as bags of instances, the classical approaches in MIL can be generalized to global pooling layers over these feature maps.
[0034] The MIL pooling layer in a convolutional MIL network may be formulated as a global pooling layer over a class-specific feature map for class i, referred to as the bag p_i. Assume that the i-th class-specific convolutional layer in a CNN computes a mapping directly from input images to sets of binary instance predictions I → {p_i1, ..., p_iN}. It first outputs the logit values z_ij in the feature map corresponding to instance j in the bag i. The feature-level probability of an instance j belonging to class i is defined as p_ij, where p_ij = σ(z_ij) and σ is the sigmoid function. The image-level class prediction is obtained by applying the global pooling function g(·) over all elements {p_ij}. The global pooling function g(·) maps the instance space probabilities to the bag space such that the bag-level probability for class i is defined by

P_i = g(p_i1, p_i2, ..., p_iN).    (1)

[0035] The global pooling function g(·) essentially combines the instance probabilities from each class-specific feature map p_i into a single probability. This reduction allows training and evaluation of the convolutional MIL network on inputs of arbitrary size.
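As a sketch of this formulation, the following hypothetical helper treats a class-specific feature map of logits z_ij as a bag, converts each entry to an instance probability p_ij = σ(z_ij), and reduces the bag with a pluggable pooling function g(·). The helper name, the example feature map, and the two pooling choices are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bag_probability(feature_map_logits, pool):
    """Map a class-specific feature map (logits z_ij, any spatial size)
    to a single bag-level probability P_i = g({p_ij})."""
    p = sigmoid(np.asarray(feature_map_logits, dtype=float).ravel())
    return pool(p)

# two illustrative pooling choices
noisy_or = lambda p: 1.0 - np.prod(1.0 - p)   # fires on any strong instance
mean_pool = lambda p: p.mean()                # proportion-based pooling

fmap = np.array([[-4.0, -4.0],
                 [ 3.0, -4.0]])               # one strong positive instance
```

Because the feature map is flattened before pooling, the same call works for inputs of arbitrary spatial size, mirroring the reduction described in paragraph [0035].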
[0036] While the MIL pooling layer learns the relationship between instances of the same class, the co-occurrence statistics of instances from different classes within the bag could also be informative for predicting the bag label. An extension of the convolutional MIL network is provided to learn relationships between classes by adding an additional fully connected layer 117 following the MIL pooling layer. This layer 117 can use either softmax or sigmoid activations for multi-class or multi-label problems, respectively. The softmax output from this layer 117 for each class i is defined as y_i. A joint cross-entropy objective function is formulated at both the MIL pooling layer and the additional fully connected layer, defined by

L = Σ_i −(log p(t_i | P_i) + log p(t_i | y_i)),    (2)

where p(t_i | P_i) is the binary class prediction from the MIL layer, with p(t_i | P_i) = P_i^{t_i} (1 − P_i)^{(1 − t_i)}, and p(t_i | y_i) is either the binary or the multi-class prediction from the fully connected layer.
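The joint objective of equation (2) can be sketched numerically as follows; the helper names and the toy probabilities in the usage are assumptions for illustration only.

```python
import numpy as np

def bernoulli_ll(t, p, eps=1e-12):
    """log p(t | p) for a binary label t under probability p,
    i.e. log of p^t (1 - p)^(1 - t), clipped for numerical safety."""
    p = np.clip(p, eps, 1.0 - eps)
    return t * np.log(p) + (1 - t) * np.log(1 - p)

def joint_cross_entropy(t, P_mil, y_fc):
    """Sum over classes of the negative log-likelihood under both the
    MIL-layer prediction P_i and the fully connected layer's prediction
    y_i — a sketch of equation (2) for the binary-label case."""
    t, P_mil, y_fc = (np.asarray(a, dtype=float) for a in (t, P_mil, y_fc))
    return float(-np.sum(bernoulli_ll(t, P_mil) + bernoulli_ll(t, y_fc)))
```

Confident agreement between both heads and the labels drives the objective toward zero, while a confident wrong prediction from either head incurs a large penalty.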
[0037] Prior MIL formulations are based on the assumption that at least one instance needs to be positive for the bag level to be positive. However, due to heterogeneity within cellular populations, imaging artifacts, and the large number of potential instances in an image, it cannot be assumed that images with a negative label do not contain any instances of the specific phenotype. A more reasonable assumption is that bag labels are determined by a certain proportion of instances being present.
[0038] In an embodiment, bag predictions are expressed as the geometric or arithmetic mean of instances. This may address some of the challenges associated with imaging cellular populations and represents a generalized MIL problem. Prior generalizations for MIL are based on the assumption that all instances collectively contribute to the bag label. However, for microscopy images, it cannot be assumed that all bags require the same proportion of instances to be positive.
[0039] In another embodiment, the use of several different global pooling functions g(·) in the MIL pooling layer may be employed, where j indexes the instance within a bag. Previously proposed global pooling functions for MIL have been designed as differentiable approximations to the max function in order to satisfy the standard MIL assumption:
g({p_j}) = 1 − ∏_j (1 − p_j)    (Noisy-OR),
g({p_j}) = [Σ_j p_j/(1 − p_j)] / [1 + Σ_j p_j/(1 − p_j)]    (ISR),
g({p_j}) = ( (1/|j|) Σ_j p_j^r )^{1/r}    (Generalized mean),
g({p_j}) = (1/r) log( (1/|j|) Σ_j exp(r·p_j) )    (LSE).
[0040] Noisy-OR and ISR can be sensitive to outliers, making them challenging to work with for microscopy datasets (as shown in Fig. 3). LSE and GM both have a parameter r that controls their sharpness. As r increases, the functions get closer to representing the max of the instances. However, the present system utilizes a lowered r to allow more instances in the feature maps to contribute to the pooled value.
[0041] Preferably, a pooling function defined herein as the Noisy-AND pooling function is used. In Noisy-AND, it may be assumed that a bag is positive if the number of positive instances in the bag surpasses a certain predefined threshold. The Noisy-AND pooling function is defined as:
o-(a(p--b,))¨o-(¨abi) 21 P,= g,(tPõ ) ______________ (3) o-(a(1¨h,))¨o-(¨ab,) 22 where pi ¨ __ 1 [0042] The Noisy-AND pooling function is designed to activate a bag level probability P, once 2 the mean of the instance level probabilities surpasses a certain threshold. This behaviour 3 mimics the logical AND function in the probabilistic domain. The parameters a and bi control 4 the shape of the activation function. b, is a set of parameters learned during training and represents an adaptable soft threshold for each class i. a is a fixed parameter that controls the 6 slope of the activation function. The terms cr(-ab,) and G(aO-4)) are included to normalized 7 P, to [OA for hi in [0,11 and a >0. Fig. 3 shows plots of relevant pooling functions. The top 8 exemplary graph shows pooling function activations by ratio of feature map activated ()5Ti). The 9 bottom exemplary graph shows activation functions learned by Noisy-AND
a10 (nand_a = 10.0) for different classes of an exemplary benchmarking dataset of breast cancer cells. For all of the bag level evaluations used in this example, the Noisy-AND pooling function performs best. The Noisy-AND pooling function accommodates variability related to different phenotypes by learning an adaptive threshold for every class.
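Equation (3) can be sketched directly in NumPy. This is an illustrative sketch only: in the trained network b_i is learned per class, whereas here it is fixed as a function argument.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def noisy_and(p, a=10.0, b=0.5):
    # p: instance probabilities p_ij for one class across a feature map
    # a: fixed slope parameter; b: soft threshold (learned per class in
    # the actual network, fixed here for illustration)
    p_bar = np.mean(p)
    num = sigmoid(a * (p_bar - b)) - sigmoid(-a * b)
    den = sigmoid(a * (1.0 - b)) - sigmoid(-a * b)
    return num / den
```

The normalization terms guarantee the two extremes: a feature map with no activated instances (p̄ = 0) pools to exactly 0, and a fully activated map (p̄ = 1) pools to exactly 1, with a smooth soft-threshold transition at p̄ = b in between.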
[0043] In another embodiment, the convolutional MIL network is used to localize regions of the full resolution input images that are responsible for activating the class specific feature maps. This extension may be particularly useful for researchers conducting HCS experiments who are interested in obtaining statistics from single cell measurements of their screens. The pre-softmax activations of specific output nodes are back-propagated through the classification network to generate Jacobian maps with respect to specific class predictions. The following general recursive non-linear back-propagation process is defined for computing a backward activation â for each layer, analogous to the forward propagation:
â^(l−1) = f((∂z^(l)/∂a^(l−1))ᵀ â^(l))    (4)

where f(x) = max(0, x), â_ij^(MIL) = P_i · p_ij, a^(l) is the hidden activations in layer l, and z^(l) is the pre-nonlinearity activations in layer l.

[0044] To start the non-linear back-propagation (â) from the MIL layer, the sigmoidal activations for the class i specific feature maps {p_ij} are multiplied by the pooling activation P_i for each class. Applying the ReLU activation function to the partial derivatives during back-propagation generates Jacobian maps that are sharper and more localized to relevant objects in the input. To generate segmentation masks, the sum of the Jacobian maps is thresholded along the input channels. To improve the localization of cellular regions, loopy belief propagation may be employed in an MRF to de-noise the thresholded Jacobian maps.
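The recursion of equation (4) can be illustrated for a tiny fully connected network. This is a toy sketch under stated assumptions (two dense layers, random weights; the actual network is convolutional): at each step the backward signal is multiplied by the transposed layer weights and then rectified, rather than propagating raw gradients.

```python
import numpy as np

relu = lambda x: np.maximum(0.0, x)

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 5))   # layer-1 weights (input dim 8 -> 5)
W2 = rng.standard_normal((5, 3))   # layer-2 weights (5 -> 3 outputs)

x = rng.standard_normal(8)
a1 = relu(x @ W1)                  # forward hidden activations a^(1)
z2 = a1 @ W2                       # pre-nonlinearity output z^(2)

# backward "activations": transposed-weight multiply followed by ReLU
# at every layer, per equation (4)
a_hat2 = relu(z2)                  # start from rectified output activations
a_hat1 = relu(W2 @ a_hat2)         # â^(1) = f((dz^(2)/da^(1))^T â^(2))
a_hat0 = relu(W1 @ a_hat1)         # rectified Jacobian-map analogue at input
```

Because every backward step is rectified, the resulting input-space map is non-negative and concentrates on inputs that positively contributed to the chosen output, which is why the resulting Jacobian maps are sharper than plain gradients.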
[0045] The CNN is designed such that an input the size of a typical cropped single cell produces output feature maps of size 1x1. The same network can be convolved across larger images of arbitrary size to produce output feature maps representing probabilities of target labels for different locations in the input image. Training such a CNN end-to-end allows the CNN to work on vastly different datasets.
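The relationship between input size and output feature map size can be sketched with simple sliding-window arithmetic. The kernel sizes below follow the exemplary architecture of paragraph [0047], but the strides and the absence of padding are assumptions (the description does not list them), so the exact numbers below are illustrative rather than the patented network's actual dimensions.

```python
def out_size(n, kernel, stride):
    # output length of a valid (unpadded) sliding window
    return (n - kernel) // stride + 1

# (kernel, stride) per spatial layer; strides are assumed:
# convolutions stride 1, pooling layers downsample by 2
LAYERS = [(3, 1),           # ave_pool0
          (3, 1), (3, 1),   # conv1, conv2
          (3, 2),           # pool1
          (5, 1), (3, 2),   # conv3, pool2
          (3, 1), (3, 2),   # conv4, pool3
          (3, 1), (3, 2),   # conv5, pool4
          (1, 1), (1, 1)]   # conv6, conv7

def map_size(n):
    # spatial size of the class-specific feature maps for an n x n input
    for k, s in LAYERS:
        n = out_size(n, k, s)
    return n
```

Under these assumptions, a 69x69 crop (roughly single-cell sized) collapses to a 1x1 output, while a 512x512 image yields a 28x28 grid of location-wise class probabilities from the very same weights.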
[0046] An exemplary embodiment is now described. In validation tests, the following CNN was trained using two exemplary datasets while keeping the architecture and number of parameters constant.
[0047] The basic convolutional MIL network architecture includes the following layers: ave_pool0_3x3, conv1_3x3_32, conv2_3x3_64, pool1_3x3, conv3_5x5_64, pool2_3x3, conv4_3x3_128, pool3_3x3, conv5_3x3_128, pool4_3x3, conv6_1x1_1000, conv7_1x1_N_class, MIL_pool, FC_N_class (as shown in Fig. 2). However, it will be appreciated that the convolutional MIL network architecture may not be limited to this architecture.
[0048] A global pooling function g(·) is used as the activation function in the MIL pooling layer. g(·) transforms the output feature maps z_i into a vector with a single prediction P_i for each class i. In an exemplary embodiment, all of the above-mentioned pooling functions are defined for binary categories and may be used in a multi-label setting (where each output category has a separate binary target). In another embodiment, an additional fully connected output layer may be added to the MIL_pool layer in order to learn relations between different categories. Exemplary activations include softmax activation and sigmoidal activation. In this example, both exemplary MIL activations are trained with a learning rate of 10^-3 using the Adam optimization algorithm. Slightly smaller crops of the original images may be extracted to account for variability in image sizes within the screens. The images are normalized by subtracting the mean and dividing by the standard deviation of each channel in the training sets. During training, random patches are cropped from the full resolution images and random rotations and reflections are applied to the patches. The ReLU activation may be used for the convolutional layers and, as an example, 20% dropout may be applied to the pooling layers and 50% dropout to layer conv6. In the following example data sets, the CNNs may be trained within 1-2 days on a Tesla K80 GPU using 9 GB of memory with a batch size of 16.
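The preprocessing and augmentation steps above can be sketched in NumPy. This is an illustrative sketch (the sizes and random generator are assumptions, and the rotations here are restricted to multiples of 90 degrees):

```python
import numpy as np

def normalize(images, mean, std):
    # per-channel standardization using training-set statistics
    return (images - mean) / std

def random_crop(img, size, rng):
    # crop a random size x size patch from a full-resolution image
    y = rng.integers(img.shape[0] - size + 1)
    x = rng.integers(img.shape[1] - size + 1)
    return img[y:y + size, x:x + size]

def augment(img, rng):
    # random 90-degree rotation plus random reflections; assumes a
    # square patch so rotation preserves the shape
    img = np.rot90(img, k=rng.integers(4), axes=(0, 1))
    if rng.integers(2):
        img = img[::-1]        # vertical flip
    if rng.integers(2):
        img = img[:, ::-1]     # horizontal flip
    return img
```

Normalization uses statistics computed once on the training set; cropping and flipping are re-drawn for every patch at every training step.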
[0049] Following training, an image of any size can be passed through the convolutional MIL network. This can be useful for classifying individual cropped objects or image patches. One could use a separate segmentation algorithm (such as Otsu thresholding, mixture of Gaussians, region growing, graphical models, etc.) to identify object locations, crop bounding boxes around them, and pass them through the convolutional MIL network in order to produce single cell predictions. Alternatively, the cellular regions can be identified by back-propagating errors through the network to the input space, as earlier described.
[0050] Referring now to Fig. 4, shown therein is an example image representing a first dataset used for validating the effectiveness of the present system. The convolutional MIL network is used to classify images based on the presence of a threshold number of handwritten digits. Four example feature maps are shown for feature map activations P_i (for i = 4, 5, 6 or 7) for a test sample labeled with handwritten digits including "0" and "5" overlaid onto the input image. Each image in the dataset contains 100 digits cluttered on a black background of 512 x 512 pixels. The dataset may contain nine categories (digits ∈ {1, 2, ..., 9}) and zeros may be used as distractors. To simulate the conditions in cell culture microscopy, among the 100 digits, x samples may be chosen from a single category and the remaining 100 − x samples are zeros. x is fixed for each category and is equal to 10 times the digit value of the chosen category, as shown in Fig. 4. For example, an image with label "5" contains 50 fives and 50 zeros. In the exemplary embodiment, 50 images were used per category for training and 10 images per category for testing.
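The composition rule for this cluttered-digit dataset is simple enough to state as code. A minimal sketch of the bag construction described above (the function name is illustrative):

```python
def bag_composition(label):
    # an image labeled `label` (1..9) contains 10 * label digits of that
    # class and (100 - 10 * label) zero-valued distractors
    x = 10 * label
    return {label: x, 0: 100 - x}
```

For instance, a bag labeled "1" holds only 10 ones among 90 zeros, which is why correctly classifying such images demonstrates that the MIL pooling is robust to low label-class frequency.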
[0051] In the example conducted, the CNNs trained on the cluttered handwritten digits achieved 0% test error across all classes. These error rates were achieved despite the fact that images labeled as one actually contain 90 zeros and only 10 ones. The reason the convolutional MIL network does not confuse zeros for ones in these samples is because zeros also appear in images labeled with other categories; hence the convolutional MIL network is capable of determining that the zeros constitute distractions. Another important element is that, since there are only 50 training samples per digit, the CNN only sees 500 distinct ones during training. The classic MNIST training dataset, by contrast, contains 6,000 cropped and centered samples per category. The superior test performance with fewer training samples using the MIL formulation is the result of the convolutional MIL network predictions being based on aggregating over multiple instances. The convolutional MIL network may ignore samples that are difficult to classify but still rely on easier instances to predict the overall image correctly. Because different sampling rates for each digit category may be utilized, this exemplary embodiment also shows that the convolutional MIL pooling layers are robust to different frequencies of the label class being present in the input image. In the specific image analyzed in Fig. 4, class specific feature map activations (P_i) are shown for a test sample labeled as "5" overlaid onto the input image. The convolutional MIL network successfully classifies almost all the "5"s in the image and is not sensitive to the background or distractors (i.e. zeros).
[0052] Referring now to Fig. 6, a genome wide screen of protein localization in yeast is shown. The screen contains images of 4,144 yeast strains from the yeast GFP collection, representing 71% of the yeast proteome. The images contain 2 channels, with fluorescent markers for the cytoplasm and a protein from the GFP collection, at a resolution of 1010x1335. This exemplary embodiment sampled 6% of the screen and used 2,200 whole microscopy images for training and 280 for testing. The whole images of strains were characterized into 17 localization classes based on visually assigned localization annotations from a previous screen. These labels include proteins that were annotated to localize to more than one sub-cellular compartment.
[0053] Table 1 provides the yeast dataset results on whole images. The results include the accuracy and mean classifier accuracy across 17 classes for a subset of 998 proteins annotated to localize to one sub-cellular compartment, and the mean average precision for all the proteins analyzed from the screen (2,592), including proteins that localize to multiple compartments. The "Huh" column indicates agreement with manually assigned protein localizations. The "single loc acc" and "single loc mean acc" columns indicate the accuracy and mean accuracy, respectively, across all classes for the subset of proteins that localize to a single compartment. The "full image" column indicates mean average precision on a full resolution image test set.

                        Mean average prec.      Classification
Model                   full image    Huh       single loc acc   single loc mean acc
Chong et al. (2015)     -             0.703     0.935            0.808
Noisy-AND a5            0.921         0.815     0.942            0.821
Noisy-AND a7.5          0.920         0.846     0.963            0.834
Noisy-AND a10           0.950         0.883     0.953            0.876
LSE r1                  0.925         0.817     0.945            0.828
LSE r2.5                0.925         0.829     0.953            0.859
LSE r5                  0.933         0.861     0.960            0.832
GM r1 (avg. pooling)    0.915         0.822     0.938            0.862
GM r2.5                 0.888         0.837     0.922            0.778
GM r5                   0.405         0.390     0.506            0.323
max pooling             0.125         0.133     0.346            0.083

[0054] In addition to the performance on full resolution images, yeast dataset results on segmented cells are provided in Table 2.
[0055] From a previous analysis pipeline using CellProfiler, the center of mass coordinates of segmented cells may be extracted and these coordinates used to crop single cells (for example, a crop size of 64 x 64) from the full resolution images. The dataset reflected in the results of Table 2 was annotated according to the labels from the full resolution images and likely includes mislabelled samples. Also included is performance on 6,300 manually labelled segmented cells used to train the SVM classifiers described in Chong, Y.T. et al. (2015) Yeast proteome dynamics from single cell imaging and automated analysis. Cell, 161, 1413-1424. For these predictions the output from the MIL_pool layer is used.
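The center-of-mass cropping step can be sketched as follows. A minimal sketch under stated assumptions: reflect-padding is used here so that cells near the image border still yield full-size crops (the original pipeline may handle borders differently), and the function name is illustrative.

```python
import numpy as np

def crop_cell(image, cy, cx, size=64):
    # crop a size x size patch centered at (cy, cx); reflect-pad first
    # so that border cells still produce full-size crops (an assumption,
    # not necessarily the original pipeline's border handling)
    half = size // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)),
                    mode='reflect')
    # (cy, cx) in the padded image is offset by `half`, so the window
    # starting at (cy, cx) is centered on the original coordinate
    return padded[cy:cy + size, cx:cx + size]
```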
[0056] Table 2 compares the performance of a traditional CNN trained on the segmented cells with noisy, whole image level labels against the convolutional MIL networks, evaluated on a dataset of manually labeled segmented cells. As an additional baseline, a traditional CNN trained on the manually labeled cells achieved a test accuracy of 89.8%.
[0057] This dataset may be annotated according to the labels from the full resolution images and may include mislabelled samples. For these predictions the output from the MIL pooling layer may be utilized.

                                    Mean average precision
Model                               Segmented cells      Segmented cells
                                    with noisy labels    with manual labels
CNN trained on segmented            0.855                0.742
cells with noisy labels
Noisy-AND a5                        0.701                0.750
Noisy-AND a7.5                      0.725                0.757
Noisy-AND a10                       0.701                0.738
LSE r1                              0.717                0.763
LSE r2.5                            0.715                0.762
LSE r5                              0.674                0.728
GM r1 (avg. pooling)                0.705                0.741
GM r2.5                             0.629                0.691
GM r5                               0.255                0.258
max pooling                         0.111                0.070

[0058] Referring now to Fig. 5, a breast cancer screen is shown. More specifically, shown therein is a benchmarking dataset of MCF-7 breast cancer cells available from the Broad Bioimage Benchmark Collection (image set BBBC021v1). The images contain 3 channels with fluorescent markers for DNA, actin filaments, and β-tubulin at a resolution of 1024x1280. Within this exemplary dataset, 103 treatments (compounds at active concentrations) have known effects on cells based on visual inspection and prior literature and can be classified into 12 distinct categories referred to as mechanisms of action (MOA). This exemplary embodiment sampled 15% of images from these 103 treatments to train and validate the CNN. The same proportion of the data was used to train the best architecture reported in Ljosa, V. et al. (2013) Comparison of methods for image-based profiling of cellular morphological responses to small-molecule treatment. J. Biomol. Screen., 18, 1321-1329. In total, 300 whole microscopy images were used during training and 40 for testing. Evaluation of all the images in the screen is provided, reporting the predicted treatment accuracy across the treatments.
[0059] Table 3 provides the breast cancer dataset results on whole images. The "full image" column indicates accuracy on a full resolution image test set. The "treatment" column indicates accuracy predicting treatment MOA by taking the median prediction over three experimental replicates of the screen. For these predictions the output from the last layer of the network may be used.
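The median-over-replicates rule used for the "treatment" column can be sketched directly. A minimal sketch (array shapes and the function name are illustrative):

```python
import numpy as np

def treatment_prediction(replicate_probs):
    # replicate_probs: (n_replicates, n_classes) predicted MOA class
    # probabilities, one row per experimental replicate of a treatment.
    # The per-class median over replicates suppresses a single outlier
    # replicate; the treatment MOA is the argmax of that median profile.
    median = np.median(replicate_probs, axis=0)
    return int(np.argmax(median))
```

Taking the median rather than the mean means one anomalous replicate cannot flip the treatment-level call on its own.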

Model                     full image    treatment
Ljosa et al. (2013)       -             0.94
Noisy-AND a5              0.915         0.957
Noisy-AND a7.5            0.915         0.957
Noisy-AND a10             0.958         0.971
LSE r1                    0.915         0.943
LSE r2.5                  0.888         0.871
LSE r5                    0.940         0.957
GM r1 (avg. pooling)      0.924         0.943
GM r2.5                   0.924         0.957
GM r5                     0.651         0.686
max pooling               0.452         0.429

[0060] Referring now to Fig. 7, a set of cell localizations is shown. The trained CNN described herein can be used to locate regions with cells without additional training. Segmentation maps may be generated to identify cellular regions in the input by back-propagating activations, as described above, with respect to the input space, as shown in Fig. 7. Gradients with respect to the input are referred to as Jacobian maps. To evaluate the segmentation method, the mean intersection over union (IU) between the calculated maps and segmentation maps generated using the global Otsu thresholding module in CellProfiler was computed. A mean IU of 81.2% was achieved using this method. The mask pairs with low IU were mostly segmented incorrectly by Otsu thresholding. The CNN may generate class specific segmentation maps by back-propagating individual class specific feature maps while setting the rest of the feature maps to zero.
Specifically, Fig. 7 shows the Jacobian maps generated for an image with transient, cell cycle dependent protein localizations.
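The IU metric used to score the segmentation maps above is straightforward to state as code. A minimal sketch for a pair of binary masks (the empty-mask convention below is an assumption):

```python
import numpy as np

def intersection_over_union(mask_a, mask_b):
    # IU (also called IoU / Jaccard index) between two binary masks
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # convention assumed here: two empty masks agree
    return np.logical_and(a, b).sum() / union
```

The reported mean IU of 81.2% corresponds to averaging this score over all evaluated mask pairs.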
[0061] For all of the bag level evaluations shown above, the Noisy-AND models perform best, which follows from the pooling functions plotted in Fig. 3. Setting the scaling factors (a, r) to lower values makes the pooling functions approach the mean of the feature maps, while for higher values the functions approach the max function. Since different phenotype categories may have vastly different densities of cells, neither extreme suits all classes. The Noisy-AND pooling function accommodates this variability by learning an adaptive threshold for every class, as shown in Fig. 3.
[0062] Although the foregoing has been described with reference to certain specific embodiments, various modifications thereto will be apparent to those skilled in the art without departing from the spirit and scope of the invention as outlined in the appended claims.

Claims (14)

We claim:
1. A computer-implemented neural network architecture for classifying microscopy images representing one or more cell classes, the neural network architecture comprising:
a convolutional neural network (CNN) comprising:
an input layer for inputting the microscopy images;
one or more hidden layers of processing nodes configured to apply an activation function and a weight to its inputs, a first of the hidden convolutional layers receiving an output of the input layer and each subsequent hidden layer receiving an output of a prior hidden layer, each hidden layer comprising a convolutional layer, the processing nodes being implemented by one or more processing units;

and a hidden layer to generate one or more class specific feature maps for cellular features of one or more cell classes present in the microscopy images, the class specific feature maps represent probabilities of the cell classes for various locations in the microscopy image; and a global pooling layer configured to receive the feature maps for cellular features and to apply a multiple instance learning pooling function to combine respective probabilities from each class specific feature map to produce a prediction for each cell class present in the microscopy images;
the CNN configured to generate class specific feature maps and the global pooling layer configured to produce class predictions for microscopy images of arbitrary size irrespective of images with which the CNN is trained.
2. The neural network architecture of claim 1, wherein the CNN is trained using a training set comprising only whole microscopy images with image level labels.
3. The neural network architecture of claim 1, wherein the CNN is configured to generate the class specific feature maps for microscopy images having any number of present cell classes irrespective of images with which the CNN is trained.
4. The neural network architecture of claim 1, wherein the pooling function maps instance space probabilities to bag space to define a bag level probability, the bag level probability is utilized to produce a bag level prediction and the bag level prediction is expressed as a mean of instance proportions among the cell classes.
5. The neural network architecture of claim 1, wherein the CNN is configured to generate Jacobian maps for specific cell class predictions by back-propagating through the CNN for a particular image.
6. The neural network architecture of claim 1, wherein the CNN is further configured to perform segmentation in the microscopy images.
7. The neural network architecture of claim 6, wherein the CNN performs classification and segmentation simultaneously.
8. A computer-implemented method for classifying microscopy images representing one or more cell classes using a neural network, the method executed on one or more processing units, the method comprising:
applying a convolutional neural network (CNN) to the microscopy images, the CNN
comprising:
an input layer for inputting the microscopy images;
one or more hidden layers of processing nodes, each processing node comprising a processor configured to apply an activation function and a weight to its inputs, a first of the hidden convolutional layers receiving an output of the input layer and each subsequent hidden layer receiving an output of a prior hidden layer, each hidden layer comprising a convolutional layer; and a hidden layer to generate one or more class specific feature maps for cellular features of one or more cell classes present in the microscopy images, the class specific feature maps represent probabilities of the cell classes for various locations in the microscopy image; and applying a global pooling layer to the feature maps for cellular features to apply a multiple instance learning pooling function to combine instance probabilities from each class specific feature map to produce a prediction for each cell class present in the microscopy images;

the CNN configured to generate class specific feature maps and the global pooling layer configured to produce class predictions for microscopy images of arbitrary size irrespective of images with which the CNN is trained.
9. The method of claim 8, wherein the CNN is trained using a training set comprising only whole microscopy images with image level labels.
10. The method of claim 8, wherein the CNN is configured to generate the class specific feature maps for microscopy images having any number of present cell classes irrespective of images with which the CNN is trained.
11. The method of claim 8, wherein the pooling function maps instance space probabilities to bag space to define a bag level probability, the bag level probability is utilized to produce a bag level prediction and the bag level prediction is expressed as a mean of instance proportions among the cell classes.
12. The method of claim 8, wherein the CNN is configured to generate Jacobian maps for specific cell class predictions by back-propagating through the CNN for a particular image.
13. The method of claim 8, wherein the CNN is further configured to perform segmentation in the microscopy images.
14. The method of claim 13, wherein the CNN performs classification and segmentation simultaneously.
CA2948499A 2016-11-16 2016-11-16 System and method for classifying and segmenting microscopy images with deep multiple instance learning Active CA2948499C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA2948499A CA2948499C (en) 2016-11-16 2016-11-16 System and method for classifying and segmenting microscopy images with deep multiple instance learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA2948499A CA2948499C (en) 2016-11-16 2016-11-16 System and method for classifying and segmenting microscopy images with deep multiple instance learning

Publications (2)

Publication Number Publication Date
CA2948499A1 CA2948499A1 (en) 2018-05-16
CA2948499C true CA2948499C (en) 2020-04-21

Family

ID=62143697

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2948499A Active CA2948499C (en) 2016-11-16 2016-11-16 System and method for classifying and segmenting microscopy images with deep multiple instance learning

Country Status (1)

Country Link
CA (1) CA2948499C (en)

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830837B (en) * 2018-05-25 2021-03-09 北京百度网讯科技有限公司 Method and device for detecting steel ladle corrosion defect
EP3629242B1 (en) * 2018-09-28 2024-07-03 Siemens Healthcare Diagnostics Inc. Method for configuring an image evaluating device and image evaluation method and image evaluating device
CN109858487B (en) * 2018-10-29 2023-01-17 温州大学 Weak supervision semantic segmentation method based on watershed algorithm and image category label
DE102018220711A1 (en) * 2018-11-30 2020-06-04 Robert Bosch Gmbh Measuring the vulnerability of AI modules to attempts at deception
CN109614921B (en) * 2018-12-07 2022-09-30 安徽大学 Cell segmentation method based on semi-supervised learning of confrontation generation network
CN111178121B (en) * 2018-12-25 2023-04-07 中国科学院合肥物质科学研究院 Pest image positioning and identifying method based on spatial feature and depth feature enhancement technology
CN109740560B (en) * 2019-01-11 2023-04-18 山东浪潮科学研究院有限公司 Automatic human body cell protein identification method and system based on convolutional neural network
US10699192B1 (en) * 2019-01-31 2020-06-30 StradVision, Inc. Method for optimizing hyperparameters of auto-labeling device which auto-labels training images for use in deep learning network to analyze images with high precision, and optimizing device using the same
CN109993212B (en) * 2019-03-06 2023-06-20 西安电子科技大学 Position privacy protection method in social network picture sharing and social network platform
CN109815945B (en) * 2019-04-01 2024-04-30 上海徒数科技有限公司 Respiratory tract examination result interpretation system and method based on image recognition
CN110222716B (en) * 2019-05-08 2023-07-25 天津大学 Image classification method based on full-resolution depth convolution neural network
CN110298831A (en) * 2019-06-25 2019-10-01 暨南大学 A kind of magic magiscan and its method based on piecemeal deep learning
CN110599457B (en) * 2019-08-14 2022-12-16 广东工业大学 Citrus huanglongbing classification method based on BD capsule network
CN110598711B (en) * 2019-08-31 2022-12-16 华南理工大学 Target segmentation method combined with classification task
CN110796046B (en) * 2019-10-17 2023-10-10 武汉科技大学 Intelligent steel slag detection method and system based on convolutional neural network
DE102019130930A1 (en) * 2019-11-15 2021-05-20 Carl Zeiss Microscopy Gmbh Microscope and method with performing a convolutional neural network
CN110969117A (en) * 2019-11-29 2020-04-07 北京市眼科研究所 Fundus image segmentation method based on Attention mechanism and full convolution neural network
CN111161278B (en) * 2019-12-12 2023-04-18 西安交通大学 Deep network aggregation-based fundus image focus segmentation method
CN111310191B (en) * 2020-02-12 2022-12-23 广州大学 Block chain intelligent contract vulnerability detection method based on deep learning
CN111539250B (en) * 2020-03-12 2024-02-27 上海交通大学 Image fog concentration estimation method, system and terminal based on neural network
CN111612722B (en) * 2020-05-26 2023-04-18 星际(重庆)智能装备技术研究院有限公司 Low-illumination image processing method based on simplified Unet full-convolution neural network
CN111524137B (en) * 2020-06-19 2024-04-05 平安科技(深圳)有限公司 Cell identification counting method and device based on image identification and computer equipment
CN111860459B (en) * 2020-08-05 2024-02-20 武汉理工大学 Gramineae plant leaf pore index measurement method based on microscopic image
CN114723652A (en) * 2021-01-04 2022-07-08 富泰华工业(深圳)有限公司 Cell density determination method, cell density determination device, electronic apparatus, and storage medium
CN112884001B (en) * 2021-01-15 2024-03-05 广东省特种设备检测研究院珠海检测院 Automatic grading method and system for graphitization of carbon steel
CN113096096B (en) * 2021-04-13 2023-04-18 中山市华南理工大学现代产业技术研究院 Microscopic image bone marrow cell counting method and system fusing morphological characteristics
CN113343757A (en) * 2021-04-23 2021-09-03 重庆七腾科技有限公司 Space-time anomaly detection method based on convolution sparse coding and optical flow
CN113095279B (en) * 2021-04-28 2023-10-24 华南农业大学 Intelligent visual recognition method, device and system for flower quantity of fruit tree and storage medium
CN114550223B (en) * 2022-04-25 2022-07-12 中国科学院自动化研究所 Person interaction detection method and device and electronic equipment
CN114826776B (en) * 2022-06-06 2023-05-02 中国科学院高能物理研究所 Weak supervision detection method and system for encrypting malicious traffic
CN115791640B (en) * 2023-02-06 2023-06-02 杭州华得森生物技术有限公司 Tumor cell detection equipment and method based on spectroscopic spectrum
CN116630313B (en) * 2023-07-21 2023-09-26 北京航空航天大学杭州创新研究院 Fluorescence imaging detection system and method thereof
CN116823823B (en) * 2023-08-29 2023-11-14 天津市肿瘤医院(天津医科大学肿瘤医院) Artificial intelligence cerebrospinal fluid cell automatic analysis method

Also Published As

Publication number Publication date
CA2948499A1 (en) 2018-05-16

Similar Documents

Publication Publication Date Title
CA2948499C (en) System and method for classifying and segmenting microscopy images with deep multiple instance learning
US10303979B2 (en) System and method for classifying and segmenting microscopy images with deep multiple instance learning
US20220237788A1 (en) Multiple instance learner for tissue image classification
US11901077B2 (en) Multiple instance learner for prognostic tissue pattern identification
Kraus et al. Classifying and segmenting microscopy images with deep multiple instance learning
Nawaz et al. A robust deep learning approach for tomato plant leaf disease localization and classification
Hollandi et al. A deep learning framework for nucleus segmentation using image style transfer
Ayachi et al. A convolutional neural network to perform object detection and identification in visual large-scale data
Balomenos et al. Image analysis driven single-cell analytics for systems microbiology
Sadanandan et al. Segmentation and track-analysis in time-lapse imaging of bacteria
Yu et al. A recognition method of soybean leaf diseases based on an improved deep learning model
Momeni et al. Deep recurrent attention models for histopathological image analysis
Zhu et al. A deep learning-based method for automatic assessment of stomatal index in wheat microscopic images of leaf epidermis
Wang et al. Fine-grained grape leaf diseases recognition method based on improved lightweight attention network
Gupta et al. Simsearch: A human-in-the-loop learning framework for fast detection of regions of interest in microscopy images
Noshad et al. A new hybrid framework based on deep neural networks and JAYA optimization algorithm for feature selection using SVM applied to classification of acute lymphoblastic Leukaemia
Khan et al. Volumetric segmentation of cell cycle markers in confocal images using machine learning and deep learning
Xiao et al. A computer vision and residual neural network (ResNet) combined method for automated and accurate yeast replicative aging analysis of high-throughput microfluidic single-cell images
KR101913952B1 (en) Automatic Recognition Method of iPSC Colony through V-CNN Approach
Wang et al. A systematic evaluation of computation methods for cell segmentation
Siedhoff A parameter-optimizing model-based approach to the analysis of low-SNR image sequences for biological virus detection
Alim et al. Integrating convolutional neural networks for microscopic image analysis in acute lymphoblastic leukemia classification: A deep learning approach for enhanced diagnostic precision
Wu et al. A deep semantic network-based image segmentation of soybean rust pathogens
Kinose et al. Tiller estimation method using deep neural networks
Soleimany et al. Image segmentation of liver stage malaria infection with spatial uncertainty sampling

Legal Events

Date Code Title Description
EEER Examination request

Effective date: 20191125