WO2023073405A1 - Digital microscopy tissue image analysis method and system for digital pathology - Google Patents

Digital microscopy tissue image analysis method and system for digital pathology

Info

Publication number
WO2023073405A1
Authority
WO
WIPO (PCT)
Prior art keywords
annotation
image
cells
contour
digital
Prior art date
Application number
PCT/IB2021/059956
Other languages
French (fr)
Inventor
Davide MARINO
Original Assignee
Cloud Pathology Group Srl
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloud Pathology Group Srl filed Critical Cloud Pathology Group Srl
Priority to PCT/IB2021/059956 priority Critical patent/WO2023073405A1/en
Publication of WO2023073405A1 publication Critical patent/WO2023073405A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/69 - Microscopic objects, e.g. biological cells or cellular parts
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/12 - Edge-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/469 - Contour-based spatial representations, e.g. vector-coding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10056 - Microscopic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20021 - Dividing image into blocks, subimages or windows
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30024 - Cell structures in vitro; Tissue sections in vitro

Definitions

  • the step of generating at least one between an annotation arc and a compound annotation arc involves applying a nearest neighbour criterion to identify annotation points that are adjacent to each other.
  • each tile Wr,c is associated with position information identifying the portion of the WSI W to which it corresponds - for example, the position information comprises a two-dimensional coordinate indicating the row r and column c associated with the tile Wr,c in the matrix arrangement of the plurality of tiles Wr,c.
  • the artificial intelligence algorithm AI is a convolutional neural network or CNN.
  • the convolutional neural network is selected from:

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method (1000; 2100; 2200) of analysing a digital microscopy image, wherein said digital microscopy image depicts a biological tissue. The method comprises the steps of: by means of a digital microscope: - acquiring (1001) a digital microscopy image of a biological tissue to be analysed; by means of a computer: - dividing (1003) the digital microscopy image into a plurality of image portions; - by means of an artificial intelligence algorithm: - in each image portion, identifying (1007) at least one group of cells of the same type, - generating (1009) an annotation superimposable on the digital microscopy image, where said annotation highlights said at least one group of cells of the same type when superimposed on the digital microscopy image. Advantageously, the method further comprises the step of: - in each image portion, highlighting (1005) a contour or contour portion of at least one group of cells by means of a contour recognition operator. In addition, the step of generating (1009) an annotation superimposable on the digital microscopy image comprises: - identifying (10091) the type of cells circumscribed by said contour of at least one group of cells, and - using (10093) said contour to generate the annotation superimposable on the digital microscopy image.

Description

DIGITAL MICROSCOPY TISSUE IMAGE ANALYSIS METHOD AND SYSTEM FOR DIGITAL PATHOLOGY
DESCRIPTION
TECHNICAL FIELD
The present invention relates to the computer systems sector. In detail, the present invention relates to a digital pathology method and system. In more detail, the present invention relates to a method and a related system for identifying pathologies in tissues by analysing digital images thereof.
BACKGROUND
In the field of digital pathology, it is known to carry out the analysis of digital microscopy images, called Whole Slide Images - WSI - i.e. digital images of slides, obtained by means of a so-called Virtual Microscopy system, in order to automatically detect the presence of cells affected by pathologies - such as tumours - lesions and the like, particularly in the case of histological analyses.
In order to automate and assist human personnel in the histological analysis of WSIs, artificial intelligence-based systems configured to classify and/or identify tissue samples, or portions thereof, subject to a pathology portrayed in WSIs have been proposed in the prior art.
For example, US 2020/0364867 describes the use of a convolutional neural network, or CNN, for identifying tumours in a histological image. The CNN includes a channel for each class of tissue to be identified, where there is a class for each type of tumour or healthy tissue. The CNN is configured to perform a multistage convolution on a portion of a histological image followed by a multistage transposed convolution to generate a layer corresponding to the size of the portion of the histological image analysed so as to obtain a corresponding output image portion with each pixel associated with one of the available classes. Finally, the output portions of the image are combined to form a probability map of the presence of tumour cells superimposable on the histological image.
The classification procedure described in US 2020/0364867 is particularly complex and costly to implement in terms of hardware and time resources. In detail, this procedure requires the tissue samples portrayed in the histological image to be analysed, as well as in the images used for CNN training, to be stained with biomarkers and/or other contrast media, thus increasing the complexity and cost in terms of time and resources required to perform the procedure and carry out the CNN training. In addition, the procedure described in US 2020/0364867 requires the use of two convolutional neural networks in order to correctly classify essentially a single type of cell of interest. As a result, the execution of this procedure is particularly burdensome in terms of hardware resources and the classification capacity of the WSI is rather limited compared to the variety of cell types that can be identified in a WSI.
OBJECTS AND SUMMARY OF THE INVENTION
An object of the present invention is to overcome the drawbacks of the prior art.
In particular, it is an object of the present invention to provide a method and a related system configured to perform pathology identification in tissues by analysing digital images thereof in an efficient and rapid manner using contained hardware resources.
A further object of the present invention is to provide a method and a related system configured to perform pathology identification in tissues by analysing digital images thereof irrespective of whether or not staining by biomarkers and/or other contrast media is used or the type of biomarkers and/or other contrast media used.
These and other objects of the present invention are achieved by a system incorporating the features of the annexed claims, which form an integral part of the present description.
According to a first aspect, the present invention relates to a method of analysing a digital microscopy image, or WSI, depicting a biological tissue. The method comprises the following steps.
Using a digital microscope, performing the step of:
- acquiring a digital microscopy image of a biological tissue to be analysed.
In addition, using a computer, performing the steps of:
- dividing the digital microscopy image into a plurality of image portions,
- by means of an artificial intelligence algorithm, run by the computer:
- in each image portion, identifying at least one group of cells of the same type,
- generating an annotation superimposable on the digital microscopy image, where said annotation highlights said at least one group of cells of the same type when superimposed on the digital microscopy image.
Advantageously, the method further comprises the step of:
- in each image portion, highlighting a contour or contour portion of at least one group of cells by means of a contour recognition operator.
In addition, the step of generating an annotation superimposable on the digital microscopy image comprises:
- identifying the type of cells circumscribed by said contour of the at least one group of cells, and
- using said contour to generate the annotation superimposable on the digital microscopy image.
With this solution, it is possible to efficiently and reliably obtain a precise identification of one or more groups of cells belonging to one or more corresponding types of interest. In particular, annotations identifying groups of cells are processed reliably and in a timely manner even on computers with limited hardware resources such as a workstation used in hospital imaging.
In one embodiment, the step of highlighting a contour or portion of a contour of at least one group of cells by means of a contour recognition operator comprises, for each image portion:
- normalizing the red, green and blue values of each pixel in the image portion,
- desaturating the normalized image portion, and
- applying a Sobel operator to the desaturated image portion.
The sequence of operations outlined above makes it possible to effectively highlight the contours of any group of cells present in the portions of the image analysed by the artificial intelligence algorithm. In particular, these operations are carried out quickly by computers with limited hardware resources, particularly in terms of volume of volatile memory, or RAM, and graphics processing capacity, i.e. GPU computing power.
In one embodiment, the step of generating an annotation superimposable on the digital microscopy image comprises, for each image portion:
- generating an annotation point for each pixel comprised in the contour of the at least one group of cells of the same type, wherein each point comprises position information in a two-dimensional space corresponding to a position of the associated pixel in the image portion and an indication of the cell type of the group of cells, and
- generating the annotation superimposable on the digital microscopy image as a graph of annotation points comprising the indication of the same cell type as the group of cells.
Preferably, generating the annotation superimposable on the digital microscopy image as a graph of annotation points comprises:
- generating a partial annotation for each image portion, said partial annotation comprising at least one annotation arc, and wherein said at least one arc is formed by a sequence of points adjacent to one another and comprised in a portion of said two-dimensional space corresponding to said image portion.
Even more preferably, the step of generating an annotation superimposable on the digital microscopy image further comprises, for each annotation arc that does not define a closed line:
- generating a compound annotation arc by joining together annotation arcs of image portions adjacent to each other and comprising the indication of the same cell type of the group of cells, wherein an extreme point of each arc in a first image portion is adjacent to an extreme point of each arc in a second image portion adjacent to the first image portion.
By generating points and arcs, it is possible to reliably create an annotation that precisely circumscribes each cell group identified in the analysed digital microscopy image.
In one embodiment, the step of generating at least one between an annotation arc and a compound annotation arc involves applying a nearest neighbour criterion to identify annotation points that are adjacent to each other.
The use of this type of criteria makes it possible to obtain precise and reliable annotations with a particularly low use of hardware resources, which allows this part of the method to be carried out on consumer-grade computers.
In one embodiment, the artificial intelligence algorithm is a convolutional neural network selected from:
- ResNet,
- WideResNet,
- DenseNet,
- GoogleNet,
- ShuffleNet,
- MobileNet, and
- SqueezeNet.
Preferably, the convolutional neural network is selected from the subgroup comprising:
- ResNet,
- ShuffleNet, and
- MobileNet.
Studies carried out by the Applicant have shown that the above convolutional network types give the best results in the classification of various cell types portrayed in a digital microscopy image.
In one embodiment, the artificial intelligence algorithm comprises a convolutional neural network of the ResNet type, in which the last layer is modified to comprise:
- a two-dimensional linear convolution function, where the first dimension corresponds to the partial annotations of the image portions, and the second dimension corresponds to the image portions processed by means of said contour recognition operator;
- a linear activation function, and
- an average pooling function on said two dimensions.
Modification of the last layer or level of the convolutional neural network described above makes it possible to obtain precision, accuracy, F1 and recall parameter values greater than 90%, irrespective of the types of cells to be classified and/or the number of different types of cells to be researched, using limited hardware resources.
In one embodiment, the method also comprises training the artificial intelligence algorithm by means of the steps of:
- providing as input to the artificial intelligence algorithm a plurality of image portions of at least one digital microscopy image comprising at least one group of cells belonging to a cell type to be identified,
- providing as input to the artificial intelligence algorithm a respective partial annotation associated with each of said image portions, wherein each partial annotation is generated from an annotation performed by a human operator on the digital microscopy image,
- iteratively training the artificial intelligence algorithm to recognize the contour of said at least one group of cells by processing the plurality of image portions and corresponding partial annotations received as input.
In particular, training the artificial intelligence algorithm involves performing a dropout operation, in which a randomly selected portion of nodes in hidden layers of the convolutional neural network is ignored during a predetermined number of training iterations.
Thanks to these steps, the trained convolutional neural network is particularly effective in recognizing the desired cell types without introducing overfitting.
A different aspect of the present invention concerns a system for analysing a digital microscopy image. This system comprises:
- a digital microscope, and
- a computer connected to said digital microscope.
Advantageously, the computer executes an artificial intelligence algorithm and is configured to implement the method according to any of the embodiments described above. Further features and advantages of the present invention will be more evident from the description of the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention is described hereinbelow with reference to certain examples provided by way of non-limiting example and illustrated in the accompanying drawings. These drawings illustrate different aspects and embodiments of the present invention and reference numerals illustrating structures, components, materials and/or similar elements in different drawings are indicated by similar reference numerals, where appropriate.
Figure 1 is a block diagram of a system in which the method according to one embodiment of the present invention is implemented;
Figure 2 is a flow chart of a procedure for identifying groups of cells of the same type according to an embodiment of the present invention;
Figure 3 is a Whole Slide Image subdivided into a plurality of tiles according to an embodiment of the present invention;
Figure 4 is a pre-processed tile according to an embodiment of the present invention;
Figure 5 is the tile in Figure 4 to which an annotation generated by the procedure in Figure 2 has been superimposed;
Figure 6 qualitatively illustrates the connection between adjacent tile annotations according to an embodiment of the present invention;
Figure 7 is a Whole Slide Image to which an annotation according to the embodiment of the present invention has been superimposed;
Figure 8 is a flow chart of a procedure for preparing annotated tiles for training an artificial intelligence algorithm according to the present invention;
Figure 9 is a Whole Slide Image annotated by a human operator;
Figure 10 schematically illustrates a pyramid representation based on a Whole Slide Image and a relationship between tiles of different images of the pyramid, and
Figure 11 is a flow chart of a procedure for training an artificial intelligence to recognize groups of cell types according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
While the invention is susceptible to various modifications and alternative constructions, certain preferred embodiments are shown in the drawings and are described hereinbelow in detail. It must in any case be understood that there is no intention to limit the invention to the specific embodiment illustrated, but, on the contrary, the invention intends covering all the modifications, alternative and equivalent constructions that fall within the scope of the invention as defined in the claims.
The use of "for example", "etc.", "or" indicates non-exclusive alternatives without limitation, unless otherwise indicated. The use of "includes" means "includes, but not limited to" unless otherwise indicated.
With reference to Figure 1, a system 1 is illustrated which is configured to implement a method of histological analysis based on digital images according to an embodiment of the present invention.
In detail, the system 1 comprises a processing device, for example a workstation 10, and a digital image acquisition device, for example a digital microscope 12 configured to acquire Whole Slide Images - or WSIs - and transmit them to the workstation 10.
Preferably, the system 1 also comprises a remote processing device, for example a server 14, configured to exchange data with the workstation 10.
In the example considered, the workstation 10 comprises a processor module 101 - e.g. comprising one or more processors, volatile and non-volatile memory units, graphics accelerators, etc., a data storage module 102, a communication module 103 - e.g. a modem - configured to manage data exchange with the digital microscope 12 and/or the server 14, and an interface module 104 comprising input and output elements to allow interaction by a human operator.
Similarly, the server 14 comprises one or more processor modules 141, one or more data storage modules 142 configured to store large amounts of data - for example organized in databases - and at least one communication module 143. Preferably, the server 14 is implemented in a distributed manner and/or by means of one or more virtual machines.
The described system 1 is configured to perform a histological analysis procedure 1000 according to an embodiment of the present invention, of which Figure 2 is a flowchart. The procedure 1000 generates an annotation NWSI that allows groups of cells of one or more desired types to be identified and determines a contour that circumscribes one or more groups of cells belonging to one or more desired types. In the preferred embodiment, the annotation NWSI generated by the procedure 1000 is a graphical element - for example, described by a Scalable Vector Graphics or SVG file - superimposable on the WSI W as a mask, which comprises a set of one or more continuous lines, each outlining a contour of a respective group of cells GC identified in the WSI W and identifying the type of cells circumscribed by that line - for example, based on the colour of the line.
The procedure 1000 involves acquiring a WSI W through the digital microscope 12 and transmitting the WSI W to the workstation 10 (block 1001). In particular, the WSI W portrays a tissue sample for analysis.
The WSI W is divided into a plurality of WSI portions or tiles Wr,c (block 1003). For example, each tile Wr,c has the same square or rectangular shape of the same size - for example, each tile Wr,c is a square of 1024 x 1024 pixels. In detail, the WSI W is divided into a plurality of tiles Wr,c in a matrix arrangement, i.e. aligned in rows r (with 0 < r < R, where R is a positive integer) and columns c (with 0 < c < C, where C is a positive integer) - as can be seen in Figure 3. Advantageously, each tile Wr,c is associated with position information identifying the portion of the WSI W to which it corresponds - for example, the position information comprises a two-dimensional coordinate indicating the row r and column c associated with the tile Wr,c in the matrix arrangement of the plurality of tiles Wr,c.
In the preferred embodiment, each tile Wr,c is represented by a tensor and processed in this form. Preferably, each tile Wr,c is described by a tensor with dimensions L x H x RGB where:
L indicates the positions of the pixels along the length direction of the tile Wr,c (corresponding to the direction of the rows r of the WSI W);
H indicates the positions of the pixels along the height direction of the tile Wr,c (corresponding to the direction of the columns c of the WSI W), and
RGB indicates the red (R), green (G) and blue (B) values of each pixel of the tile Wr,c.
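By way of illustration, the tiling of block 1003 can be sketched as below. This is a minimal sketch assuming the WSI is already in memory as a NumPy array; the function and variable names are illustrative and not taken from the source.

```python
import numpy as np

def split_into_tiles(wsi: np.ndarray, tile_size: int = 1024) -> dict:
    """Divide a WSI (height x width x RGB array) into square tiles arranged
    in a matrix, keyed by the (row r, column c) position information."""
    tiles = {}
    n_rows = wsi.shape[0] // tile_size
    n_cols = wsi.shape[1] // tile_size
    for r in range(n_rows):
        for c in range(n_cols):
            tiles[(r, c)] = wsi[r * tile_size:(r + 1) * tile_size,
                                c * tile_size:(c + 1) * tile_size]
    return tiles  # each value is an L x H x RGB tensor as described above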
Subsequently, each tile Wr,c is subjected to a pre-processing procedure (block 1005), configured to highlight edges of the tissue, or portions of the tissue, portrayed in the tile Wr,c under consideration. In a preferred embodiment, the pre-processing procedure involves performing a normalization of the R, G and B values associated with each pixel constituting the tile Wr,c (sub-block 10051). This makes it possible to use WSIs generated by different electron microscopes and/or where tissues are highlighted using different types of dyes. After the normalization of the R, G and B values, the tile Wr,c undergoes desaturation - i.e. a conversion to black and white - (sub-block 10053). Finally, the tile Wr,c is processed by a Sobel operator, for example using a Sobel-Gauss operator (sub-block 10055).
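One possible reading of sub-blocks 10051-10055 in code is sketched below. The exact normalization and the Sobel-Gauss parameters are not specified in the source, so the choices here (per-channel min-max normalization, Gaussian blur followed by a Sobel operator) are assumptions.

```python
import cv2
import numpy as np

def preprocess_tile(tile: np.ndarray) -> np.ndarray:
    # Sub-block 10051: normalize R, G and B values per channel (assumed min-max).
    tile = tile.astype(np.float32)
    for ch in range(3):
        lo, hi = tile[..., ch].min(), tile[..., ch].max()
        tile[..., ch] = (tile[..., ch] - lo) / (hi - lo + 1e-6)
    # Sub-block 10053: desaturate, i.e. convert to black and white.
    gray = cv2.cvtColor((tile * 255).astype(np.uint8), cv2.COLOR_RGB2GRAY)
    # Sub-block 10055: Sobel operator, here preceded by a Gaussian blur as
    # one interpretation of the "Sobel-Gauss" operator named in the text.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    gx = cv2.Sobel(blurred, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(blurred, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.magnitude(gx, gy)  # edge-magnitude image of the tile
```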
The resulting pre-processed tile Wr,c' - an example of which is illustrated in Figure 4 - is provided as input to an artificial intelligence algorithm AI run by the workstation 10, which is configured to identify the contours of the groups of cells GC of the tissue portrayed in the WSI W and identify the type of the groups of cells GC circumscribed by such contours.
In an embodiment of the present invention, the system 1 is configured to analyse tissues of the digestive system, in particular the colon. In such a case, the artificial intelligence algorithm AI is configured to identify at least one of, but preferably a plurality or all of, the groups GC comprised in the following non-limiting list:
- normal epithelium,
- hyperplastic epithelium,
- adenocarcinoma,
- tunica muscularis,
- necrosis,
- lymphocyte aggregates,
- mucinous component,
- adipose tissue,
- ganglia,
- granulation tissue,
- mucosa,
- desmoplastic reaction,
- low grade dysplasia,
- high grade dysplasia,
- negative resection margin,
- low dysplasia resection margin,
- high dysplasia resection margin,
- erythrocytes, and
- plasma cells.
In the embodiments of the present invention, the artificial intelligence algorithm AI is a convolutional neural network or CNN. Preferably, the convolutional neural network is selected from:
- ResNet,
- WideResNet,
- DenseNet,
- GoogleNet,
- ShuffleNet,
- MobileNet, and
- SqueezeNet.
More preferably, the convolutional neural network is selected from the subgroup comprising:
- ResNet;
- ShuffleNet, and
- MobileNet.
In the preferred embodiment, the convolutional neural network used for the artificial intelligence algorithm AI is of the ResNet type, modified as described below.
The artificial intelligence algorithm AI initially searches the tissue portrayed in each pre-processed tile Wr,c' for one or more contours of groups of cells GC of one or more different types (decision block 1007).
If at least one contour of a corresponding group of cells GC is detected in the generic pre-processed tile Wr,c' (output branch Y of block 1007), the artificial intelligence algorithm AI is configured to generate an annotation portion Nr,c - as shown in Figure 5 - that identifies the cell group GC contours detected in the pre-processed tile Wr,c' (block 1009).
In detail, for each pixel belonging to a contour detected in the pre-processed tile Wr,c', a point p is defined (sub-block 10091). Preferably, each point p is defined by the following information:
a. a pixel coordinate of the pre-processed tile Wr,c' corresponding to the point p,
b. an identifier of the cell type of the group of cells GC delimited by the contour including the point p, and
c. pointers to an earlier point pp-1 and a later point pp+1 of the point p considered - initially null and determined as described below.
Once the points p are defined, one or more arcs a are defined, i.e. sets of points p forming the same contour or portion of a contour (sub-block 10093). Preferably, each arc a includes a list with the following information:
i. a starting point p0,
ii. a final point pE,
iii. each intermediate point pi (with 0 < i < E, where E is a positive integer) comprised between the starting point p0 and the final point pE,
iv. an identifier of the annotation Nr,c to which it belongs, and
v. optionally, an identifier of the corresponding tile Wr,c.
In the preferred embodiment, each arc a is defined by randomly selecting a point p from a set of points associated with the same type of cells and identifying the earlier point pp-1 and the later point pp+1 of that point p, until one or more arcs a comprising all the points p comprised in the set of points are defined. In particular, each point p is associated with a single arc a. Preferably, the earlier point pp-1 and the later point pp+1 are associated with pixels of the WSI W that are adjacent to the pixel of the WSI W associated with the generic point p. Similarly, the starting point p0 and the final point pE correspond to pixels of the WSI W that are adjacent to a single pixel - i.e., the pixels associated with the intermediate points p1 and pE-1 of the arc a, respectively.
In summary, the artificial intelligence algorithm AI is configured to generate the arcs a by joining the points p, associated with the pixels of the contours identified through the processing of the pre-processed tiles Wr,c', by means of a proximity search criterion, preferably a nearest neighbour search criterion. In other words, the partial annotation Nr,c is a graph of the points p joined in arcs a.
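As a sketch of sub-blocks 10091 and 10093, the point and arc structures and the neighbour-linking step could be written as follows. This reduces the nearest neighbour search to the 8-connected pixel neighbourhood and grows each arc in one direction only for brevity, which is one possible interpretation; all names are illustrative.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Point:
    xy: Tuple[int, int]             # a. pixel coordinate in the tile
    cell_type: str                  # b. identifier of the cell type
    prev: Optional["Point"] = None  # c. pointer to the earlier point, initially null
    next: Optional["Point"] = None  # c. pointer to the later point, initially null

def build_arc(unvisited: dict, start_xy: Tuple[int, int]) -> list:
    """Grow one arc from a starting point by repeatedly linking the nearest
    unvisited contour pixel among the 8 surrounding pixels."""
    arc = [unvisited.pop(start_xy)]
    while True:
        x, y = arc[-1].xy
        candidates = [(x + dx, y + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0) and (x + dx, y + dy) in unvisited]
        if not candidates:
            return arc              # arc p0 .. pE: no adjacent pixel remains
        nxt = unvisited.pop(candidates[0])
        arc[-1].next, nxt.prev = nxt, arc[-1]
        arc.append(nxt)
```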
Conversely, if the artificial intelligence algorithm does not detect at least one group of cells GC (output branch N of block 1007), or once the partial annotation Nr,c for the pre-processed tile Wr,c' has been generated, it is verified whether there is a new tile Wr+1,c+1 to be analysed (decision block 1011).
In the affirmative case (output branch Y of block 1011), pre-processing of the new tile Wr+1,c+1 and analysis by the artificial intelligence AI is performed - in other words, the method is repeated starting from the pre-processing step described in relation to block 1005.
Conversely, if all the tiles Wr,c comprising the WSI W have been analysed (output branch N of block 1011), the arcs a belonging to the annotation portion Nr,c of each tile Wr,c are combined to form the annotation NWSI associated with the entire WSI W (block 1013). Again, a proximity search criterion - preferably, the same nearest neighbour search criterion as used in the previous step - is applied to generate the annotation NWSI of the WSI W.
In particular, for each arc a that does not represent a closed line (i.e., p0 ≠ pE), included in the partial annotation Nr,c of a generic tile Wr,c, a corresponding point p' adjacent to the initial point p0 and/or a corresponding point p'' adjacent to the final point pE, belonging to a tile Wr+1,c, Wr-1,c, Wr,c+1 or Wr,c-1 adjacent to the tile Wr,c, is searched for. Advantageously, the adjacent tile Wr+1,c, Wr-1,c, Wr,c+1 or Wr,c-1 whose pixels are closest to the starting point p0 and/or final point pE of the arc a considered is selected. In other words, the pixel of the WSI W associated with the point p' is adjacent to the pixel of the WSI W associated with the starting point p0, and the pixel of the WSI W associated with the point p'' is adjacent to the pixel of the WSI W associated with the final point pE. In this way, all the arcs a of different tiles Wr,c are connected together to form a single overall arc that defines the contour of a group of cells GC that extends between two or more of the tiles Wr,c into which the WSI W is divided - as shown qualitatively in Figure 6, in which the connection between arc a1 and arc a2, and between arc a3 and arc a4, of the annotations Nr,c and Nr-1,c of the tile Wr,c and the adjacent tile Wr-1,c, respectively, is illustrated.
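One way to express the cross-tile join of block 1013 is sketched below; it assumes each Point additionally carries the (r, c) identifier of its tile (item v. above) in a `tile_rc` attribute, which is a naming assumption.

```python
def global_xy(p, tile_size: int = 1024) -> tuple:
    """WSI-level pixel coordinate of a point from its tile (r, c) and local (x, y)."""
    (r, c), (x, y) = p.tile_rc, p.xy
    return (r * tile_size + x, c * tile_size + y)

def adjacent(p, q) -> bool:
    """True when the WSI pixels of two points touch (8-connectivity)."""
    (x1, y1), (x2, y2) = global_xy(p), global_xy(q)
    return max(abs(x1 - x2), abs(y1 - y2)) == 1

def join_arcs(open_arcs: list) -> list:
    """Chain open arcs (p0 != pE) of the same cell type whose extreme points
    are adjacent across neighbouring tiles, forming compound arcs."""
    compound = []
    while open_arcs:
        chain = open_arcs.pop(0)
        extended = True
        while extended:
            extended = False
            for other in list(open_arcs):
                if (chain[-1].cell_type == other[0].cell_type
                        and adjacent(chain[-1], other[0])):
                    chain.extend(other)
                    open_arcs.remove(other)
                    extended = True
        compound.append(chain)
    return compound
```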
The annotation NWSI, thus defined, is used to create a graphic layer comprising a line for each arc a (block 1015). The annotation layer NWSI is displayed, superimposed on the WSI W, through the interface 104 of the workstation 10 so that it can be viewed by an operator (block 1017) - as shown schematically in Figure 7.
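Since the annotation NWSI may be described by an SVG file, the graphic layer of block 1015 might be emitted as in the sketch below, reusing the global_xy helper from the previous sketch; the colour-per-type mapping is an assumption suggested by the text ("based on the colour of the line").

```python
def arcs_to_svg(arcs: list, width: int, height: int, colours: dict) -> str:
    """Render each arc as an SVG polyline coloured by cell type, giving a
    layer that can be superimposed on the WSI W as a mask."""
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{width}" height="{height}">']
    for arc in arcs:
        pts = " ".join(f"{x},{y}" for x, y in (global_xy(p) for p in arc))
        stroke = colours.get(arc[0].cell_type, "red")
        parts.append(f'<polyline points="{pts}" fill="none" '
                     f'stroke="{stroke}" stroke-width="2"/>')
    parts.append("</svg>")
    return "\n".join(parts)
```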
Optionally, one or more of the WSI W, the tiles Wr,c, the pre-processed tiles Wr,c', the annotations Nr,c and/or the global annotation NWSI are transmitted to the server 14 to be stored and/or used for an optimization of the artificial intelligence algorithm (block 1019). In an embodiment of the present invention, the artificial intelligence algorithm AI is trained to identify a plurality of different groups of cells GC as described below. Preferably, the procedures described below are performed by the server 14 and the artificial intelligence AI, once trained, is distributed to the workstation 10.
The training of the artificial intelligence algorithm AI involves an initial procedure 2100 of creating a training dataset, of which Figure 8 is a flow chart.
Initially, a sample set of WSIs WT is submitted to at least one human operator - not illustrated, e.g. an anatomical pathologist - via the interface 104 of the workstation 10 (block 2101). Through the interface 104 of the workstation 10, the human operator annotates each of the WSIs WT in the sample set, creating an operator annotation NT that identifies the groups of cells of one or more of the searched types portrayed in the WSI WT (block 2103). In the preferred embodiment, through the interface 104 of the workstation 10, the human operator creates the annotation NT by graphically bounding - for example, drawing a line around - the contours of each group of cells GC of interest that he identifies in the WSI WT, as illustrated in Figure 9. For example, the workstation 10 is configured to run a software application such as OpenSeadragon comprising the Annotorious functionality.
Each contour defined by the human operator is converted into a corresponding annotation NT (block 2105) and stored in the storage module 142 of the server 14 (block 2107), associated with an identifier code of the corresponding WSI WT. Preferably, the generic annotation NT comprises points p and arcs a in a similar manner to that described above in relation to the block 1009 and the block 1011 of the method 1000. In particular, each point p of the generic annotation NT is defined by a pair of coordinates of a two-dimensional space (x, y) corresponding to a pixel of the WSI WT, identifiable by a row value of pixels x and a column value of pixels y of the WSI WT - having a resolution of x by y pixels.
Subsequently, each WSI WT is processed to obtain at least one, preferably two, additional WSIs WT' and WT'', in which the portrayed tissue sample has a different, preferably lower, magnification factor than the magnification factor of the original WSI WT (block 2109) - as qualitatively illustrated in Figure 10. Preferably, the additional WSIs WT' and WT'' are generated by means of a pyramid image processing procedure - also known in the art by the term pyramid representation and not described in detail here for the sake of brevity. For example, starting from an original WSI WT depicting a tissue sample at 40 times magnification, a first additional WSI WT' at 20 times magnification and a second additional WSI WT'' at 2.5 times magnification are generated. In other words, the additional WSIs WT' and WT'' are scaled-down versions of the original WSI WT obtained by means of one or more smoothing and downsampling operations.
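A sketch of the pyramid of block 2109 under the 40x/20x/2.5x example, using OpenCV's standard smoothing-and-downsampling primitive, is given below; the choice of pyrDown is an assumption, as the source does not name a specific implementation.

```python
import cv2
import numpy as np

def build_pyramid(wsi_40x: np.ndarray):
    """Derive the additional WSIs WT' (20x) and WT'' (2.5x) from the original
    40x WSI WT by repeated smoothing and downsampling."""
    wt_prime = cv2.pyrDown(wsi_40x)        # one halving: 40x -> 20x
    wt_second = wt_prime
    for _ in range(3):                     # three more halvings: 20x -> 2.5x
        wt_second = cv2.pyrDown(wt_second)
    return wt_prime, wt_second
```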
Each WSI WT, WT' and WT'' is then subdivided into tiles WTr,c, WT'r,c and WT''r,c (block 2111) in a manner similar to that described above in relation to the block 1003 of the method 1000. In particular, the tiles WTr,c, WT'r,c and WT''r,c are all of the same size of b x b pixels (e.g. 1024 x 1024 pixels), so the corresponding WSI WT is divided into a matrix of tiles WTr,c comprising r/b columns and c/b rows.
A corresponding partial annotation NTr,c, comprised in the annotation NT associated with the WSI WT, is defined for each tile WTr,c (block 2113). In other words, each partial annotation NTr,c indicates which points p and which arcs a, or portions of arcs a, of the annotation NT are included in the corresponding tile WTr,c.
Similarly, additional partial annotations NT'r,c and NT''r,c are defined for each of the additional tiles WT'r,c and WT''r,c (block 2115). The additional partial annotations NT'r,c and NT''r,c are based on the correspondences between the pixel coordinates of the original WSI WT and the pixel coordinates of the additional WSIs WT' and WT'', the coordinates of the points p and arcs a of the annotation NT, and the scale factor between the additional WSIs WT' and WT'' and the original WSI WT.
Finally, the additional WSIs WT' and WT'', the tiles WTr,c, WT'r,c and WT''r,c, the annotation NT and the partial annotations NTr,c, NT'r,c and NT''r,c are stored in the storage module 142 of the server 14 (block 2117).
The training of the artificial intelligence algorithm comprises an actual training procedure 2200, of which Figure 11 is a flow chart.
Initially, the procedure 2200 involves selecting the cell types of interest to be identified - for example, all cell types comprised in the list above in the description of the procedure 1000 (block 2201).
Accordingly, the annotations NT associated with cell groups GC of the selected type are identified from among the annotations NT stored in the storage module 142 of the server 14 (block 2203).
Next, from among the plurality of tiles WTr,c, WT'r,c and WT''r,c stored in the storage module 142 of the server 14, the tiles WTr,c, WT'r,c and WT''r,c that depict at least a portion of a group GC of the selected cells are selected, based on the identified annotations NT (block 2205).
The selected tiles WTr,c, WT'r,c and WT''r,c are subjected to pre-processing (block 2207), configured to highlight contours of groups of cells, or portions of groups of cells, of the tissue portrayed in the tile WTr,c, WT'r,c and WT''r,c considered, in a similar manner to that described in relation to the block 1005 of the procedure 1000 and not repeated herein for brevity.
A portion of the tiles WTr,c, WT'r,c and WT''r,c, associated with the corresponding partial annotations NTr,c, NT'r,c and NT''r,c, is then provided to the artificial intelligence algorithm AI (block 2209), which uses this information for training (block 2211). For example, the part of the tiles WTr,c, WT'r,c and WT''r,c used for training is between 70% and 90%, preferably around 80%, of the total number of tiles.
As mentioned above, the artificial intelligence algorithm AI comprises a convolutional neural network, preferably a ResNet. In the preferred embodiment, the ResNet network used comprises a modified output layer, which comprises:
- a two-dimensional linear convolution function, where the first dimension is a partial annotation NTr,c, NT'r,c and NT''r,c, and the second dimension is the tensor which represents the pre-processed tiles WTr,c, WT'r,c and WT''r,c,
- a linear activation function - preferably, an increasing monotone line with origin in 0 and defined as positive only, and
- an average pooling function on the two dimensions, i.e. a random and averaged subsampling of all the features that the convolutional neural network has learned.
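The description of the modified output layer leaves the exact tensor shapes open, so the PyTorch sketch below is only one possible reading: the standard ResNet classification head is replaced by a 2D linear convolution, a positive linear activation (ReLU matches "an increasing monotone line with origin in 0 and defined as positive only"), and average pooling over the two spatial dimensions. The layer sizes and the resnet18 backbone are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ModifiedResNet(nn.Module):
    """ResNet with the last layer replaced, as one reading of the patent."""
    def __init__(self, num_cell_types: int):
        super().__init__()
        backbone = resnet18(weights=None)
        # Keep the convolutional body, drop the original pooling and fc head.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.head = nn.Sequential(
            nn.Conv2d(512, num_cell_types, kernel_size=1),  # 2D linear convolution
            nn.ReLU(),                # increasing monotone, positive-only activation
            nn.AdaptiveAvgPool2d(1),  # average pooling on the two dimensions
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x)).flatten(1)  # one score per cell type
```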
In the preferred embodiment, artificial intelligence training involves performing a so-called dropout phase, in which a randomly selected portion of the nodes in the hidden layers of the convolutional neural network is ignored during each training epoch - i.e., a set of training iterations. In particular, the percentage of ignored nodes is a function of the dimensions - i.e. resolution - of the tiles WTr,c, WT'r,c and WT''r,c. For example, in the case of tiles WTr,c, WT'r,c and WT''r,c of 1024 x 1024 pixels, the dropout percentage is comprised between 40% and 60%, preferably 50%; on the other hand, if the WSI is divided into tiles WTr,c, WT'r,c and WT''r,c with a lower resolution, such as 128 x 128 pixels, the dropout percentage is between 20% and 30%, preferably 25%.
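The resolution-dependent dropout percentage can be captured by a small helper such as the one below; the source only quotes the two preferred values, so the two-case step function is an assumption.

```python
def dropout_rate(tile_size: int) -> float:
    """Dropout fraction as a function of tile resolution: 50% for 1024-pixel
    tiles, 25% for lower-resolution tiles such as 128 pixels."""
    return 0.5 if tile_size >= 1024 else 0.25
```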
Finally, the trained artificial intelligence algorithm AI is subjected to a validation test (decision block 2213). In particular, the remainder of the tiles WTr,c, WT'r,c and WT''r,c and the corresponding partial annotations NTr,c, NT'r,c and NT''r,c not used for training are provided as input to the trained algorithm AI, and the resulting partial annotations PNTr,c, PNT'r,c and PNT''r,c are compared with the corresponding partial annotations NTr,c, NT'r,c and NT''r,c associated with the same tile WTr,c, WT'r,c and WT''r,c.
In case the performance - for example, at least one from among the precision, accuracy, F1 and/or recall values - of the artificial intelligence algorithm AI is lower than one or more minimum threshold values (output branch N of block 2213), it is contemplated to perform a so-called fine-tuning step (block 2215) in which one or more parameters of the artificial intelligence algorithm AI are adjusted to achieve an improvement in the performance of the artificial intelligence algorithm AI.
Once fine-tuning has been carried out, the artificial intelligence training is repeated, i.e. the procedure 2200 is repeated from block 2209.
When the performance of the artificial intelligence algorithm AI equals or exceeds the one or more minimum threshold values (output branch Y of block 2213), the identification of the selected cell type performed by the artificial intelligence algorithm AI is deemed reliable and the training of the artificial intelligence algorithm AI is concluded. As a result, the artificial intelligence algorithm AI is distributed to the workstation for use (block 2217).
However, it is clear that the above examples must not be interpreted in a limiting sense and the invention thus conceived is susceptible of numerous modifications and variations.
For example, it will be clear to a person skilled in the art that the system may comprise several workstations connected to the same digital microscope and/or server.
Alternatively or additionally, the system may comprise one or more user devices - such as personal computers, tablets, smartphones and the like - through which one or more human operators annotate sample WSIs for training or artificial intelligence algorithm correction procedures. In this case, the server is preferably configured to make the WSIs and the tools for performing annotations digitally available through a remotely accessible service, e.g. an online platform.
Similarly, a single one, or a combination of two or more, of the above procedures forms a method of analysing digital pathology tissue microscopy images. In addition, one or more steps of the same procedure or of different procedures may be performed in parallel with each other or according to an order different from the one described above.
Similarly, one or more optional steps can be added or removed from one or more of the procedures described above.
For example, in alternative embodiments, the analysis procedure involves first pre-processing all the tiles into which the WSI is divided and then determining all the partial annotations of the tiles.
In an embodiment of the present invention (not illustrated) there is a procedure for verifying the results of training performed by a human operator.
In particular, this procedure involves a human operator reviewing the annotations generated by the artificial intelligence algorithm and giving an assessment of the degree of correctness and/or correcting the annotation to modify it. The correct evaluations and/or annotations are then stored in the server's data storage module and used to retrain the artificial intelligence algorithm.
Furthermore, there is nothing to prevent the use of tiles and/or their annotations generated in previous iterations to optimize the operation of the artificial intelligence algorithm.
In one embodiment, the number of WSI tiles used for training is increased through dataset augmentation procedures. In other words, new tiles are generated by introducing noise, symmetrically reflecting and/or applying other artefacts to the tiles obtained from the WSI pyramid.
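A dataset augmentation step as described here might look like the sketch below, producing new tiles by symmetric reflection and added noise; the noise level is illustrative.

```python
import numpy as np

def augment_tile(tile: np.ndarray, rng: np.random.Generator):
    """Yield extra training tiles derived from one tile of the WSI pyramid."""
    yield np.fliplr(tile)                              # horizontal mirror
    yield np.flipud(tile)                              # vertical mirror
    noisy = tile.astype(np.float32) + rng.normal(0.0, 5.0, tile.shape)
    yield np.clip(noisy, 0, 255).astype(tile.dtype)    # noisy copy
```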
In a simplified embodiment, the annotations provided by human operators are rectangles enclosing the groups of cells of interest or portions thereof.
Naturally, all the details can be replaced with other technically-equivalent elements.
In conclusion, the materials used, as well as the contingent shapes and dimensions of the aforementioned devices, apparatuses and terminals, may be any according to the specific implementation requirements without thereby abandoning the scope of protection of the following claims.

Claims

1. Method (1000; 2100; 2200) of analysing a digital microscopy image, wherein said digital microscopy image depicts a biological tissue, the method comprising the steps of: by means of a digital microscope:
- acquiring (1001) a digital microscopy image of a biological tissue to be analysed, by means of a computer:
- dividing (1003) the digital microscopy image into a plurality of image portions,
- by means of an artificial intelligence algorithm:
- in each image portion, identifying (1007) at least one group of cells of the same type,
- generating (1009) an annotation superimposable on the digital microscopy image, where said annotation highlights said at least one group of cells of the same type when superimposed on the digital microscopy image, characterized in that it further comprises the step of:
- in each image portion, highlighting (1005) a contour or contour portion of at least one group of cells by means of a contour recognition operator, and wherein the step of generating (1009) an annotation superimposable on the digital microscopy image comprises:
- identifying (10091) the type of cells circumscribed by said contour of at least one group of cells, and
- using (10093) said contour to generate the annotation superimposable on the digital microscopy image.
2. The method (1000; 2100; 2200) according to claim 1, wherein the step of highlighting (1005) a contour or contour portion of at least one group of cells by means of a contour recognition operator comprises, for each image portion:
- normalizing (10051) the red, green and blue values of each pixel in the image portion,
- desaturating (10053) the normalized image portion, and
- applying (10055) a Sobel operator to the desaturated image portion.
3. The method (1000; 2100; 2200) according to claim 1 or 2, wherein the step of generating (1009) an annotation superimposable on the digital microscopy image comprises, for each image portion:
- generating an annotation point for each pixel comprised in the contour of the at least one group of cells of the same type, wherein each point comprises position information in a two-dimensional space corresponding to a position of the associated pixel in the image portion and an indication of the cell type of the group of cells, and
- generating the annotation superimposable on the digital microscopy image as a graph of annotation points comprising the indication of the same cell type as the group of cells.
4. The method (1000; 2100; 2200) according to claim 3, wherein generating (1009) the annotation superimposable on the digital microscopy image as a graph of annotation points comprises:
- generating a partial annotation for each image portion, said partial annotation comprising at least one annotation arc, and wherein said at least one arc is formed by a sequence of points adjacent to one another and comprised in a portion of said two-dimensional space corresponding to said image portion.
5. The method (1000; 2100; 2200) according to claim 4, wherein the step of generating (1009) an annotation superimposable on the digital microscopy image further comprises, for each annotation arc that does not define a closed line:
- generating a compound annotation arc by joining together annotation arcs of image portions adjacent to each other and comprising the indication of the same cell type of the group of cells, wherein an extreme point of each arc in a first image portion is adjacent to an extreme point of each arc in a second image portion adjacent to the first image portion.
6. The method (1000; 2100; 2200) according to claim 4 or 5, wherein the step of generating (1009) at least one between an annotation arc and a compound annotation arc comprises applying a nearest neighbour criterion to identify annotation points adjacent to each other.
7. The method (1000; 2100; 2200) according to any one of the preceding claims, wherein said artificial intelligence algorithm is a convolutional neural network selected from:
- ResNet,
- WideResNet,
- DenseNet,
- GoogleNet,
- ShuffleNet,
- MobileNet, and
- SqueezeNet, preferably, the convolutional neural network is selected from the subgroup comprising:
- ResNet;
- ShuffleNet, and
- MobileNet.
8. The method (1000; 2100; 2200) according to claims 7 and 4, wherein the artificial intelligence algorithm comprises a convolutional neural network of the ResNet type, wherein the last layer is modified to comprise:
- a two-dimensional linear convolution function, where the first dimension corresponds to the partial annotations of the image portions, and the second dimension corresponds to the image portions processed by means of said contour recognition operator;
- a linear activation function, and
- an average pooling function on said two dimensions.
9. Method (1000; 2100; 2200) according to claim 7 or 8, further comprising the steps of:
- providing (2201-2209) as input to the artificial intelligence algorithm a plurality of image portions of at least one digital microscopy image comprising at least one group of cells belonging to a cell type to be identified,
- providing (2201-2209) as input to the artificial intelligence algorithm a respective partial annotation associated with each of said image portions, wherein each partial annotation is generated from an annotation performed by a human operator on the digital microscopy image,
- iteratively training (2211-2217) the artificial intelligence algorithm to recognize the contour of said at least one group of cells by processing the plurality of image portions and corresponding partial annotations received as input, and wherein training the artificial intelligence algorithm involves performing a dropout operation, in which a randomly selected portion of nodes in hidden layers of the convolutional neural network is ignored during a predetermined number of training iterations.
10. System (1) for analysing a digital microscopy image, the system comprising: - a digital microscope (12), and
- a computer (10) connected to said digital microscope (12), wherein the computer (10) runs an artificial intelligence algorithm (AI) and is configured to implement the method according to any one of the preceding claims.
PCT/IB2021/059956 2021-10-28 2021-10-28 Digital microscopy tissue image analysis method and system for digital pathology WO2023073405A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2021/059956 WO2023073405A1 (en) 2021-10-28 2021-10-28 Digital microscopy tissue image analysis method and system for digital pathology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2021/059956 WO2023073405A1 (en) 2021-10-28 2021-10-28 Digital microscopy tissue image analysis method and system for digital pathology

Publications (1)

Publication Number Publication Date
WO2023073405A1 true WO2023073405A1 (en) 2023-05-04

Family

ID=79171108

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2021/059956 WO2023073405A1 (en) 2021-10-28 2021-10-28 Digital microscopy tissue image analysis method and system for digital pathology

Country Status (1)

Country Link
WO (1) WO2023073405A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020243545A1 (en) * 2019-05-29 2020-12-03 Leica Biosystems Imaging, Inc. Computer supported review of tumors in histology images and post operative tumor margin assessment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020243545A1 (en) * 2019-05-29 2020-12-03 Leica Biosystems Imaging, Inc. Computer supported review of tumors in histology images and post operative tumor margin assessment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OLIVIER DEBEIR ET AL: "Characterization of Posidonia Oceanica Seagrass Aerenchyma through Whole Slide Imaging: A Pilot Study", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 7 March 2019 (2019-03-07), XP081130720 *

Similar Documents

Chen et al. Banet: Bidirectional aggregation network with occlusion handling for panoptic segmentation
Margffoy-Tuay et al. Dynamic multimodal instance segmentation guided by natural language queries
Xie et al. Beyond classification: structured regression for robust cell detection using convolutional neural network
CN108229490B (en) Key point detection method, neural network training method, device and electronic equipment
Raza et al. Mimo-net: A multi-input multi-output convolutional neural network for cell segmentation in fluorescence microscopy images
US7949181B2 (en) Segmentation of tissue images using color and texture
KR20210097772A (en) Medical image segmentation method and device, electronic device and storage medium
Nateghi et al. A deep learning approach for mitosis detection: application in tumor proliferation prediction from whole slide images
Zhang et al. ReYOLO: A traffic sign detector based on network reparameterization and features adaptive weighting
CN109919149A (en) Object mask method and relevant device based on object detection model
Shaga Devan et al. Weighted average ensemble-based semantic segmentation in biological electron microscopy images
EP3686841B1 (en) Image segmentation method and device
CN113706514B (en) Focus positioning method, device, equipment and storage medium based on template image
Shao et al. A novel hybrid transformer-CNN architecture for environmental microorganism classification
JP2021533473A (en) Diagnosis result generation system and method
Zhou et al. Cross-scale collaborative network for single image super resolution
Jiao et al. Staining condition visualization in digital histopathological whole-slide images
WO2023073405A1 (en) Digital microscopy tissue image analysis method and system for digital pathology
Amorim et al. Analysing rotation-invariance of a log-polar transformation in convolutional neural networks
Shi et al. Modified U-net architecture for ischemic stroke lesion segmentation and detection
JP4957924B2 (en) Document image feature value generation apparatus, document image feature value generation method, and document image feature value generation program
KR102476888B1 (en) Artificial diagnostic data processing apparatus and its method in digital pathology images
Watkins et al. msemalign: a pipeline for serial section multibeam scanning electron microscopy volume alignment
Ritter et al. Multi-Channel Colocalization Analysis and Visualization of Viral Proteins in Fluorescence Microscopy Images
Yang et al. Depth super-resolution via fully edge-augmented guidance

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21835368

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 112024000072082

Country of ref document: IT

NENP Non-entry into the national phase

Ref country code: DE