WO2014102428A1 - Method for the automatic interpretation of images for the quantification of nuclear tumour markers - Google Patents

Method for the automatic interpretation of images for the quantification of nuclear tumour markers Download PDF

Info

Publication number
WO2014102428A1
WO2014102428A1 (PCT/ES2013/070920)
Authority
WO
WIPO (PCT)
Prior art keywords
image
staining
images
procedure according
segmentation
Prior art date
Application number
PCT/ES2013/070920
Other languages
Spanish (es)
French (fr)
Inventor
Jose Antonio PIEDRA FERNANDEZ
Manuel CANTÓN GARBÍN
Francisco Jose GOMEZ NAVARRO
Emilia MEDINA ESTEVEZ
Original Assignee
Universidad De Almeria
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Universidad De Almeria filed Critical Universidad De Almeria
Publication of WO2014102428A1 publication Critical patent/WO2014102428A1/en

Links

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B10/00Other methods or instruments for diagnosis, e.g. instruments for taking a cell sample, for biopsy, for vaccination diagnosis; Sex determination; Ovulation-period determination; Throat striking implements
    • A61B10/0041Detection of breast cancer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695Preprocessing, e.g. image segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20028Bilateral filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30024Cell structures in vitro; Tissue sections in vitro

Definitions

  • The present invention falls within the field of image-processing techniques and their application to the medical sector, in particular to procedures for processing images of tumor tissue samples stained with immunohistochemical (IHC) staining techniques for later analysis by an expert.
  • The Pathological Anatomy services of hospitals have specialist doctors who evaluate, by direct observation under a conventional optical microscope, the level of expression of nuclear tumor markers in tissue samples with immunohistochemical staining. This evaluation consists of determining the percentage of stained nuclei in a given region of the sample, as well as the intensity of their staining. This assessment gives the oncologist and other medical specialists information about the possible prognosis and treatment of the patient. These are therefore semi-quantitative, strongly subjective tests.
  • the present invention proposes a procedure where, starting from previously acquired images and subjected to known staining processes, the following steps are performed:
  • the following implementations are the most advantageous, although the invention is not limited to them:
  • In step a, the change from RGB to HSL.
  • In step b, the extraction of the L channel from the HSL.
  • In step c, the use of a bilateral filter.
  • For region-based segmentation (step e), the following steps are performed, which can be used independently or in combination with each other: automatic thresholding using the Otsu algorithm; region delimitation using the watershed algorithm; and application of morphological operations of erosion, dilation and their combinations for contour profiling.
  • For segmentation based on contour detection, one of the applicable algorithms is the Canny algorithm. This step may be followed by the application of morphological operations of erosion, dilation and their combinations for edge profiling.
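  The contour-based step can be illustrated with a small sketch. This is not the Canny algorithm itself (which adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding); it is a hypothetical, simplified gradient-magnitude edge detector followed by a 3x3 dilation standing in for the morphological edge profiling:

```python
# Simplified stand-in for the contour-detection step (illustration only, not
# the patent's implementation): Sobel gradient magnitude thresholding,
# followed by a 3x3 morphological dilation for edge profiling.

def sobel_edges(img, thresh):
    """Return a binary edge map: 1 where the gradient magnitude exceeds thresh."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 > thresh:
                edges[y][x] = 1
    return edges

def dilate3x3(mask):
    """Morphological dilation with a 3x3 square structuring element."""
    h, w = len(mask), len(mask[0])
    return [[max(mask[ny][nx]
                 for ny in range(max(0, y - 1), min(h, y + 2))
                 for nx in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]

# A dark "nucleus" block on a bright background: edges appear at its boundary.
img = [[200] * 12 for _ in range(12)]
for y in range(3, 9):
    for x in range(3, 9):
        img[y][x] = 40
edges = dilate3x3(sobel_edges(img, 100))
```

  On this toy image the edge map is 1 around the nucleus boundary and 0 both in the far background and deep inside the nucleus.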
  • the invention finds a practical application in the fast and reliable study of results, for example by presenting the original image with the colored pixels according to the criterion of Figure 12 next to the original image and the percentages of pixels classified according to their level of staining.
  • Figure 1 is an image of the area of interest of a sample with immunohistochemical staining for the determination of estrogen receptors (ER).
  • Figure 2 is the image resulting from the change of color space from RGB to HLS.
  • Figure 3 is an image of the L channel extracted from the HLS color space.
  • Figure 4 is the result of applying a bilateral filter to the L channel.
  • Figure 5 shows the result of equalizing the histogram of the image of the filtered L-channel.
  • Figure 6 is the image resulting from automatic thresholding.
  • Figure 7 represents the image resulting from the application of the watershed algorithm.
  • Figure 8 is the image resulting from applying morphological operations after the previous step.
  • Figure 9 is the result of applying Canny's algorithm to the image of the L channel.
  • Figure 10 is the definitive mask obtained from the sum of the region-based and contour-based segmentations.
  • Figure 11 is an image showing the area of interest corresponding to the nuclei.
  • Figure 12 shows the classification criteria for the nuclei and the color associated with the final overprint.
  • Figure 13 shows the final result, with the pixels colored according to the assigned staining level.
  • Figure 14 is a block diagram with the steps of the invention.
  • an image is first acquired by means of a microscope equipped with a camera or with a sample scanner.
  • The magnification is preferably, but not necessarily, 40x.
  • the pathologist selects the area of interest of the image to be analyzed, just as he would in the case of the conventional evaluation.
  • the preprocessing begins with the change of the color space of the selected image from RGB to another in which one of its channels represents the light intensity of the pixel, such as HSL, HSI, HSV, HSB or others.
  • HSL color space shown in Figure 2 is used.
  • The channel corresponding to the intensity value (L, I, V or B) is extracted, in this preferred example the L. Since the images are obtained under a bright-field microscope or with a scanner, the L channel of the HSL (Hue, Saturation, Lightness) color space is used as the basis for thresholding.
  • Figure 3 shows the grayscale image obtained from the L (Lightness) channel.
  • In this case the bilateral filter has been used. Filtering is one of the fundamental operations of image processing. The bilateral filter eliminates noise in the flat areas of the image, where the signal varies little, while preserving the edges of areas with large variations (Figure 4).
  • the next step is the equalization of the histogram.
  • The equalization is achieved by applying to the histogram of the grayscale image from the previous step a LUT (Look-Up Table), or assignment table, which redistributes it to occupy the full range of gray levels allowed by the image depth, thereby increasing the contrast.
  • the resulting image can be seen in Figure 5.
  • a segmentation based on regions is carried out.
  • the first step of this segmentation is the thresholding, followed by the delimitation of regions and contour profiling.
  • the objective of the thresholding is to obtain from the previous image in grayscale a monochrome image in which the nuclei can be distinguished from the rest of the elements.
  • the thresholding step is autonomous and does not require any parameters to be entered by the operator. To achieve this, the Otsu algorithm is applied. This method uses a threshold value of the gray level of the image to assign the different pixels the value 0 or 1, depending on whether their gray level is above this threshold value or not.
  • To calculate this threshold value, the method performs a statistical analysis of the gray-level histogram of the image; more precisely, it uses the variance as a measure of dispersion and computes the threshold that yields the least dispersion within each segment (the pixels to be assigned 0 or 1) and the greatest dispersion between the two segments.
  • the result of the thresholding is seen in Figure 6.
  • the next step is the delimitation of regions using the Watershed algorithm.
  • The watershed algorithm is based on the similarity between the intensity at a given point of the image and its height on a topographic map: points of greater intensity correspond to the highest zones, drawing the dividing lines along the ridges, while points of lesser intensity correspond to the valleys.
  • The mask is usually a simple square, a cross or a circle with the anchor point at its center.
  • In the erosion operation, the mask is superimposed at each point of the image and the local minimum is taken for each pixel; hence the name, because the effect is an erosion of the edges in the image.
  • The dilation operation is the inverse of erosion, since the local maximum is computed instead, resulting in the dilation or expansion of the edges.
  • The opening operation is a combination of the previous two and is obtained by first applying an erosion and then a dilation.
  • A fuzzy logic system is applied to determine the level of staining at each pixel of the image as a function of the levels obtained from each channel; the rules are of the type: if (red is high and green is low and blue is low) then (staining is strong). This defines a fuzzy inference system with three inputs and one output. The three inputs correspond to the intensity of each of the color planes of the RGB image at that pixel, and the output defines the level of staining. Each input has three membership functions, called low, medium and high, which are assumed to follow a normal distribution (other distributions are possible). The statistical parameters of each membership function are determined automatically for each plane of the color space from the statistical analysis of the histogram in the area of interest corresponding to the nuclei.
  • the inference rules that control the system are all of equal weight, although other weight criteria may apply.
  • The image of the area of interest (Figure 11) is traversed pixel by pixel, and the pixels of the nuclei are classified according to their level of staining as negative, weak, intermediate or strong.
  • The classification is made according to the pattern set out in Figure 12, developed for this case.
  • This criterion, which is intended as a basis for classifying nucleus staining, is extrapolated to the classification of pixels.
  • The level of staining is given by the intensity of the pixel's brown color; blue pixels indicate negative staining. Once a pixel is classified, it is colored in the original image according to the pattern of Figure 12, resulting in Figure 13.
  • The results of the classification give the percentages of pixels for each level of staining (negative, weak, intermediate or strong), as well as the final percentages of unstained pixels (negative staining) and stained pixels (weak, intermediate or strong staining).
  • the expert pathologist can validate the result obtained. These percentage values will be used by the pathologist as a reference to indicate the level of expression of the tumor markers according to one or more of the different assessment criteria for use in Pathological Anatomy laboratories, under the hypothesis that the pixel percentages correspond with the percentages of nuclei in each level of staining.
  • an automatic system can indicate the level of expression of the tumor marker analyzed according to the selected evaluation criteria.
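The fuzzy classification described above can be sketched in a few lines. The membership-function parameters (means, sigmas) and the rule base below are illustrative assumptions for a toy example; the patent derives the actual parameters from the histogram of the area of interest, and all rules carry equal weight as in the preferred embodiment:

```python
import math

# Hedged sketch of the fuzzy pixel-classification step. The Gaussian
# membership parameters and the rules below are assumptions for
# illustration, not values taken from the patent.

def gauss(x, mean, sigma):
    """Gaussian (normal) membership function."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

# low / medium / high membership for an 8-bit channel (assumed parameters).
MF = {"low": (40, 40), "medium": (128, 40), "high": (215, 40)}

def memberships(value):
    return {name: gauss(value, m, s) for name, (m, s) in MF.items()}

# Rules of the type "if (red is high and green is low and blue is low)
# then (staining is strong)"; AND is modelled with min(), equal weights.
RULES = [
    (("high", "low", "low"), "strong"),
    (("high", "medium", "low"), "intermediate"),
    (("high", "medium", "medium"), "weak"),
    (("low", "low", "high"), "negative"),      # bluish (hematoxylin) pixel
    (("medium", "medium", "high"), "negative"),
]

def classify_pixel(r, g, b):
    """Return the staining label of the rule with the highest activation."""
    mr, mg, mb = memberships(r), memberships(g), memberships(b)
    best_label, best_act = "negative", 0.0
    for (lr, lg, lb), label in RULES:
        act = min(mr[lr], mg[lg], mb[lb])
        if act > best_act:
            best_label, best_act = label, act
    return best_label

def staining_percentages(pixels):
    """Classify every (r, g, b) pixel and return the percentage per level."""
    counts = {"negative": 0, "weak": 0, "intermediate": 0, "strong": 0}
    for r, g, b in pixels:
        counts[classify_pixel(r, g, b)] += 1
    n = len(pixels)
    return {k: 100.0 * v / n for k, v in counts.items()}
```

On this toy rule base, a strongly brown pixel such as (220, 40, 40) classifies as "strong", while a bluish pixel such as (60, 60, 220) classifies as "negative"; the percentages are then reported per level as in the patent's final step.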

Abstract

The invention relates to a method for the automatic interpretation of images for the quantification of nuclear tumour markers, in which, using images that have been pre-tinted using known methods, the following steps are performed: changing from RGB colour space to another space in which one of the channels represents the light intensity of the pixel; extracting the channel corresponding to the intensity value; filtering the image; equalizing the histogram to increase the contrast; performing segmentation based on regions and detection of contours; combining the images obtained from the segmentation based on regions and the segmentation based on contours, in order to produce a mask that is applied to the original image so as to obtain the area of interest, formed only of the nuclei; and, finally, classifying the pixels. The invention improves workflows, ergonomics, comfort and productivity for experts in diagnostics.

Description

DESCRIPTION
METHOD FOR THE AUTOMATIC INTERPRETATION OF IMAGES FOR THE QUANTIFICATION OF NUCLEAR TUMOUR MARKERS
TECHNICAL SECTOR
The present invention falls within the field of image-processing techniques and their application to the medical sector, in particular to procedures for processing images of tumour tissue samples stained with immunohistochemical (IHC) staining techniques for later analysis by an expert. STATE OF THE ART
The Pathological Anatomy services of hospitals have specialist doctors who evaluate, by direct observation under a conventional optical microscope, the level of expression of nuclear tumour markers in tissue samples with immunohistochemical staining. This evaluation consists of determining the percentage of stained nuclei in a given region of the sample, as well as the intensity of their staining. This assessment gives the oncologist and other medical specialists information about the possible prognosis and treatment of the patient. These are therefore semi-quantitative, strongly subjective tests.
It is therefore desirable to have an automated image-processing procedure that serves as a diagnostic aid, improving workflow, ergonomics, comfort and productivity. OBJECT OF THE INVENTION
In order to provide such automated processing, the present invention proposes a procedure in which, starting from previously acquired images subjected to known staining processes, the following steps are performed:
a) change of RGB colour space to another space in which one of the channels represents the light intensity of the pixel, such as HSL, HSI, HSV or HSB;
b) extraction of the channel corresponding to the intensity value (L, I, V or B);
c) filtering of the image to eliminate noise while preserving edges;
d) equalization of the histogram to increase contrast;
e) segmentation based on regions;
f) segmentation based on contour detection;
g) sum of the images obtained from the region-based and contour-based segmentations to obtain a mask that is applied to the original image to obtain the area of interest, composed only of the nuclei;
h) classification of the pixels of the area of interest composed of the nuclei, according to their level of staining, into negative, weak, intermediate or strong staining, according to a predetermined system.
The following implementations are the most advantageous, although the invention is not limited to them:
In step a, the change from RGB to HSL.
In step b, the extraction of the L channel from the HSL. In step c, the use of a bilateral filter.
For region-based segmentation (step e), the following steps are performed, which can be used independently or in combination with each other:
Automatic thresholding using the Otsu algorithm.
Region delimitation using the watershed algorithm.
Application of morphological operations of erosion, dilation and their combinations for contour profiling.
For segmentation based on contour detection, one of the applicable algorithms is the Canny algorithm. This step may be followed by the application of morphological operations of erosion, dilation and their combinations for edge profiling.
For the classification of pixels according to their level of staining, it is advantageous to use a fuzzy logic system. The intensity levels of each channel of the colour space used (RGB) are taken as the inputs of the system, although other colour spaces could be used and might also improve the classification result.
The invention finds practical application in the fast and reliable study of results, for example by presenting, next to the original image, the same image with its pixels coloured according to the criterion of Figure 12, together with the percentages of pixels classified by staining level.
BRIEF DESCRIPTION OF THE FIGURES In order to aid a better understanding of the features of the invention, in accordance with a preferred example of its practical embodiment, the following description of a set of drawings is attached, in which the following is represented by way of illustration:
Figure 1: an image of the area of interest of a sample with immunohistochemical staining for the determination of estrogen receptors (ER).
Figure 2: the image resulting from the change of colour space from RGB to HLS.
Figure 3: the image of the L channel extracted from the HLS colour space.
Figure 4: the result of applying a bilateral filter to the L channel.
Figure 5: the result of equalizing the histogram of the filtered L-channel image.
Figure 6: the image resulting from automatic thresholding.
Figure 7: the image resulting from the application of the watershed algorithm.
Figure 8: the image resulting from applying morphological operations after the previous step.
Figure 9: the result of applying the Canny algorithm to the L-channel image.
Figure 10: the definitive mask obtained from the sum of the region-based and contour-based segmentations.
Figure 11: an image showing the area of interest corresponding to the nuclei.
Figure 12: the classification criteria for the nuclei and the colour associated with the final overprint.
Figure 13: the final result, with the pixels coloured according to the assigned staining level.
Figure 14: a block diagram with the steps of the invention.
DETAILED DESCRIPTION OF THE INVENTION To carry out the procedure of the invention, an image is first acquired by means of a microscope equipped with a camera, or with a sample scanner. The magnification is preferably, but not necessarily, 40x. The pathologist selects the area of interest of the image to be analysed, just as in the conventional evaluation.
Pre-processing then begins with the change of the colour space of the selected image from RGB to another in which one of the channels represents the light intensity of the pixel, such as HSL, HSI, HSV, HSB or others. In this implementation, the HSL colour space shown in Figure 2 is used.
Next, the channel corresponding to the intensity value (L, I, V or B) is extracted, in this preferred example the L. Since the images are obtained under a bright-field microscope or with a scanner, the L channel of the HSL (Hue, Saturation, Lightness) colour space is used as the basis for thresholding.
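Steps a and b can be sketched in a few lines. This is a minimal pure-Python illustration (no image library), not the patent's implementation; only the lightness channel is needed downstream, so hue and saturation are not computed:

```python
# Minimal sketch of the colour-space step: extract the HSL lightness (L)
# channel from an RGB image. L = (max(R, G, B) + min(R, G, B)) / 2.

def lightness(r, g, b):
    """HSL lightness of an 8-bit RGB pixel, as an integer gray level."""
    return (max(r, g, b) + min(r, g, b)) // 2

def l_channel(rgb_image):
    """Extract the grayscale L channel from an RGB image (list of rows of
    (r, g, b) tuples)."""
    return [[lightness(r, g, b) for (r, g, b) in row] for row in rgb_image]
```

For example, `lightness(200, 100, 50)` is `(200 + 50) // 2 = 125`; the resulting grayscale image corresponds to Figure 3.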
Figure 3 shows the grayscale image obtained from the L (Lightness) channel.
Next, the image is filtered. In this case the bilateral filter has been used. Filtering is one of the fundamental operations of image processing. The bilateral filter eliminates noise in the flat areas of the image, where the signal varies little, while preserving the edges of areas with large variations (Figure 4). The next step is the equalization of the histogram. The equalization is achieved by applying to the histogram of the grayscale image from the previous step a LUT (Look-Up Table), or assignment table, which redistributes it to occupy the full range of gray levels allowed by the image depth, thereby increasing the contrast. The resulting image can be seen in Figure 5.
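Steps c and d could be sketched as follows. The filter radius and sigma values are illustrative assumptions (a production implementation would normally use a library routine such as an OpenCV bilateral filter), but the structure matches the description above: a spatially- and intensity-weighted mean, then a LUT built from the cumulative histogram:

```python
import math

# Hedged sketch of filtering and equalization on a grayscale (L-channel)
# image stored as a list of rows of integers. Parameters are illustrative.

def bilateral(img, radius=1, sigma_s=1.0, sigma_r=30.0):
    """Bilateral filter: each output pixel is a mean of its neighbours,
    weighted by both spatial distance and intensity difference, so flat
    areas are smoothed while strong edges are preserved."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        wr = math.exp(-((img[ny][nx] - img[y][x]) ** 2)
                                      / (2 * sigma_r ** 2))
                        num += ws * wr * img[ny][nx]
                        den += ws * wr
            out[y][x] = int(round(num / den))
    return out

def equalize(img, depth=256):
    """Histogram equalization via a LUT built from the cumulative histogram,
    spreading the gray levels over the whole available range."""
    h, w = len(img), len(img[0])
    hist = [0] * depth
    for row in img:
        for v in row:
            hist[v] += 1
    cdf, total = [], 0
    for c in hist:
        total += c
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = h * w
    lut = [int(round((c - cdf_min) / max(n - cdf_min, 1) * (depth - 1)))
           for c in cdf]
    return [[lut[v] for v in row] for row in img]
```

A perfectly flat image passes through the bilateral filter unchanged, while the equalization stretches a narrow histogram to span the full 0..255 range.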
After pre-processing, region-based segmentation is carried out. The first step of this segmentation is thresholding, followed by the delimitation of regions and contour profiling. The objective of thresholding is to obtain, from the previous grayscale image, a monochrome image in which the nuclei can be distinguished from the rest of the elements. The thresholding step is autonomous and requires no parameters to be entered by the operator. To achieve this, Otsu's algorithm is applied. This method uses a threshold value of the gray level of the image to assign each pixel the value 0 or 1, depending on whether its gray level is above the threshold or not. To calculate this threshold, it performs a statistical analysis of the gray-level histogram of the image; more precisely, it uses the variance as a measure of dispersion and computes the threshold that yields the smallest dispersion within each segment (the pixels to be assigned 0 or 1) and the greatest dispersion between the two segments. The result of the thresholding is shown in Figure 6.
The next step is the delimitation of regions using the Watershed algorithm. Its operating principle is based on the analogy between the intensity at a given point of the image and its height on a topographic map: points with higher intensity correspond to the highest zones, drawing the dividing lines of the ridges, while points with lower intensity correspond to the valleys. Applying morphological operations starting from the valleys simulates a "flooding" of the image, which results in its segmentation at the confluence zones of the basins. The result can be seen in Figure 7.
The next step in the segmentation is to perform morphological operations to improve the result and ensure that the selected area belongs only to the nuclei, not to the stroma. A series of erosion and dilation operations and combinations thereof are applied; the result can be seen in Figure 8. Erosion and dilation are the two basic morphological operations and are commonly used for noise removal, isolation of individual elements, or grouping of scattered elements in an image. Other morphological operations are combinations of these. The erosion operation is the result of the convolution of the original image with a mask that has a defined anchor or reference point. The mask is usually a simple square, a cross, or a circle with the anchor point at its center. In the erosion operation, the mask is superimposed at each point of the image and the local minimum is taken for each pixel; this is why it is called erosion, because the effect is that the edges of the objects in the image are eroded. The dilation operation is the inverse of erosion, since what is computed is the local maximum, resulting in the expansion of the edges. The opening operation is a combination of the two and is obtained by applying first an erosion and then a dilation.
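By way of illustration only, the thresholding and morphological clean-up described above can be sketched in NumPy as follows. This is a minimal stand-in, not the implementation of the method: `otsu_threshold` selects the gray level that maximizes between-class variance (equivalent to minimizing within-class dispersion), and `opening` is an erosion followed by a dilation with a 3x3 square mask; all names are illustrative.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level maximizing between-class variance
    (equivalently, minimizing within-class dispersion)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_sigma = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        sigma_b = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if sigma_b > best_sigma:
            best_sigma, best_t = sigma_b, t
    return best_t

def erode(binary):
    """Binary erosion with a 3x3 square mask: a pixel stays 1 only if
    its whole 3x3 neighborhood is 1 (the local minimum)."""
    h, w = binary.shape
    p = np.pad(binary, 1, constant_values=0)
    out = np.ones_like(binary)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy: 1 + dy + h, 1 + dx: 1 + dx + w]
    return out

def dilate(binary):
    """Binary dilation with a 3x3 square mask (the local maximum)."""
    h, w = binary.shape
    p = np.pad(binary, 1, constant_values=0)
    out = np.zeros_like(binary)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy: 1 + dy + h, 1 + dx: 1 + dx + w]
    return out

def opening(binary):
    """Opening = erosion followed by dilation; removes small specks
    while preserving the area of larger objects."""
    return dilate(erode(binary))
```

On a bimodal image, the selected threshold separates the two gray-level populations, and an opening deletes isolated foreground pixels while restoring the extent of compact regions.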
Next, contour-based segmentation is applied. Starting again from the image obtained by extracting the L channel (Figure 3) (or the channel corresponding to light intensity in another color space), an edge-detection algorithm is applied to achieve a better delimitation of the nuclei, followed by morphological operations to profile them. For edge detection, an implementation of the zero-crossing of the Laplacian (the Canny algorithm) is used. The operation is carried out on the image obtained from the L-channel extraction and returns a binary image with the edges of the nuclei, to which a dilation operation is applied to thicken the edges, obtaining the result of Figure 9.
Summing the image obtained from region-based segmentation (Figure 8) with the inverted image obtained from contour-based segmentation (Figure 9) gives the definitive mask of the regions of interest produced by the segmentation process; the result is shown in Figure 10.
Once the definitive mask has been obtained, it is applied to the original image to obtain the area of interest composed of the nuclei, which is the one used for the subsequent processing. This result can be seen in Figure 11.
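Interpreting the combination of the two partial masks as a logical operation — the region-based result retained wherever the inverted contour-based result allows — the construction of the definitive mask and its application to the original image can be sketched as follows. This is an assumption-laden illustration, not the patented procedure; both masks are assumed to be binary images of the same size:

```python
import numpy as np

def final_mask(region_mask, contour_mask):
    """Definitive mask: the region-based result combined with the
    *inverted* contour-based result, so that detected edges cut the
    regions apart (the combination described for Figure 10)."""
    return region_mask & (1 - contour_mask)

def apply_mask(rgb, mask):
    """Keep the original RGB values inside the mask, zero elsewhere
    (the area of interest of Figure 11)."""
    return rgb * mask[..., None]
```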
From the analysis of the images of the samples with immunohistochemical staining and of the histograms of the nuclear area in the different planes of the RGB color space (other color spaces that also improve the classification result can be used), a series of rules has been determined that makes it possible to properly classify the pixels as stained or unstained. Within the stained pixels, three levels of staining can in turn be distinguished (weak, intermediate, and strong).
To keep the set of rules manageable, they are defined on the basis of the level of each of the image channels (Red, Green, Blue). For each channel, three possible levels are distinguished: low, medium, and high.
With the above premises, a fuzzy logic system is applied to determine the staining level of each pixel of the image as a function of the levels obtained from each channel; the rules are of the type: If (red is high and green is low and blue is low) then (staining is strong). A fuzzy inference system is defined with three inputs and one output. The three inputs correspond to the intensity of each of the color planes of the RGB image at that pixel, and the output defines the staining level. Each input has three membership functions, called low, medium, and high, which are assumed to follow a normal distribution (other distributions are possible). The statistical parameters of each membership function are determined automatically for each plane of the color space from the statistical analysis of its histogram in the area of interest corresponding to the nuclei. The inference rules that control the system all have equal weight, although other weighting criteria may be applied.
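A minimal sketch of such a three-input, one-output fuzzy classifier follows, assuming Gaussian membership functions. The membership parameters and the rule base below are illustrative placeholders; in the described method, the parameters are derived automatically from the channel histograms over the nuclear area, and the rule base would be more extensive:

```python
import numpy as np

def gauss(x, mu, sigma):
    """Gaussian membership degree of x in the fuzzy set (mu, sigma)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Illustrative (mu, sigma) per linguistic level; in the method these
# come from the histogram statistics of each RGB plane over the nuclei.
LEVELS = {"low": (50, 30), "medium": (128, 30), "high": (210, 30)}

def memberships(value):
    return {name: gauss(value, mu, s) for name, (mu, s) in LEVELS.items()}

def staining_level(r, g, b):
    """Evaluate equally weighted rules of the form
    'IF red is high AND green is low AND blue is low THEN strong'.
    AND is taken as min; the winning output is the rule with the
    highest firing strength (a simplified defuzzification)."""
    mr, mg, mb = memberships(r), memberships(g), memberships(b)
    rules = {  # illustrative rule base, not the one of the patent
        "strong":       min(mr["high"], mg["low"], mb["low"]),
        "intermediate": min(mr["high"], mg["medium"], mb["low"]),
        "weak":         min(mr["high"], mg["medium"], mb["medium"]),
        "negative":     min(mr["low"], mg["low"], mb["high"]),
    }
    return max(rules, key=rules.get)
```

A reddish-brown pixel fires the "strong" rule, while a bluish pixel (hematoxylin counterstain) fires the "negative" rule.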
Once the fuzzy inference system has been developed, the image of the area of interest (Figure 11) is traversed pixel by pixel, and the pixels of the nuclei are classified according to their staining level as negative, weak, intermediate, or strong.
The classification criterion follows the pattern shown in Figure 12, developed for this case. This criterion, which is intended to serve as a basis for classifying the staining of the nuclei, is extrapolated to the classification of pixels. The staining level is given by the intensity of brown of the pixel; pixels in blue indicate negative staining. Once a pixel has been classified, it is colored in the original image according to the pattern of Figure 12, producing Figure 13.
The results of the classification give the percentages of pixels at each staining level (negative, weak, intermediate, or strong). They also give the final percentages of unstained pixels (negative staining) and stained pixels (weak, intermediate, and strong staining). With the classification data and the comparison between the initial image (Figure 1) and the final image (Figure 13), the expert pathologist can validate the result obtained. These percentage values are used by the pathologist as a reference to indicate the expression level of the tumor markers according to one or more of the scoring criteria in use in pathology laboratories, under the hypothesis that the pixel percentages correspond to the percentages of nuclei at each staining level. Once the percentage data by staining level are available, an automatic system can indicate the expression level of the analyzed tumor marker according to the selected scoring criterion.
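Assuming per-pixel staining labels have already been produced, the percentages reported to the pathologist reduce to a count over the pixels of the area of interest; a sketch, with illustrative names:

```python
import numpy as np

LABELS = ("negative", "weak", "intermediate", "strong")

def staining_percentages(labels):
    """labels: array of per-pixel staining labels over the nuclear area.
    Returns the percentage of pixels at each level, plus the aggregate
    stained (weak + intermediate + strong) and unstained totals."""
    labels = np.asarray(labels)
    total = labels.size
    pct = {lab: 100.0 * np.count_nonzero(labels == lab) / total
           for lab in LABELS}
    pct["stained"] = pct["weak"] + pct["intermediate"] + pct["strong"]
    pct["unstained"] = pct["negative"]
    return pct
```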

Claims

CLAIMS
1. Method for the automatic interpretation of images for the quantification of nuclear tumor markers in which, starting from images previously stained by known procedures, the following steps are applied: a) change from the RGB color space to another space in which one of the channels represents the luminous intensity of the pixel, such as HSL, HSI, HSV, or HSB; b) extraction of the channel corresponding to the intensity value; c) filtering of the image to eliminate noise while preserving the edges; d) histogram equalization to increase contrast; e) region-based segmentation; f) segmentation based on contour detection; g) summing of the images obtained from the region-based segmentation and the contour-based segmentation to obtain a mask that is applied to the original image to obtain the area of interest composed only of the nuclei; h) classification of the pixels of the area of interest composed of the nuclei according to their staining level as negative, weak, intermediate, or strong staining according to a predetermined system.
2. Method according to claim 1, characterized in that step a is carried out by changing from the RGB space to HSL.
3. Method according to claim 2, characterized in that step b consists of extracting the L channel of the HSL space.
4. Method according to any of the preceding claims, characterized in that a bilateral filter is used in step c.
5. Method according to any of the preceding claims, characterized in that step e comprises the following stages: i. automatic thresholding using Otsu's algorithm; ii. delimitation of regions using the Watershed algorithm; iii. application of morphological operations of erosion, dilation, and combinations thereof for contour profiling.
6. Method according to any of the preceding claims, characterized in that step f is carried out by applying the Canny algorithm.
7. Method according to claim 6, characterized in that the application of the Canny algorithm is followed by the application of morphological operations of erosion, dilation, and combinations thereof for edge profiling.
8. Method according to any of the preceding claims, characterized in that the pixels are classified according to their staining level in step h by means of a fuzzy logic system.
PCT/ES2013/070920 2012-12-26 2013-12-23 Method for the automatic interpretation of images for the quantification of nucelar tumour markers WO2014102428A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
ES201232033A ES2481347B1 (en) 2012-12-26 2012-12-26 PROCEDURE FOR AUTOMATIC INTERPRETATION OF IMAGES FOR THE QUANTIFICATION OF NUCLEAR TUMOR MARKERS.  
ESP201232033 2012-12-26

Publications (1)

Publication Number Publication Date
WO2014102428A1 true WO2014102428A1 (en) 2014-07-03

Family

ID=51019932

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/ES2013/070920 WO2014102428A1 (en) 2012-12-26 2013-12-23 Method for the automatic interpretation of images for the quantification of nucelar tumour markers

Country Status (2)

Country Link
ES (1) ES2481347B1 (en)
WO (1) WO2014102428A1 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510499B (en) * 2018-02-08 2021-10-15 河南师范大学 Image threshold segmentation method and device based on fuzzy set and Otsu
CN111368854A (en) * 2020-03-03 2020-07-03 东南数字经济发展研究院 Method for batch extraction of same-class target contour with single color in aerial image
CN111611874B (en) * 2020-04-29 2023-11-03 杭州电子科技大学 Face mask wearing detection method based on ResNet and Canny

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0566915A1 (en) * 1992-04-21 1993-10-27 Dainippon Screen Mfg. Co., Ltd. Sharpness processing apparatus
JPH07203231A (en) * 1993-12-28 1995-08-04 Canon Inc Color picture processor
US20030031345A1 (en) * 2001-05-30 2003-02-13 Eaton Corporation Image segmentation system and method
US20090072098A1 (en) * 2007-09-17 2009-03-19 Inflight Investments Inc. Support bracket for mounting wires to floor beams of an aircraft
CN101699511A (en) * 2009-10-30 2010-04-28 深圳创维数字技术股份有限公司 Color image segmentation method and system
US20110043535A1 (en) * 2009-08-18 2011-02-24 Microsoft Corporation Colorization of bitmaps
US20110286654A1 (en) * 2010-05-21 2011-11-24 Siemens Medical Solutions Usa, Inc. Segmentation of Biological Image Data
CN102608016A (en) * 2012-04-13 2012-07-25 福州大学 Method for measuring average size of complicated particles based on Canny boundary detection


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HANGU YEO ET AL.: "An automated image segmentation and classification algorithm for immunohistochemically stained tumor cell nuclei", PROCEEDINGS OF THE SPIE, vol. 7259, no. 1-6, 2009, USA, pages 725948 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022233A (en) * 2016-10-28 2018-05-11 沈阳高精数控智能技术股份有限公司 A kind of edge of work extracting method based on modified Canny operators
CN108470343A (en) * 2017-02-23 2018-08-31 南宁市富久信息技术有限公司 A kind of improved method for detecting image edge
CN107230214A (en) * 2017-05-27 2017-10-03 西安电子科技大学 SAR image waters automatic testing method based on recurrence OTSU algorithms
CN107230214B (en) * 2017-05-27 2020-09-01 西安电子科技大学 SAR image water area automatic detection method based on recursive OTSU algorithm
CN107240084A (en) * 2017-06-14 2017-10-10 湘潭大学 A kind of removing rain based on single image method and device
CN107563384A (en) * 2017-08-31 2018-01-09 江苏大学 The recognition methods end to end of adhesion pig based on broad sense Hough clusters
CN110378866A (en) * 2019-05-22 2019-10-25 中国水利水电科学研究院 A kind of canal lining breakage image recognition methods based on unmanned plane inspection
WO2021164328A1 (en) * 2020-02-17 2021-08-26 腾讯科技(深圳)有限公司 Image generation method, device, and storage medium
US11847812B2 (en) 2020-02-17 2023-12-19 Tencent Technology (Shenzhen) Company Limited Image generation method and apparatus, device, and storage medium
CN111815664A (en) * 2020-07-08 2020-10-23 云南电网有限责任公司电力科学研究院 Fire point detection method and system
CN111815664B (en) * 2020-07-08 2023-10-17 云南电网有限责任公司电力科学研究院 Fire point detection method and system

Also Published As

Publication number Publication date
ES2481347A1 (en) 2014-07-29
ES2481347B1 (en) 2015-07-30

Similar Documents

Publication Publication Date Title
ES2481347B1 (en) PROCEDURE FOR AUTOMATIC INTERPRETATION OF IMAGES FOR THE QUANTIFICATION OF NUCLEAR TUMOR MARKERS.  
ES2711196T3 (en) Systems and procedures for the segmentation and processing of tissue images and extraction of tissue characteristics to treat, diagnose or predict medical conditions
Arslan et al. A color and shape based algorithm for segmentation of white blood cells in peripheral blood and bone marrow images
AU2013258519B2 (en) Method and apparatus for image scoring and analysis
Smaoui et al. A developed system for melanoma diagnosis
US20170140246A1 (en) Automatic glandular and tubule detection in histological grading of breast cancer
US20150186755A1 (en) Systems and Methods for Object Identification
Goceri et al. Quantitative validation of anti‐PTBP1 antibody for diagnostic neuropathology use: Image analysis approach
EP3140778B1 (en) Method and apparatus for image scoring and analysis
EP3271864B1 (en) Tissue sample analysis technique
Jose et al. A novel method for glaucoma detection using optic disc and cup segmentation in digital retinal fundus images
Su et al. Detection of tubule boundaries based on circular shortest path and polar‐transformation of arbitrary shapes
Faridi et al. Cancerous nuclei detection and scoring in breast cancer histopathological images
Lal et al. A robust method for nuclei segmentation of H&E stained histopathology images
WO2014006421A1 (en) Identification of mitotic cells within a tumor region
Jaffery et al. Performance analysis of image segmentation methods for the detection of masses in mammograms
Feng et al. An advanced automated image analysis model for scoring of ER, PR, HER-2 and Ki-67 in breast carcinoma
Avenel et al. Marked point processes with simple and complex shape objects for cell nuclei extraction from breast cancer H&E images
WO2014181024A1 (en) Computer-implemented method for recognising and classifying abnormal blood cells, and computer programs for performing the method
Çayır et al. Segmentation of the main structures in Hematoxylin and Eosin images
Pardo et al. Automated skin lesion segmentation with kernel density estimation
Hashim et al. Automatic segmentation of optic disc from color fundus images
Sundaresan et al. Adaptive super-candidate based approach for detection and classification of drusen on retinal fundus images
Kumar et al. Spectral Analysis for Diagnosing of Melanoma through Digital Image Processing
Leś et al. Automatic cell segmentation using L2 distance function

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13868197

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13868197

Country of ref document: EP

Kind code of ref document: A1