WO2023121572A1 - A deep learning-based chromosome image analysis method and system for improving the success of automatic karyotyping - Google Patents

A deep learning-based chromosome image analysis method and system for improving the success of automatic karyotyping

Info

Publication number
WO2023121572A1
WO2023121572A1 (Application No. PCT/TR2021/050194)
Authority
WO
WIPO (PCT)
Prior art keywords
chromosome
image analysis
chromosomes
deep learning
orientation
Prior art date
Application number
PCT/TR2021/050194
Other languages
French (fr)
Inventor
Abdulkerim ÇAPAR
Cengiz Kaan SAKKAF
Burak Yahya BUYRUKBİLEN
Original Assignee
Capar Abdulkerim
Sakkaf Cengiz Kaan
Buyrukbilen Burak Yahya
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Capar Abdulkerim, Sakkaf Cengiz Kaan, Buyrukbilen Burak Yahya filed Critical Capar Abdulkerim
Priority to PCT/TR2021/050194 priority Critical patent/WO2023121572A1/en
Publication of WO2023121572A1 publication Critical patent/WO2023121572A1/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16B BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
    • G16B40/00 ICT specially adapted for biostatistics; ICT specially adapted for bioinformatics-related machine learning or data mining, e.g. knowledge discovery or pattern finding

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)

Abstract

The invention is a cytogenetic image analysis system featuring deep learning technologies and comprising methods and processors that perform chromosome classification and orientation determination together, helping the expert reach a genetic diagnosis by placing the chromosomes in the correct places and in the correct orientation in the karyotype table.

Description

A DEEP LEARNING-BASED CHROMOSOME IMAGE ANALYSIS METHOD AND SYSTEM FOR IMPROVING THE SUCCESS OF AUTOMATIC KARYOTYPING
Technical Field
The invention is a cytogenetic image analysis system featuring deep learning technologies and comprising methods and processors that perform chromosome classification and orientation determination together, helping the expert reach a genetic diagnosis by placing the chromosomes in the correct places and in the correct orientation in the karyotype table.
State of the Art
The subject of the invention is a cytogenetic chromosome image analysis system that helps the expert make a genetic diagnosis by placing the chromosomes in the karyotype table in their correct places and with the correct orientation.
The field of use of the invention is the cytogenetic diagnostic laboratories of health institutions. The invention forms part of the chromosome image analysis systems found in these laboratories and used to aid genetic diagnosis.
Cytogenetic chromosome image analysis systems consist of the genetic diagnosis and analysis software and the hardware required to run this software. These systems interactively analyze images captured with a digital camera from slide samples prepared in the laboratory, and prepare, report and record the results that support genetic diagnosis.
The main parts of the system are a workstation computer, an imaging device (camera, scanner, etc.), a light microscope, supporting equipment for image acquisition (light source, cabin, moving stage, etc.) and the diagnostic and analysis software.
The state-of-the-art document WO2013192355 (A1), entitled "Computer-Assisted Karyotyping", describes a system and method for computer-assisted karyotyping that includes a processor which receives a digitized image of metaphase chromosomes for processing in an image processing module and a classifier module. The image processing module can include a segmentation function to extract individual chromosome images, a bend correction function for straightening chromosome images, and a feature selection function to distinguish between chromosome bands. The classifier module, which can be one or more trained kernel-based learning machines, takes the processed image and enables it to be classified as normal or abnormal. However, that invention does not solve the chromosome classification and orientation estimation problems together: it has no orientation determination function, and only the Support Vector Machine (SVM) method is used for chromosome classification.
The state-of-the-art document JP2020009402A (US 10769408 (B2)), entitled "Method And System For Automatic Chromosome Classification", aims to provide a method and system that can automatically classify a chromosome. With a residual convolutional recurrent attention neural network (Res-CRANN), the chromosome band structure is exploited as a band sequence. Res-CRANN can be trained end to end and extracts a feature vector sequence from the feature map created by the convolution layers of a residual neural network (ResNet). Each feature vector corresponds to visual features representing a chromosome band in a chromosome image. The feature vector sequence is given to a recurrent neural network (RNN) equipped with an attention mechanism. As the RNN learns the feature vector sequence, the attention modules concentrate on multiple regions of interest (ROI) of the sequence, each ROI being specific to a chromosome class marker. However, this chromosome classification invention, which uses the Res-CRANN feedback deep network structure, does not provide a solution to the chromosome orientation determination problem.
The state-of-the-art document WO03032238 (A1), entitled "Karyotype Processing Methods And Devices", discloses methods and devices for creating, maintaining, and exploiting a cytogenetic database. Patient data are entered into a database and associated with a unique patient identifier. Karyotypes are prepared for the patient and associated with the same unique patient identifier. The database includes a user interface that allows a user to select and redirect individual chromosomes to prepare the karyotype. In addition, the database can be used to create cytogenetic problems with the user interface. However, that invention, in which software, user interfaces and related processors are defined to enable the user to create the karyotype table manually in a digital environment, does not introduce an automatic chromosome classification method.
The publication entitled "Classification of Metaphase Chromosomes Using Deep Convolutional Neural Network", published in 2019 by Hu, X., Yi, W., Jiang, L., Wu, S., Zhang, Y., Du, J., Ma, T., Wang, T. and Wu, X. in the Journal of Computational Biology; the publication entitled "Extended ResNet and Label Feature Vector Based Chromosome Classification", published in 2020 by Wang, C., Yu, L., Zhu, X., Su, J. and Ma, F. in IEEE Access; the publication entitled "Varifocal-Net: A Chromosome Classification Approach Using Deep Convolutional Networks", published in 2019 by Qin, Y., Wen, J., Zheng, H., Huang, X., Yang, J., Song, N., Zhu, Y.M., Wu, L. and Yang, G.Z. in IEEE Transactions on Medical Imaging; and the publication entitled "Siamese Networks For Chromosome Classification", published in 2017 by Jindal, S., Gupta, G., Yadav, M., Sharma, M. and Vig, L. in the Proceedings of the IEEE International Conference on Computer Vision Workshops, are all state-of-the-art chromosome classification studies conducted with deep learning-based approaches. However, no study has been found that solves the problem of chromosome classification and orientation determination together for karyotyping, which is the subject of our invention.
As is known in the art, the purpose of chromosome image analysis software is to classify the detected metaphase chromosomes (1-22, X, Y) and place them in the karyotype table in the correct orientation. A genetic diagnosis can be made once the chromosomes are placed in the right places and in the correct orientation in the karyotype table. The systems known in the art make mistakes in determining chromosome classes and orientations, which increases the time needed for genetic diagnosis by requiring additional corrections from the analyst and can also lead to false diagnoses. The state of the art contains chromosome classification inventions, but none of them includes a method that solves chromosome classification and orientation determination together. The invention remedies this deficiency and helps to close the gap.
The invention comprises a deep learning-based method and processors that can perform chromosome classification and orientation determination concurrently in cytogenetic chromosome image analysis systems. To the best of the authors' knowledge, there is no similar method or system in the state of the art. Thus, the invention helps to eliminate this deficiency by solving the problem of chromosome classification and orientation determination together.
Aim of the Invention
The aim of the invention is to increase the performance of automatic chromosome classification, orientation determination and placement in the karyotype table by using deep learning technologies.
Another object of the invention is to introduce a new deep learning approach that can simultaneously detect both the class and the orientation of the chromosome.
Another purpose of the invention is to provide a deep learning-based method that solves the problem of chromosome classification and orientation determination together.
Description of the Drawings
Figure 1 Working Steps of a Cytogenetic Chromosome Image Analysis System
Figure 1A Visualization of Chromosome Metaphase with the Help of a Light Microscope and Camera
Figure 1B Image Enhancement
Figure 1C Chromosome Segmentation
Figure 1D Chromosome Straightening
Figure 1E Placing Chromosomes in the Karyotype Table by Chromosome Classification and Orientation Determination
Figure 1F Creating the Cytogenetic Diagnostic Report
Figure 2 A: Size Normalization
Figure 3 B: Invention-Specific Deep Convolutional Artificial Neural Networks
Figure 4 C: Chromosome Pairing Algorithm
Figure 5 Metaphase Chromosome Spread
Figure 6 Chromosomes Placed on the Karyotype Table
Figure 7 Chromosome Size Normalization
Brief Description of the Invention
The invention is a cytogenetic chromosome image analysis system and comprises the following steps:
• Algorithm iterations are started (it = 0) and continued until all chromosomes are labeled
Figure imgf000005_0001
Figure imgf000005_0002
• The pairs with the highest activation value for the j-th column of the matrix are found by processing each column of the A matrix.
Figure imgf000005_0003
Figure imgf000005_0004
Figure imgf000005_0005
Figure imgf000005_0006
• Each chromosome is placed in the karyotype table according to the predicted class label and orientation information.
The invention is a chromosome pairing system.
Figure imgf000005_0007
It comprises information about the presence of two homologous chromosomes of each class in the karyotype table.
It comprises the DCANN output activation values, which serve as class confidence values.
The invention is a cytogenetic chromosome image analysis method and comprises the following steps:
Figure imgf000005_0008
Figure imgf000006_0001
Figure imgf000006_0002
Detailed Description of the Invention
According to the invention, deep learning-based cytogenetic image analysis methods and processors are provided to help the expert reach a genetic diagnosis by performing chromosome classification and orientation determination together and placing the chromosomes in their correct places in the karyotype table.
The invention increases the performance of chromosome classification, orientation determination and placement in the karyotype table by using deep learning-based techniques. The method of the invention takes the segmented and straightened chromosome images as input. The method normalizes the size of the chromosome images, determines their class and orientation (straight/inverse) using a deep convolutional artificial neural network (DCANN) and a chromosome pairing algorithm, and places the chromosomes in the karyotype table.
"Size Normalization", which is mentioned here and also illustrated in Figure 2, is the process of converting the straightened chromosomes to a standard size so that they can be presented to DCNN as an input. In the invention, taking into account the average aspect ratio of the chromosomes, they are normalized to 32 pixels width and 96 pixels height dimensions. During normalization, the aspect ratio of the chromosome is preserved. The indicated dimensions are the values accepted as input by the artificial neural network designed in the invention and they are precise. The neural networks used in the invention have a convolutional structure. Convolution is performed by multiplying the convolution kernel coefficients and image pixel values. In gray level images, white pixels are represented as 255, black pixels as 0 intensity values, and intermediate values are expressed in gray level intensities between 1 and 254. To prevent obtaining high values during convolution multiplication on background pixels, all image pixel values have been subtracted from 255 to represent foreground pixels with high intensity values and background pixels with low intensity values. An example of straightened chromosome image and normalization result is shown in Figure 6.
The "DCANN (Deep Convolutional Artificial Neural Network)" mentioned here and illustrated in Figure 3 is designed within the scope of the invention to classify the straightened and normalized chromosome images and determine their orientation concurrently. The proposed DCANN, which is illustrated with a processor box in Figure 3, accepts a 32x96 pixel input image and has 37 specific layers and 48 output activations. Properties of each network layers are described in Table 1.
Table 1: Properties of proposed 37-Layer DCANN
Figure imgf000008_0001
There are seven different types of layers employed in the proposed DCANN.
Convolution Layer
The convolution layer uses information from adjacent pixels to down-sample the image into features by convolution; prediction layers are then used to predict the target values. Convolution filters, or kernels, run over the image and compute a dot product. Each filter extracts different features from the image.
Max Pooling Layer
The max pooling layer helps reduce the spatial size of the convolved features and also helps reduce over-fitting by providing an abstracted representation of them. It is a sample-based discretization process. It is similar to the convolution layer, but instead of taking a dot product between the input and the kernel, the maximum of the region of the input overlapped by the kernel is taken.
Figure imgf000009_0001
Fully Connected Layers
In a fully connected layer, every node of the input layer is connected to every node in the second layer. One or more fully connected layers are used at the end of a CNN. Adding a fully connected layer helps learn non-linear combinations of the high-level features produced by the convolutional layers.
Soft Max Loss Layer
The "loss layer" specifies how training penalizes the deviation between the predicted (output) and true labels and is normally the final layer of a neural network. Various loss functions appropriate for different tasks may be used.
Figure imgf000009_0002
Batch Normalization Layer
The batch normalization layer is used to increase the stability of a neural network. It normalizes the output of a previous activation layer by subtracting the batch mean and dividing by the batch standard deviation. Batch normalization adds two trainable parameters to each layer: the normalized output is multiplied by a "standard deviation" parameter, and a "mean" parameter is added.
Global Avg Pooling Layer
Global average pooling is an operation that calculates the average output of each feature map in the previous layer. This simple operation reduces the data significantly and prepares the model for the final classification layer. Like the max pooling layer, it has no trainable parameters.
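The exact 37-layer configuration is given only in Table 1, which appears as an image in the publication. The PyTorch sketch below therefore merely illustrates how the layer types described above (convolution, batch normalization, max pooling, global average pooling, fully connected and softmax layers) can be combined into a network that maps a 32x96 grayscale input to 48 output activations; the number of blocks, channel widths and kernel sizes are assumptions, not the patented configuration.

```python
import torch
import torch.nn as nn

class DCANNSketch(nn.Module):
    """Illustrative stand-in for the invention's DCANN: 1x96x32 input, 48 outputs.
    Layer counts, channel widths and kernel sizes are assumptions."""
    def __init__(self, num_outputs=48):
        super().__init__()
        def block(c_in, c_out):
            # convolution -> batch normalization -> ReLU, as described in the text
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
            )
        self.features = nn.Sequential(
            block(1, 16),  nn.MaxPool2d(2),   # 96x32 -> 48x16
            block(16, 32), nn.MaxPool2d(2),   # 48x16 -> 24x8
            block(32, 64), nn.MaxPool2d(2),   # 24x8  -> 12x4
            block(64, 128),
            nn.AdaptiveAvgPool2d(1),          # global average pooling
        )
        self.classifier = nn.Linear(128, num_outputs)  # fully connected layer

    def forward(self, x):
        # x: (batch, 1, 96, 32) tensor of normalized chromosome images
        z = self.features(x).flatten(1)
        return torch.softmax(self.classifier(z), dim=1)  # 48 activation values

# Example: one normalized 32x96 chromosome image -> 48 activations
dummy = torch.rand(1, 1, 96, 32)
activations = DCANNSketch()(dummy)   # shape (1, 48)
```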
The proposed network produces a 48-length activation vector at the output layer. A healthy human metaphase has 46 chromosomes, i.e. 23 homologous pairs. There are 24 classes of chromosomes, including the sex chromosomes, in a normal human metaphase. These chromosomes are labeled as 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, X and Y. The reason for the existence of 48 classes in the specially designed DCANN is that the up-down version of each chromosome is added as a separate class for each chromosome label. Thus, each chromosome type is expressed by two classes. The network output labels are designed as follows: 1, 1', 2, 2', 3, 3', 4, 4', 5, 5', 6, 6', 7, 7', 8, 8', 9, 9', 10, 10', 11, 11', 12, 12', 13, 13', 14, 14', 15, 15', 16, 16', 17, 17', 18, 18', 19, 19', 20, 20', 21, 21', 22, 22', X, X', Y, Y'. Here the (') sign denotes the up-down version of the corresponding chromosome class label. The invention-specific DCANN has the ability to simultaneously detect both the class and the orientation of each chromosome. When the network is run with a chromosome image of unknown class and orientation, 48 activation values are generated at the network output layer. The N chromosomes in a metaphase are fed into the network one by one, and the 48 activation values obtained for each chromosome are collected into an N×48 activation matrix A.
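As an illustration of this 48-class labelling scheme, an output index can be decoded into a chromosome class and an orientation flag as sketched below; the interleaved index ordering (class, then its up-down version) is an assumption, since the publication only lists the label names.

```python
# Assumed ordering of the 48 DCANN outputs: 1, 1', 2, 2', ..., 22, 22', X, X', Y, Y'
CLASS_NAMES = [str(i) for i in range(1, 23)] + ["X", "Y"]   # 24 chromosome classes

def decode_output(index):
    """Map a DCANN output index (0..47) to (class label, is_up_down)."""
    class_label = CLASS_NAMES[index // 2]
    is_up_down = (index % 2 == 1)          # the (') up-down version of the class
    return class_label, is_up_down

# e.g. decode_output(0) -> ('1', False); decode_output(47) -> ('Y', True)
```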
Figure imgf000010_0001
The proposed chromosome pairing algorithm determines the classes and orientations of the chromosomes from the A matrix created by the DCANN, as shown in Figure 4. The algorithm is based on the knowledge that there are two homologous chromosomes of each class in a human karyotype table. The other information the algorithm relies on is the DCANN output activation values. The chromosome pairing algorithm determines the class and orientation of the chromosomes by taking the 48 activation values obtained for each chromosome in a metaphase; these activations are treated as class confidence values.
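The exact iteration steps of the pairing algorithm are reproduced only as image figures in the publication (referenced below and in Figure 4), so the following is merely a plausible greedy sketch consistent with the description: per still-unused class, it looks for the pair of unassigned chromosomes with the highest summed activation over both orientations of that class, assigns them, and repeats until all chromosomes are labeled. The column ordering matches the assumption made in the earlier decoding snippet.

```python
import numpy as np

def pair_chromosomes(A, class_names=None):
    """Greedy pairing sketch (not the exact patented algorithm).

    A: (N, 48) activation matrix; columns 2k and 2k+1 hold the activations of
       the normal and up-down ("'") versions of class k (assumed ordering).
    Returns a list with one (class_label, is_up_down) tuple per chromosome.
    """
    if class_names is None:
        class_names = [str(i) for i in range(1, 23)] + ["X", "Y"]
    labels = [None] * A.shape[0]
    unassigned = set(range(A.shape[0]))
    free_classes = set(range(24))             # each class is used for at most one pair
    while unassigned and free_classes:
        best = None                            # (pair score, class k, chromosome i, chromosome j)
        for k in free_classes:
            scores = A[:, 2 * k:2 * k + 2].max(axis=1)   # best orientation per chromosome
            cand = sorted(unassigned, key=lambda i: scores[i], reverse=True)[:2]
            if len(cand) == 1:
                cand = cand * 2                # a single leftover chromosome
            pair_score = scores[cand[0]] + scores[cand[1]]
            if best is None or pair_score > best[0]:
                best = (pair_score, k, cand[0], cand[1])
        _, k, i, j = best
        free_classes.discard(k)
        for idx in {i, j}:
            up_down = bool(A[idx, 2 * k + 1] > A[idx, 2 * k])   # the (') version wins
            labels[idx] = (class_names[k], up_down)
            unassigned.discard(idx)
    return labels
```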
Figure imgf000010_0002
Figure imgf000011_0001
• Each chromosome is placed in the karyotype table according to the predicted class label and orientation information.
Figure imgf000011_0002
The invention will work as a part of the chromosome image analysis systems in cytogenetic diagnostic laboratories and will determine the location and orientation of the chromosomes in the karyotype table by taking the straightened chromosome images from the image analysis system and processing them. The source code and processors of the invention will work in integration with the chromosome image analysis system.

Claims

1. A cytogenetic chromosome image analysis system, comprising:
Figure imgf000012_0001
2. A chromosome pairing system according to claim 1, and characterized by
Figure imgf000012_0002
Figure imgf000013_0001
4. A chromosome pairing method according to claim 2, and characterized by
Figure imgf000013_0002
PCT/TR2021/050194 2021-12-24 2021-12-24 A deep learning-based chromosome image analysis method and system for improving the success of automatic karyotyping WO2023121572A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/TR2021/050194 WO2023121572A1 (en) 2021-12-24 2021-12-24 A deep learning-based chromosome image analysis method and system for improving the success of automatic karyotyping

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/TR2021/050194 WO2023121572A1 (en) 2021-12-24 2021-12-24 A deep learning-based chromosome image analysis method and system for improving the success of automatic karyotyping

Publications (1)

Publication Number Publication Date
WO2023121572A1 (en) 2023-06-29

Family

ID=86903485

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/TR2021/050194 WO2023121572A1 (en) 2021-12-24 2021-12-24 A deep learning-based chromosome image analysis method and system for improving the success of automatic karyotyping

Country Status (1)

Country Link
WO (1) WO2023121572A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2018201476B2 (en) * 2017-07-19 2020-01-30 Tata Consultancy Services Limited Crowdsourcing and deep learning based segmenting and karyotyping of chromosomes

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2018201476B2 (en) * 2017-07-19 2020-01-30 Tata Consultancy Services Limited Crowdsourcing and deep learning based segmenting and karyotyping of chromosomes

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QIN YULEI; WEN JUAN; ZHENG HAO; HUANG XIAOLIN; YANG JIE; SONG NING; ZHU YUE-MIN; WU LINGQIAN; YANG GUANG-ZHONG: "Varifocal-Net: A Chromosome Classification Approach Using Deep Convolutional Networks", IEEE TRANSACTIONS ON MEDICAL IMAGING, IEEE, USA, vol. 38, no. 11, 1 November 2019 (2019-11-01), USA, pages 2569 - 2581, XP011755656, ISSN: 0278-0062, DOI: 10.1109/TMI.2019.2905841 *

Similar Documents

Publication Publication Date Title
US11756318B2 (en) Convolutional neural networks for locating objects of interest in images of biological samples
CN110349147B (en) Model training method, fundus macular region lesion recognition method, device and equipment
Abid et al. A survey of neural network based automated systems for human chromosome classification
Tang et al. CT image enhancement using stacked generative adversarial networks and transfer learning for lesion segmentation improvement
CN108231190B (en) Method of processing image, neural network system, device, and medium
CN113728335A (en) Method and system for classification and visualization of 3D images
US11182899B2 (en) Systems and methods for processing electronic images to detect contamination
Sarrafzadeh et al. Circlet based framework for red blood cells segmentation and counting
CN110059656B (en) Method and system for classifying white blood cells based on convolution countermeasure generation neural network
CN107049239A (en) Epileptic electroencephalogram (eeg) feature extracting method based on wearable device
Zhang et al. Urine sediment recognition method based on multi-view deep residual learning in microscopic image
Tavakoli et al. Automated detection of microaneurysms in color fundus images using deep learning with different preprocessing approaches
CN108665963A (en) A kind of image data analysis method and relevant device
Ayma et al. An adaptive filtering approach for segmentation of tuberculosis bacteria in Ziehl-Neelsen sputum stained images
Sudha et al. A novel approach for segmentation and counting of overlapped leukocytes in microscopic blood images
Cao et al. Supervised contrastive pre-training formammographic triage screening models
Abbas et al. Detection and classification of malignant melanoma using deep features of NASNet
WO2023121572A1 (en) A deep learning-based chromosome image analysis method and system for improving the success of automatic karyotyping
Prakash et al. An identification of abnormalities in dental with support vector machine using image processing
Zhao et al. A survey of sperm detection techniques in microscopic videos
Mazumder et al. Classification and detection of plant leaf diseases using various deep learning techniques and convolutional neural network
Rodrigues et al. X-Ray cardiac angiographic vessel segmentation based on pixel classification using machine learning and region growing
Doering et al. Automatic detection and counting of malaria parasite-infected blood cells
Preethi et al. Malaria parasite enumeration and classification using convolutional neural networking
Tabtaba et al. Diabetic retinopathy detection using developed hybrid cascaded multi-scale DCNN with hybrid heuristic strategy

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21969176

Country of ref document: EP

Kind code of ref document: A1