TW202032574A - Method and system for classifying cells and medical analysis platform - Google Patents

Info

Publication number
TW202032574A
TW202032574A
Authority
TW
Taiwan
Prior art keywords
cell
data
image data
classification
analysis platform
Prior art date
Application number
TW108106571A
Other languages
Chinese (zh)
Inventor
劉家宏
曾韋霖
Original Assignee
沛智生醫科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 沛智生醫科技股份有限公司 filed Critical 沛智生醫科技股份有限公司
Priority to TW108106571A priority Critical patent/TW202032574A/en
Priority to CN201910510908.XA priority patent/CN111612027A/en
Publication of TW202032574A publication Critical patent/TW202032574A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a method for classifying cells. The method includes: performing an image training and tuning operation with a medical analysis platform so that the medical analysis platform includes at least one deep-learning model for cell image classification; obtaining at least one item of cell image data corresponding to a cell; inputting the cell image data into the at least one deep-learning model in the medical analysis platform; processing, by the medical analysis platform, a plurality of cell-mass images corresponding to the cell image data to obtain a plurality of feature values; and combining the feature values to generate classification data for determining the type of the cell. A system for classifying cells and a medical analysis platform are also disclosed herein.

Description

Cell classification method, system and medical analysis platform

The present disclosure relates to a classification method and system, and more particularly to a cell classification method and system.

Artificial reproduction technology continues to advance. Assisted reproductive technology, in which embryos are cultured through in vitro fertilization and then transferred back into the uterus, offers infertile couples an option for having children. Embryo quality is a key factor in whether an embryo successfully implants in the uterus. Conventionally, physicians or embryologists judge embryo quality by eye, but different physicians or embryologists may reach different judgments on the same embryo image owing to differences in training and experience. How to improve the accuracy and efficiency of embryo quality assessment is therefore an important issue.

Some embodiments of the present disclosure relate to a cell classification method. The method includes: performing an image training and tuning operation with a medical analysis platform so that the medical analysis platform includes at least one deep learning model for cell image classification; obtaining at least one item of cell image data corresponding to a cell; inputting the cell image data into the at least one deep learning model in the medical analysis platform; processing, through the medical analysis platform, a plurality of cell-mass images corresponding to the cell image data to obtain a plurality of feature data; and combining the feature data to generate classification data, corresponding to the feature data, for determining the type of the cell.

Other embodiments of the present disclosure relate to a cell classification system. The system includes a terminal and a medical analysis platform. The terminal obtains at least one item of cell image data corresponding to a cell. The medical analysis platform, connected to the terminal, classifies the cell image data received from the terminal and returns classification data for the cell to the terminal. The medical analysis platform includes a processor, which processes a plurality of cell-mass images of the cell image data to obtain a plurality of feature data, and combines the feature data to generate classification data, corresponding to the feature data, for determining the type of the cell.

To make the above and other objects, features, advantages, and embodiments of the present disclosure easier to understand, the reference numerals are described as follows:

100‧‧‧Cell classification method
S110, S120, S130, S140, S150, S160, S170, S180‧‧‧Steps
400‧‧‧Cell classification system
410‧‧‧Terminal
412‧‧‧Image acquisition device
414‧‧‧Input/output device
416‧‧‧Communication device
420‧‧‧Network
430‧‧‧Medical analysis platform
432‧‧‧Server
434‧‧‧Processor

To make the above and other objects, features, advantages, and embodiments of the present disclosure easier to understand, the accompanying drawings are described as follows: FIG. 1 is a flowchart of a cell classification method according to an embodiment of the present disclosure; FIG. 2 is a schematic diagram of statistical classification data according to an embodiment of the present disclosure; FIG. 3 is a schematic diagram illustrating classification of cell image data according to another embodiment of the present disclosure; and FIG. 4 is a schematic diagram illustrating a cell classification system according to an embodiment of the present disclosure.

The spirit of the present disclosure is illustrated below with drawings and detailed descriptions. Anyone with ordinary skill in the art who understands the preferred embodiments of the present disclosure may make changes and modifications based on the techniques taught herein without departing from the spirit and scope of the present disclosure.

It should be understood that, in the description herein and in all of the claims that follow, when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements. Moreover, "electrically connected" or "connected" can also refer to interoperation or interaction between two or more elements.

It should be understood that, in the description herein and in all of the claims that follow, although the terms "first", "second", and so on may be used to describe different elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly a second element could be termed a first element, without departing from the scope of the embodiments.

It should be understood that, in the description herein and in all of the claims that follow, the terms "comprise", "include", "have", "contain", and the like are open-ended terms, meaning "including but not limited to".

It should be understood that, in the description herein and in all of the claims that follow, "and/or" includes any one of the associated listed items and all combinations thereof.

It should be understood that, in the description herein and in all of the claims that follow, unless otherwise defined, all terms used (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms such as those defined in commonly used dictionaries should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art, and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

If any element in the claims does not explicitly state "means for" performing a specified function, or "step for" performing a specified function, it should not be interpreted as a means-plus-function limitation.

Please refer to FIG. 1, a flowchart of a cell classification method 100 according to an embodiment of the present disclosure. As shown in FIG. 1, the cell classification method 100 first executes step S110, in which cell images used as training images, together with their corresponding classification data, are collected. In some embodiments, a microscope or another image-capturing device is used to collect image data of embryos that have developed to the blastocyst stage, and medical professionals, such as obstetricians or embryologists, then annotate each cell image with classification data describing the quality of the blastocyst.

Specifically, judging the quality of a blastocyst requires observing three types of features separately and independently. The first is the fullness of the blastocoel and the state of the zona pellucida, graded from 1 to 6 according to the degree of blastocoel expansion: grade 1 means the blastocoel occupies less than 50% of the total embryo volume, while grade 6 means the hatched blastocyst has completely escaped from the zona pellucida, the most mature blastocyst form. The second is the number and arrangement of the inner cell mass (ICM), graded A, B, or C: grade A indicates many tightly packed cells and the best quality, while grade C indicates few cells. The third is the number and arrangement of trophectoderm cells (TE), likewise graded A, B, or C: grade A indicates many tightly connected cells and the best quality, while grade C indicates few, large cells. Thus, in some embodiments, the cell image data of each blastocyst carries three feature classifications; for example, "5AA" denotes expansion grade 5, ICM grade A, and TE grade A. Because the inner cell mass develops into the fetus and the trophectoderm develops into the placenta, cell quality bears directly on fetal development. A higher-quality blastocyst also implies a better implantation rate and a lower risk of miscarriage and ectopic pregnancy.
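The three-part grade described above can be captured in a few lines of code. As a minimal sketch (the grade-string format such as "5AA" comes from this disclosure; the function and constant names are illustrative only):

```python
# Minimal sketch: split a blastocyst grade such as "5AA" into the three
# independent components described above. Names are illustrative, not
# from the patent.
EXPANSION_GRADES = {"1", "2", "3", "4", "5", "6"}  # blastocoel expansion
CELL_GRADES = {"A", "B", "C"}                      # ICM and TE quality

def parse_grade(grade: str) -> dict:
    """Split e.g. '5AA' into expansion, ICM and trophectoderm grades."""
    if len(grade) != 3:
        raise ValueError(f"expected a 3-character grade, got {grade!r}")
    expansion, icm, te = grade[0], grade[1], grade[2]
    if expansion not in EXPANSION_GRADES:
        raise ValueError(f"invalid expansion grade {expansion!r}")
    if icm not in CELL_GRADES or te not in CELL_GRADES:
        raise ValueError(f"invalid ICM/TE grade in {grade!r}")
    return {"expansion": expansion, "icm": icm, "te": te}
```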

Next, in step S120, the training images obtained in S110 are preprocessed for subsequent analysis and training. In some embodiments, each image is converted into a 264-by-198-pixel file while the aspect ratio of the original image is preserved, after which histogram equalization may be applied to even out the brightness of the image while also enhancing local contrast.
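A hedged sketch of this preprocessing step in plain NumPy (nearest-neighbor resizing stands in for whatever resampler the platform actually uses; the 264 x 198 target size and histogram equalization come from the paragraph above):

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbor resize: a simple stand-in for a real resampler."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows[:, None], cols]

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

def preprocess(img: np.ndarray) -> np.ndarray:
    # Target size from the disclosure: 264 pixels wide by 198 pixels high.
    return equalize_histogram(resize_nearest(img, 198, 264))
```

A production pipeline would typically use a library resampler and possibly local (CLAHE-style) equalization for the local-contrast enhancement the paragraph mentions.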

On the other hand, in step S130, a pre-trained neural network model is constructed. In some embodiments, this construction first takes the convolutional layers and their weights from a neural network pre-trained for the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) as the base of the model. For example, the neural network may be ResNet50, a residual neural network pre-trained on the ImageNet database. Then, in some embodiments, to meet the needs of blastocyst quality assessment, new neural network layers, such as fully connected (FC) layers, are attached to the aforementioned base to form a neural network classifier for blastocyst quality classification. One or more machine-learning libraries, such as Keras or TensorFlow, may be used to implement the neural network classifier. It should be noted that aspects of the present disclosure are not limited in this respect: the neural network classifier is not limited to a single neural network, and may be any other suitable type of classifier, or a combination thereof. For example, the neural network classifier may be a combination of a convolutional neural network (CNN) and a recurrent neural network (RNN).

Next, the preprocessed cell image data are fed into the pre-trained neural network model, and step S140 performs an image training and tuning operation to produce a neural-network-based deep learning model. In some embodiments, this operation first divides the preprocessed cell image data, together with their corresponding classification data, into a training set, a validation set, and a test set, and then inputs the cell images of the training set into the convolutional neural network model. The input images are processed by alternating convolutional and pooling layers to extract feature maps or feature values, and at least one fully connected layer processes the feature maps to produce a predicted classification for each cell image. In some embodiments, because the predicted classification deviates from the actual classification annotated by a medical professional, a loss function and an optimizer are defined; backpropagation is used to compute the error between the predicted and actual classifications and propagate it back toward the input layer of the network, and the process is iterated to train the model until the optimal combination of parameters and model structure is found. Finally, all of the cell image data and classification data, together with the aforementioned optimal parameters and model structure, are used to train and output the final model.

It should be noted that, taking some embodiments of the present disclosure as an example, in step S140 different deep learning models can be produced by attaching different neural network layers (e.g., one each for the blastocoel and zona pellucida, the inner cell mass, and the trophectoderm of the blastocyst). Specifically, one item of cell image data can be input separately into three deep learning models whose neural network classifiers use different fully connected layers; after the training and optimization process, three final models are output. It should be understood that the embodiments described herein are provided merely for ease of understanding, and the implementations of the present disclosure are not limited thereto.

Next, in step S150, a medical analysis platform provides a cell quality assessment service. In some embodiments, the medical analysis platform includes the at least one deep learning model described in step S140 for cell image classification. The platform may also perform the image training and tuning operation of step S140 on one or more deep learning models.

After step S150, the platform can receive cell images and feed them into its deep learning models to assess cell quality, that is, to perform the cell image classification operations described in some embodiments of the present disclosure.

The above cell images can be obtained in various ways; that is, depending on the needs of subsequent analysis, single or multiple cell images can be obtained by operating an image acquisition device at different times, or with different methods or parameters. In some embodiments, the image acquisition device may be a camera, a microscope, a CCD camera, or any device with equivalent functionality. In some embodiments of the present disclosure, by adjusting the focal distance of a microscope, multiple items of cell image data corresponding to different focal planes along the vertical axis of the blastocyst can be acquired from deep to shallow, forming a depth sequence of cell images. The microscope may be adjusted either under program control or manually; for example, a cell image may be captured each time the focal distance decreases by 10 micrometers and stored in a storage device. After sweeping the focal distance across a particular focal range, the photographs taken within that range are assembled into a depth sequence of cell images ordered by focal distance and input together into the deep learning model of the medical analysis platform for processing.
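The focal sweep can be sketched as a simple acquisition loop. Here `capture_at` is a hypothetical stand-in for the microscope or camera driver (not an API named in this disclosure), and the 10-micrometer step comes from the example above:

```python
# Sketch of the focal-sweep acquisition described above. `capture_at` is a
# hypothetical callback into the microscope driver; depths are in
# micrometers, swept from deep to shallow.
def acquire_depth_sequence(capture_at, start_um: int, stop_um: int,
                           step_um: int = 10) -> list:
    """Capture one frame per focal plane, returning (depth, image) pairs
    ordered by focal distance as the disclosure describes."""
    frames = []
    for focus in range(start_um, stop_um - 1, -step_um):  # deep -> shallow
        frames.append((focus, capture_at(focus)))
    return frames
```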

In other embodiments, the image acquisition device may be controlled under program control, or the capture times recorded manually, to sequentially obtain multiple items of cell image data for the blastocyst at different points in time, forming a time sequence of cell images. For example, a blastocyst image may be captured every 10 minutes and stored in a storage device. After a particular period has elapsed, the photographs taken within that period are assembled into a time sequence of cell images ordered by capture time and input together into the deep learning model of the medical analysis platform for processing.
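Likewise, the time-lapse acquisition reduces to a capture loop ordered by timestamp. `capture` is again a hypothetical driver callback; with the 10-minute interval from the example above, a 2-day run yields 288 frames, matching the sequence length used later in this disclosure:

```python
# Sketch of time-lapse acquisition: one frame every `interval_min` minutes,
# returned as (timestamp, image) pairs ordered by capture time. `capture`
# is a hypothetical stand-in for the camera driver.
def acquire_time_sequence(capture, duration_min: int,
                          interval_min: int = 10) -> list:
    return [(t, capture(t)) for t in range(0, duration_min, interval_min)]
```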

The above cell image data can be stored on a hard disk, in memory, in a database, on a cloud data storage platform, or in any storage device, and read out for analysis or processing at an appropriate time.

Next, in step S160, the cell image data to be assessed are input to the medical analysis platform. Specifically, single images or image sequences are input into at least one deep learning model on the platform. In some embodiments of the present disclosure, providing classification data for a blastocyst requires separate feature classification data for the degree of blastocoel expansion, the inner cell mass, and the trophectoderm; the blastocyst image data must therefore be input separately into the deep learning models corresponding to each of these three features. It should be understood that the embodiments described herein are provided merely for ease of understanding, and the implementations of the present disclosure are not limited thereto.

Further, in step S170, the medical analysis platform performs the cell quality assessment on the input cell image data. In some embodiments, the platform processes a plurality of cell-mass images corresponding to the at least one input cell image to obtain a plurality of feature data; specifically, the cell-mass images are processed by at least one deep learning model in the medical analysis platform.

In some embodiments, the at least one deep learning model includes a convolutional neural network model, so the operation above can use that model to extract a plurality of feature values of the cell masses in the image to be assessed. For example, in some embodiments, one blastocyst image to be assessed is input into the input layers of the convolutional neural networks of the three deep learning models corresponding to blastocoel expansion, the inner cell mass, and the trophectoderm. At least one convolutional layer of each network then extracts feature values of the cell image, such as shape, brightness, or texture. Finally, after processing by at least one hidden layer of the convolutional neural network, these feature values are passed to the fully connected layer (the neural network classifier) to produce the corresponding first feature classification data. For example, the three deep learning models may produce three first feature classification data for the blastocyst image, such as "4", "A", and "B".
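The conv-to-pool-to-fully-connected pipeline this paragraph describes can be illustrated with a toy single-channel forward pass. This is a deliberately minimal sketch with placeholder weights; a real model such as ResNet50 stacks many such layers with learned kernels:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2-D convolution (single channel): the feature-extraction step."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x: np.ndarray, size: int = 2) -> np.ndarray:
    """Downsample by taking the max over non-overlapping size x size windows."""
    h, w = x.shape[0] // size * size, x.shape[1] // size * size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def classify(img, kernel, fc_weights, labels=("A", "B", "C")) -> str:
    """conv -> ReLU -> pool -> flatten -> fully connected -> grade label."""
    feat = max_pool(np.maximum(conv2d(img, kernel), 0))
    scores = feat.ravel() @ fc_weights          # one column per grade
    return labels[int(np.argmax(scores))]
```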

The medical analysis platform then combines these first feature classification data to produce the classification data corresponding to the cell. In the embodiment above, the platform combines the mutually independent first feature classification data ("4", "A", "B", etc.) into "4AB" as the classification data generated from this cell image.

Thereafter, in step S180, the classification data corresponding to the cell are output as the result of the cell quality assessment. In some embodiments, the medical analysis platform can transmit the result to a device connected to the platform over a network, so that the operator of the device can use it in subsequent decisions, for example whether a blastocyst of this quality is suitable for transfer into the mother.

In some embodiments, the medical analysis platform may, per step S150, provide a cell quality assessment service over multiple cell images of the same cell. Please refer to FIG. 2, a schematic diagram of statistical classification data according to an embodiment of the present disclosure. As shown in FIG. 2, multiple cell images 1 through n of the same cell are input into convolutional neural networks CNN1 through CNNn, producing classification data 1 through n. The number of occurrences of each classification is then counted, and the most frequent classification is taken as the final assessment of the cell's quality. For example, in some embodiments, 30 cell images of a given blastocyst are input to the medical analysis platform; after the neural network classifiers, 30 classification data are obtained, among which "4AB" appears 15 times, "4AA" appears 5 times, and "4BB" appears 5 times. The medical analysis platform then outputs the most frequent classification, "4AB", as the category of the cell. It should be understood that the embodiments described herein are provided merely for ease of understanding, and the implementations of the present disclosure are not limited thereto.
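The majority-vote step in FIG. 2 is straightforward to express in code. A minimal sketch, using the example counts from this paragraph:

```python
from collections import Counter

def majority_vote(classifications: list) -> str:
    """Return the most frequent classification across per-image predictions,
    as in the voting scheme of FIG. 2."""
    return Counter(classifications).most_common(1)[0][0]
```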

In other embodiments, the cell classification method 100 can further process depth-sequence and/or time-sequence cell images. Please refer to FIG. 3, a schematic diagram illustrating classification of cell image data according to another embodiment of the present disclosure. Taking time-sequence images as an example, step S160 is executed to input the time-sequence cell image data into a medical analysis platform that, per steps S130 through S160, further includes a recurrent neural network model; the combined convolutional/recurrent model contains n stages. As shown in FIG. 3, the part framed by the dashed line represents one stage, and each image in the time sequence is input, in order, into its corresponding stage.

Continuing the above, in step S170, in some embodiments, each stage uses at least one convolutional layer of the convolutional neural network model to extract feature values of the cell masses in the image, where a feature value may be a feature vector or matrix. In some embodiments, these feature vectors serve as the first feature classification data, relating to image features that vary in space; they are then input to the recurrent neural network to extract second feature classification data relating to image features that vary over time. The second feature classification data are also fed into the recurrent neural networks of the next stage and of the final stage. In this way, the second feature classification data describing the cell are propagated through the network, so that the result of any single stage is produced not only by that stage's convolutional neural network but is also influenced by the analysis of the preceding stages. Ultimately, the recurrent neural network of the final stage produces the classification data for the cell from the feature vector supplied by its own convolutional neural network and the second feature classification data supplied by the recurrent networks of the preceding stages.

For example, please refer to FIG. 3 again. At a frequency of capturing one cell image of a cell every 10 minutes, 288 cell images are collected in sequence within 2 days as the cell image data. Next, the cell image data are input to the medical analysis platform for the analysis operation. In the above analysis operation, the 288 cell images are input, in the order in which they were captured, into the corresponding 288 level networks, and the feature vector of each image, for example a 1x2048 matrix, is then obtained through the convolutional neural network in each level network as the first feature classification data. Afterwards, these first feature classification data are input to the recurrent neural networks in the level networks to generate the second feature classification data, where the second feature classification data generated by each level network are input to the next level and to the 288th level network. Finally, the 288th level network generates the classification data corresponding to the cell according to the first feature classification data (feature vector) provided by its convolutional neural network and the second feature classification data provided by the preceding 287 level networks.
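The per-level flow in this example, where a convolutional feature extractor yields a 1x2048 vector per frame and a recurrent unit carries a hidden state forward across the 288 levels, can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the platform's actual model: a fixed random projection stands in for the CNN, a plain Elman-style cell stands in for the recurrent network, and all sizes other than the 1x2048 feature dimension and the 288-frame sequence are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 2048   # per-frame feature vector, as in the 1x2048 example
HIDDEN = 64       # recurrent state size (illustrative assumption)
N_CLASSES = 3     # number of coarse quality grades (illustrative assumption)

# Stand-in for the convolutional feature extractor: one fixed projection
# from a flattened 32x32 frame to a 1x2048 feature vector.
W_cnn = rng.standard_normal((32 * 32, FEAT_DIM)) * 0.01

# Elman-style recurrent cell, shared across all level networks.
W_in = rng.standard_normal((FEAT_DIM, HIDDEN)) * 0.01
W_h = rng.standard_normal((HIDDEN, HIDDEN)) * 0.01
W_out = rng.standard_normal((HIDDEN, N_CLASSES)) * 0.01

def classify_sequence(frames):
    """frames: (T, 32, 32) time- or depth-ordered cell images."""
    h = np.zeros(HIDDEN)             # state passed level to level
    for frame in frames:
        feat = frame.reshape(-1) @ W_cnn       # "first feature" data
        h = np.tanh(feat @ W_in + h @ W_h)     # "second feature" data
    logits = h @ W_out               # final-level classification output
    return int(np.argmax(logits))

frames = rng.random((288, 32, 32))   # e.g. one image every 10 min for 2 days
label = classify_sequence(frames)
```

Because each step mixes the current frame's features with the accumulated state, the final output depends on all 288 frames, which mirrors how the last level network combines its own feature vector with the second feature classification data from the preceding levels.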

In some embodiments, by inputting the second feature classification data extracted by each level network to the recurrent neural networks of the next level and of the last level, the deep learning model of the medical analysis platform takes into account the features of a sequence of blastocyst cell images formed at different times or at different depths, which can improve the accuracy of judging the quality of the cell.

It should be noted that the deep learning model based on the combination of the convolutional neural network and the recurrent neural network described above is trained and generated in a manner similar to the deep learning model based on the convolutional neural network exemplified in step S140, and is therefore not repeated here.

Please refer to FIG. 4. FIG. 4 is a schematic diagram illustrating a cell classification system 400 according to an embodiment of the present disclosure. In some embodiments, the cell classification method 100 shown in FIG. 1 can be implemented by the cell classification system 400, although implementations of the cell classification method provided in this disclosure are not limited thereto.

As shown in FIG. 4, the cell classification system 400 includes a terminal 410, a network 420 and a medical analysis platform 430. The terminal 410 and the medical analysis platform 430 are connected to each other through the network 420. The terminal 410 is used to obtain at least one cell image data corresponding to a cell. In some embodiments of the present disclosure, the cell image data may include a blastocyst cell. Through the network 420, the terminal 410 transmits the cell image data to the medical analysis platform 430 for analysis. The medical analysis platform 430 is connected to the terminal 410 to classify the at least one cell image data corresponding to the cell received from the terminal 410, and to return classification data corresponding to the cell to the terminal 410.

As shown in FIG. 4, the terminal 410 may include an image acquisition device 412, an input/output device 414 and a communication device 416. The image acquisition device 412 is coupled to the input/output device 414 and the communication device 416. The input/output device 414 is coupled to the communication device 416. In some embodiments, the terminal 410 can be implemented by a personal computer, a tablet computer, a mobile device or any device with the same functions.

The medical analysis platform 430 may include a server 432 and a processor 434. The server 432 is coupled to the processor 434. In some embodiments, the server 432 may refer to a physical processor with functions such as communication, data storage or data processing, which is not limited herein. The processor 434 can be implemented by an integrated circuit such as a microcontroller, a microprocessor, a digital signal processor, an application specific integrated circuit (ASIC), a logic circuit, other similar elements, or a combination of the above elements.

Please refer to FIG. 1 and FIG. 4 together. The following describes how the cell classification method 100 is implemented by the cell classification system 400 according to an embodiment of the present disclosure.

After step S110 is performed, according to steps S120 to S140, the processor 434 of the medical analysis platform 430 can perform, based on at least one deep learning model, the operation of providing the subsequent cell quality judgment in step S150.

According to step S120, the processor 434 is used to preprocess the collected cell image data, for example by histogram equalization, so that the cell image data have the same image size and a contrast that facilitates subsequent analysis. Afterwards, according to step S130, a pre-trained neural network model is constructed in the processor 434. The implementation is as described above and is not repeated here.
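As a rough illustration of the preprocessing in step S120, the sketch below histogram-equalizes an 8-bit grayscale image and resizes it to a common size using plain NumPy. The 128x128 target size and the nearest-neighbour resampling are arbitrary assumptions for the example, not values taken from the disclosure.

```python
import numpy as np

def equalize_histogram(img):
    """Histogram-equalize an 8-bit grayscale image given as a uint8 array."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    denom = max(int(cdf[-1] - cdf_min), 1)  # guard against constant images
    # Map each gray level through the normalized cumulative distribution.
    lut = np.clip(np.round((cdf - cdf_min) * 255.0 / denom), 0, 255)
    return lut.astype(np.uint8)[img]

def preprocess(img, size=128):
    """Equalize, then nearest-neighbour resize to a common square size."""
    eq = equalize_histogram(img)
    rows = np.arange(size) * eq.shape[0] // size
    cols = np.arange(size) * eq.shape[1] // size
    return eq[np.ix_(rows, cols)]

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)
out = preprocess(img)  # 128x128 uint8 image with stretched contrast
```

In practice an image library (e.g. OpenCV's `equalizeHist` and `resize`) would replace these hand-rolled helpers; the point is only that every image entering the model shares one size and a comparable contrast range.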

Next, according to step S140, the processor 434 is used to perform training on a plurality of cell image data to be trained and a plurality of feature data corresponding to the cell image data to be trained. In some embodiments, the feature data may be classification data about the quality of blastocyst cells, such as the aforementioned classification data "5AA". In addition, when the training and adjustment operation of step S140 is performed, in some embodiments of the present disclosure, all of the collected cell image data and the corresponding classification data can be randomly allocated into a training set and a validation set. The processor 434 uses the data in the training set to train the pre-trained model and generate the deep learning model. Afterwards, the processor 434 inputs the data in the validation set to the deep learning model to generate feature data corresponding to the cell image data in the validation set. When the feature data generated by the processor 434 from processing the validation cell image data differ from the validation feature data corresponding to the validation cell image data, the parameters and/or the model structure of the deep learning model in the processor 434 are adjusted to improve the accuracy of the deep learning model in judging cell quality.
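The random allocation into training and validation sets, and the validation check that drives the parameter adjustment, can be sketched as below. The 80/20 ratio, the toy dataset, and the stand-in predictor are illustrative assumptions only; a real platform would retrain or adjust the model whenever the resulting accuracy is judged insufficient.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dataset: 100 "cell images" with integer stand-ins for grade labels.
images = rng.random((100, 32, 32))
labels = rng.integers(0, 3, size=100)

# Random split into training and validation sets, as in step S140.
idx = rng.permutation(len(images))
n_train = int(0.8 * len(images))
train_idx, val_idx = idx[:n_train], idx[n_train:]

x_train, y_train = images[train_idx], labels[train_idx]
x_val, y_val = images[val_idx], labels[val_idx]

def validation_accuracy(predict):
    """Fraction of validation images whose predicted grade matches the label."""
    preds = np.array([predict(x) for x in x_val])
    return float((preds == y_val).mean())

# Trivial stand-in for the trained model: always predicts grade 0.
acc = validation_accuracy(lambda x: 0)
```

The key property being checked in the test below is that no image appears in both sets, so the validation accuracy measures generalization rather than memorization.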

Next, according to step S150, the medical analysis platform 430, which is now equipped with the deep learning model and can provide the cell quality judgment service, is connected to the terminal 410 through the network 420. In some embodiments, the terminal 410 may be a computer or an image processing system built in a hospital or an assisted reproduction center, but the present disclosure is not limited thereto.

The image acquisition device 412 in the terminal 410 is used to obtain the at least one cell image data. In some embodiments, the image acquisition device 412 captures the cell image data of a cell, where the cell image data can be a single image or a sequence of images acquired by the image acquisition device 412 in order at different time points or at different slice depths. Afterwards, according to step S160, the image acquisition device 412 transmits the obtained cell image data through the communication device 416, via the network 420, to the medical analysis platform 430 for analysis.

Next, the server 432 in the medical analysis platform 430 is used to receive the cell image data transmitted from the communication device 416 and, according to step S170, to input the cell image data to the processor 434 for cell quality judgment.

In some embodiments, the processor 434 is further used to process a plurality of cell cluster images of the at least one cell image data to obtain a plurality of feature data, and to combine the feature data to generate classification data corresponding to the feature data for judging the category of the cell. In some embodiments of the present disclosure, as described above, the plurality of cell cluster images may refer to the blastocoel and zona pellucida, the inner cell mass and the trophoblast cells of a blastocyst.

Specifically, the processor 434 is further used to perform a first feature recognition operation based on a neural-network-like model on each of the cell cluster images to generate first feature data for the processor 434 to combine into the feature data. In some embodiments, the neural-network-like model can be a convolutional neural network or any neural network model that extracts feature values or feature vectors of patterns from images. In an embodiment of the present disclosure, the processor 434 performs the first feature recognition operation on the cell image data of a blastocyst. It is worth noting that the first feature recognition operations respectively correspond to the degree of expansion of the blastocyst, the inner cell mass and the trophoblast cells, and generate corresponding first feature data as the feature data, such as "4", "A" and "B" respectively. Finally, the processor 434 combines the feature data "4", "A" and "B" into "4AB" as the classification data generated from this blastocyst image data. Other implementations are as described in step S170 and are not repeated here.
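The combination of per-component results into a grade string such as "4AB" can be sketched as follows. The three grading functions below return fixed values and merely stand in for trained classifier heads for expansion degree, inner cell mass and trophoblast quality; their names and the stand-in image object are hypothetical.

```python
# Hypothetical component classifiers for the three blastocyst criteria.
def grade_expansion(image):
    """Degree of expansion, e.g. "1".."6" (fixed stand-in value here)."""
    return "4"

def grade_inner_cell_mass(image):
    """Inner cell mass quality, e.g. "A"/"B"/"C" (fixed stand-in)."""
    return "A"

def grade_trophectoderm(image):
    """Trophoblast layer quality, e.g. "A"/"B"/"C" (fixed stand-in)."""
    return "B"

def classify_blastocyst(image):
    """Combine the three first-feature results into one grade string."""
    return (grade_expansion(image)
            + grade_inner_cell_mass(image)
            + grade_trophectoderm(image))

grade = classify_blastocyst(object())  # object() stands in for an image
```

With these stand-ins the combined classification data is the string "4AB", matching the example in the text.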

In another embodiment, when the image acquisition device 412 sequentially acquires cell image data at different time points or at different slice depths, the processor 434 is further used to perform a second feature recognition operation based on a neural-network-like model on each of the cell cluster images in the cell image data, to generate second feature data for the processor 434 to combine into the feature data. The neural-network-like model is not limited here. Specifically, please refer to FIG. 3, taking cell image data captured at different slice depths as an example. In some embodiments of the present disclosure, the processor 434 inputs the blastocyst image data, in order of slice depth from deep to shallow, into a deep learning model that further includes a recurrent neural network. First, the processor 434 performs the first feature recognition operation through the convolutional neural network to obtain the feature vector of each image in the cell image data as the first feature data. Next, the first feature data are input to the corresponding recurrent neural networks to perform the second feature recognition operation, so as to analyze the corresponding second feature data as the feature data. Finally, the processor 434 combines the feature data respectively corresponding to the degree of expansion, the inner cell mass and the trophoblast cells to generate the classification data of the blastocyst.

After the medical analysis platform 430 completes the cell quality judgment service, according to step S180, the processor 434 transmits, through the server 432 and via the network 420, the classification data corresponding to the cell whose cell image data were obtained by the image acquisition device 412 to the terminal 410. The terminal 410 receives the classification data through the communication device 416 and displays them on the input/output device 414, so that relevant professionals can decide the next action based on the classification data.

In some embodiments, after the input/output device 414 receives the classification data corresponding to the cell image data and displays them to relevant professionals, the professionals can, based on their own knowledge and experience, input their judgment of the cell image data as feedback data. In some embodiments, the input/output device 414 is further used to return the feedback data to the medical analysis platform 430 through the communication device 416. When the feedback data do not match the classification data provided by the medical analysis platform 430, the processor 434 performs training again with the feedback data and the at least one cell image data to correct or update the deep learning model that performs the cell quality judgment operation. In addition, in some embodiments, through the server 432, the medical analysis platform 430 can provide the cell quality judgment service to a plurality of different terminals 410 to judge at least one cell image data of at least one cell, and can also train the deep learning model with the corresponding feedback classification data received from these different terminals 410.
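A minimal sketch of this feedback path is given below, under the assumption that a mismatched expert grade is simply queued together with the image identifier for later retraining; the function name, queue and identifiers are hypothetical, not part of the disclosed system.

```python
# Images whose expert feedback disagreed with the platform's output,
# collected for the next retraining pass.
retraining_queue = []

def handle_feedback(image_id, predicted, feedback):
    """Compare the platform's grade with the expert's returned grade."""
    if feedback != predicted:
        retraining_queue.append((image_id, feedback))
        return "queued for retraining"
    return "confirmed"

status_a = handle_feedback("img-001", "4AB", "4AB")  # expert agrees
status_b = handle_feedback("img-002", "4AB", "3BB")  # expert disagrees
```

Only the disagreement case contributes new training material, which is what lets the deep learning model be corrected or updated over time from multiple terminals.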

Through the operations of the various embodiments described above, a cell classification method can be realized to generate classification data representing the result of analyzing a cell by at least one deep learning model, and during the process the neural network can be trained on a large amount of data to produce this analysis result. In addition, since the correlations among the cell images in the cell image data are also taken into account, the accuracy of cell quality judgment can be improved. Furthermore, through the aforementioned cell classification system, remote users can use the cell quality judgment service through the network and provide feedback at the same time.

Although the present invention has been disclosed above by way of embodiments, they are not intended to limit the present invention. Anyone skilled in the art can make various changes and modifications without departing from the spirit and scope of the present invention. Therefore, the scope of protection of the present invention shall be defined by the appended claims.

100‧‧‧Cell classification method

S110, S120, S130, S140, S150, S160, S170, S180‧‧‧Steps

Claims (11)

1. A cell classification method, comprising: performing an image training and adjustment operation by a medical analysis platform, so that the medical analysis platform includes at least one deep learning model for cell image classification; obtaining at least one cell image data corresponding to a cell; inputting the at least one cell image data into the at least one deep learning model in the medical analysis platform; processing, through the medical analysis platform, a plurality of cell cluster images corresponding to the at least one cell image data to obtain a plurality of feature data; and combining the feature data to generate classification data corresponding to the feature data for judging the category of the cell.

2. The cell classification method of claim 1, wherein the step of obtaining the at least one cell image data corresponding to the cell comprises: obtaining, by adjusting a shooting focal length of an image acquisition device, a plurality of cell image data of the cell corresponding to different focal planes in a longitudinal direction, from deep to shallow, to form a plurality of depth-sequence cell images; wherein the depth-sequence cell images are respectively input to the at least one deep learning model in the medical analysis platform for processing.
3. The cell classification method of claim 1, wherein the step of obtaining the at least one cell image data corresponding to the cell comprises: sequentially obtaining a plurality of cell image data corresponding to the cell at different time points to form a plurality of time-sequence cell images; wherein the time-sequence cell images are respectively input to the at least one deep learning model in the medical analysis platform for processing.

4. The cell classification method of claim 1, wherein the step of processing, through the medical analysis platform, the cell cluster images corresponding to the at least one cell image data comprises: processing, through the at least one deep learning model in the medical analysis platform, the cell cluster images corresponding to the at least one cell image data, wherein the at least one deep learning model includes a convolutional neural network model; extracting, by the convolutional neural network model, a plurality of feature values of the cell clusters corresponding to the cell image; and generating a plurality of first feature classification data corresponding to the cell clusters according to the feature values; wherein the medical analysis platform is used to combine the feature data according to the first feature classification data.
5. The cell classification method of claim 4, further comprising: sequentially inputting the first feature classification data into the at least one deep learning model in the medical analysis platform, wherein the at least one deep learning model further includes a recurrent neural network model; and generating, through the recurrent neural network model, a plurality of second feature classification data corresponding to the cell clusters, wherein the medical analysis platform is further used to combine the feature data according to the second feature classification data.

6. A cell classification system, comprising: a terminal used to obtain at least one cell image data corresponding to a cell; and a medical analysis platform, connected with the terminal, used to classify the at least one cell image data corresponding to the cell received from the terminal and to return classification data corresponding to the cell to the terminal, wherein the medical analysis platform comprises: a processor used to process a plurality of cell cluster images of the at least one cell image data to obtain a plurality of feature data, and to combine the feature data to generate classification data corresponding to the feature data for judging the category of the cell.

7. The cell classification system of claim 6, wherein the processor is further used to respectively perform at least one first feature recognition operation based on a neural-network-like model on the cell cluster images, to generate a plurality of first feature data for the processor to combine the first feature data.
如請求項6所述之細胞分類系統,其中該處理器更用以分別對該些細胞團影像執行基於類神經網路的至少一第一特徵辨識操作,以產生複數個第一特徵資料供該處理器組合該些第一特徵資料。 The cell classification system according to claim 6, wherein the processor is further configured to perform at least one first feature recognition operation based on a neural network on the cell cluster images respectively to generate a plurality of first feature data for the The processor combines the first characteristic data. 如請求項7所述之細胞分類系統,其中該處理器更用以對複數個待訓練細胞影像資料與對應於該些待訓練細胞影像資料之複數個特徵資料進行訓練;當由該處理器對複數個驗證細胞影像資料進行處理所產生之複數個特徵資料與對應於該些驗證細胞影像資料的驗證特徵資料不同時,該處理器之參數被調整。 The cell classification system according to claim 7, wherein the processor is further used to train a plurality of cell image data to be trained and a plurality of feature data corresponding to the cell image data to be trained; When the plurality of feature data generated by processing the plurality of verification cell image data is different from the verification feature data corresponding to the verification cell image data, the parameters of the processor are adjusted. 如請求項6所述之細胞分類系統,其中當一影 像取得裝置依序取得在不同時間點或不同切片深度該至少一細胞影像資料的複數細胞團影像時,該處理器更用以分別對該些細胞團影像執行基於類神經網路的至少一第二特徵辨識操作,以產生複數個第二特徵資料供該處理器組合該些特徵資料。 The cell classification system described in claim 6, wherein When the image obtaining device sequentially obtains the plurality of cell cluster images of the at least one cell image data at different time points or at different slice depths, the processor is further used to execute at least one neural network-based image on the cell cluster images respectively Two feature identification operations are used to generate a plurality of second feature data for the processor to combine the feature data. 
如請求項6所述之細胞分類系統,其中該終端機包含:一影像取得裝置,用以取得該至少一細胞影像資料;以及一輸入/輸出裝置,連接於該影像取得裝置及一通訊裝置,用以自該影像取得裝置接收該至少一細胞影像資料,並在接收相應於該至少一細胞影像資料之該分類資料後,該輸入/輸出裝置用以透過該通訊裝置回傳一回饋資料至該醫療分析平台;其中當該回饋資料與該分類資料不符時,該處理器以該回饋資料與該至少一細胞影像資料再次進行訓練。 The cell classification system according to claim 6, wherein the terminal includes: an image acquisition device for acquiring the at least one cell image data; and an input/output device connected to the image acquisition device and a communication device, Used for receiving the at least one cell image data from the image acquisition device, and after receiving the classification data corresponding to the at least one cell image data, the input/output device is used for returning a feedback data to the Medical analysis platform; wherein when the feedback data does not match the classification data, the processor retrains with the feedback data and the at least one cell image data. 一個醫療分析平台,包含:一伺服器,用以接收相應於一細胞的至少一細胞影像資料;以及一處理器,與該伺服器連接,用以根據至少一深度學習模型對該細胞之該至少一細胞影像資料產生對應該細胞的一分類資料;其中該至少一深度學習模型是基於複數個訓練用細胞影 像資料及其分類資訊訓練而產生,和/或是利用自至少一終端機接收相應至少一細胞的至少一細胞影像資料及所回饋分類資料訓練而產生。 A medical analysis platform includes: a server for receiving at least one cell image data corresponding to a cell; and a processor connected to the server for the at least one cell based on at least one deep learning model A cell image data generates a classification data corresponding to the cell; wherein the at least one deep learning model is based on a plurality of cell shadows for training Image data and its classification information are generated by training, and/or generated by receiving at least one cell image data corresponding to at least one cell from at least one terminal and the feedback classification data training.
TW108106571A 2019-02-26 2019-02-26 Method and system for classifying cells and medical analysis platform TW202032574A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
TW108106571A TW202032574A (en) 2019-02-26 2019-02-26 Method and system for classifying cells and medical analysis platform
CN201910510908.XA CN111612027A (en) 2019-02-26 2019-06-13 Cell classification method, system and medical analysis platform


Publications (1)

Publication Number Publication Date
TW202032574A true TW202032574A (en) 2020-09-01

Family

ID=72205229




Also Published As

Publication number Publication date
CN111612027A (en) 2020-09-01
