TWI836280B - Medical image analysis method and device - Google Patents

Info

Publication number
TWI836280B
Authority
TW
Taiwan
Application number
TW110137780A
Other languages
Chinese (zh)
Other versions
TW202226270A (en)
Inventor
王鼎元
鄧名杉
李雅文
劉容慈
Original Assignee
財團法人工業技術研究院
Application filed by 財團法人工業技術研究院 (Industrial Technology Research Institute)
Priority to CN202111296777.3A (published as CN114638781A)
Publication of TW202226270A
Application granted
Publication of TWI836280B

Landscapes

  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

A medical image analysis method includes: reading an original medical image; performing image classification and object detection on the original medical image with a plurality of complementary AI models to generate a first classification result and a plurality of detection results; performing, by a feature integration and transformation module, object-feature integration and quantification transformation on first and second detection results among the plurality of detection results to generate a quantification result; and performing, by a machine learning module, machine learning on the first classification result and the quantification result to generate and display an image determination result.

Description

Medical Image Analysis Method and Device

The present invention relates to a medical image analysis method and device.

To improve the efficiency of physicians' image interpretation and reduce human error, using artificial intelligence (AI) to assist physicians in reading images has become one of the key initiatives in the medical community's development of smart healthcare.

AI-assisted image diagnosis mainly relies on three techniques: object detection, segmentation, and classification. Classification can distinguish benign from malignant tumors and perform relatively simple disease-severity grading, allowing medical resources to be allocated effectively. Typically, classifiers such as deep neural networks (DNN), support vector machines (SVM, a machine-learning method), and random forests are used to grade lesion severity in medical images.

However, disease classes, symptoms, and physiological tissues are usually correlated with one another, and even experienced physicians find it difficult to state clear-cut criteria for the manner and degree of these correlations, which are also hard to obtain by analytical solution. Problems of this type are well suited to AI techniques that learn from labeled data, but past experience shows that using only a single AI classification model, or only an AI detection model for symptom objects, rarely achieves the best accuracy. If the classification model can compensate for the shortcomings of the detection model (and vice versa), so that the models complement one another, the weaknesses of any single model can be mitigated and the misjudgment rate reduced.

In addition, when training data are scarce, or in the early stages of a disease when the symptoms are typically very small, classifying medical images with a classifier alone may fail: because the area that differs between normal and abnormal images is extremely small, the classifier cannot effectively recognize these symptoms, leading to incorrect disease classification results.

According to one embodiment of the present disclosure, a medical image analysis method is provided, comprising: reading an original medical image; performing image classification and object detection on the original medical image using a plurality of complementary artificial-intelligence models to obtain a first classification result and a plurality of object detection results; performing, by a feature integration and transformation module, integration and quantitative transformation of object features on a first detection result and a second detection result among the object detection results to obtain a quantification result; and performing, by a machine learning module, machine learning on the quantification result and the first classification result to obtain and display an image interpretation result.

According to another embodiment of the present disclosure, a medical image analysis device is provided, comprising a processor and a display unit coupled to the processor. The processor is configured to: read an original medical image; perform image classification and object detection on the original medical image using a plurality of complementary artificial-intelligence models to obtain a first classification result and a plurality of object detection results; perform, by a feature integration and transformation module, integration and quantitative transformation of object features on a first detection result and a second detection result among the object detection results to obtain a quantification result; and perform, by a machine learning module, machine learning on the quantification result and the first classification result to obtain an image interpretation result, which is displayed on the display unit.

For a better understanding of the above and other aspects of the present invention, embodiments are described in detail below with reference to the accompanying drawings:

The technical terms in this specification follow the customary usage of the technical field; where this specification explains or defines a term, the explanation or definition herein governs. Each embodiment of the present disclosure has one or more technical features. Where implementation permits, a person of ordinary skill in the art may selectively implement some or all of the technical features of any embodiment, or selectively combine some or all of the technical features of these embodiments.

At present, diabetic macular edema (DME) is defined as the presence of any hard exudate (HE) within one optic-disc diameter of the center of the macula. Conversely, if no hard exudate appears within one optic-disc diameter of the macular center, the case is regarded as non-referable diabetic macular edema.

In addition, preoperative computed tomography (CT) assessment of pancreatic-tumor resectability is divided into five grades, from Grade 0 to Grade 4. The grade correlates with the extent to which the tumor contacts the blood vessels: the greater the contact, the higher the grade.

As shown in Fig. 1, in the original medical image 100 a virtual circle is drawn with the macular center 110 as its center and the diameter R of the optic disc 111 as its radius. If a hard exudate HE appears within this virtual circle, the case is regarded as having diabetic macular edema.
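The geometric test of Fig. 1 can be sketched in a few lines. This is a minimal illustration assuming pixel coordinates; the function and argument names are illustrative, not part of the patent:

```python
from math import hypot

def has_referable_dme(macula_center, disc_diameter, exudate_centers):
    """DME criterion sketched above: referable if any hard exudate lies
    within one optic-disc diameter of the macular center."""
    mx, my = macula_center
    return any(hypot(x - mx, y - my) <= disc_diameter
               for (x, y) in exudate_centers)
```

For example, an exudate at distance 5 from the macular center with a disc diameter of 10 would be flagged, while one at distance 50 would not.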

Fig. 2 is a functional diagram of a medical image analysis device 200 according to an embodiment of the present disclosure. The medical image analysis device 200 includes a processor 210, a database 220, and a display unit 230. The processor 210 is coupled to the database 220 and the display unit 230. The processor 210 reads an original medical image from the database 220, performs analysis and image interpretation, and then displays the image interpretation result on the display unit 230. The processor 210 is, for example, a central processing unit (CPU), or another programmable general-purpose or special-purpose micro control unit (MCU), microprocessor, digital signal processor (DSP), programmable controller, application-specific integrated circuit (ASIC), graphics processing unit (GPU), arithmetic logic unit (ALU), complex programmable logic device (CPLD), field-programmable gate array (FPGA), another similar component, or a combination of the above. The display unit 230 is, for example but not limited to, a device with a display function such as a liquid crystal display (LCD).

Fig. 3 is a schematic diagram of medical image analysis according to an embodiment of the present disclosure. As shown in Fig. 3, the original medical image RMI is input to artificial-intelligence (AI) models for several different tasks, for example but not limited to a classification model 310, a symptom detection model 320, and a physiological tissue detection model 330.

The classification model 310 is a disease-severity classification model that analyzes the entire original medical image RMI to obtain a disease classification result 340.

The symptom detection model 320 is a disease-related symptom detection model that analyzes the original medical image RMI to obtain a symptom detection result 350, which includes the position, area, and confidence value of each symptom and the count of each symptom type. Optionally, the area of each symptom can be computed from the vertical and horizontal lengths of the bounding box with which the symptom detection model 320 frames the symptom. Here, a symptom is, for example but not limited to, a hard exudate.
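The optional area computation from a bounding box can be sketched as follows. Normalizing by the image area to obtain a relative area in [0, 1] is an assumption of this sketch, not something the source specifies:

```python
def bbox_relative_area(w, h, image_w, image_h):
    """Area of a detection's bounding box (horizontal length w times
    vertical length h), normalized by the image area so the result
    lies in [0, 1]."""
    return (w * h) / float(image_w * image_h)
```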

The physiological tissue detection model 330 is a detection model for physiological tissues (including organs and tissues) related to the disease; it analyzes the original medical image RMI to obtain a physiological tissue detection result 360, which includes the position, vertical length (first length), horizontal length (second length), and confidence value of each physiological tissue. Optionally, the area of each physiological tissue can be computed from the vertical and horizontal lengths of the bounding box drawn by the physiological tissue detection model 330. Here, a physiological tissue is, for example but not limited to, the optic disc or the macular center.

The symptom detection result 350 and the physiological tissue detection result 360 are input to a feature integration and transformation module 370 for integration and quantitative transformation of object features (for example but not limited to symptoms and physiological tissues). Specifically, the module 370 uses the symptom detection result and the tissue object detection result to derive a scalar or a vector through a transformation. This quantified result (scalar or vector), together with the disease classification result 340, is input to the machine learning module 380, which can derive a plurality of decision rules through a machine-learning (ML) algorithm to produce the image interpretation result 390. The image interpretation result 390 can be sent to the display unit 230 for display; it may, for example, indicate the probability of DME.

The object detection results (scalars or vectors) and the disease classification results 340 can serve as a plurality of training samples (each sample including an object detection result, as a scalar or vector, and a disease classification result 340) used to train the machine learning module 380 so that it produces image interpretation results.

In one embodiment, the feature integration and transformation module 370 and the machine learning module 380 may be implemented, for example, by a chip, a circuit block within a chip, a firmware circuit, a circuit board containing several electronic components and wires, or a storage medium storing sets of program code; they may also be implemented by electronic devices such as computer systems or servers executing corresponding software or programs. The feature integration and transformation module 370 and the machine learning module 380 are executed by the processor 210 of Fig. 2.

Fig. 4 is a schematic diagram of medical image analysis according to another embodiment of the present disclosure. As shown in Fig. 4, the original medical image RMI is input to AI models for several different tasks, for example but not limited to the classification model 310, the symptom detection model 320, the physiological tissue detection model 330, and a classification model 410.

As shown in Fig. 4, the physiological tissue detection result 360 (that is, the target region) produced by the physiological tissue detection model 330 is input to the classification model 410. For example, the classification model 410 takes the macula region of the original medical image RMI as input. Optionally, the macula region may be the area previously framed by the bounding box of the physiological tissue detection result 360, or an area framed using the macular center and the optic-disc diameter, both computed from the physiological tissue detection result 360; this region is analyzed by the classification model 410 to produce a disease classification result 420. One of the important points of this embodiment is that the macula region is defined by the physiological tissue detection model 330 and serves as the input of the classification model 410. In particular, during the training and/or prediction of the classification model 410, the defined macula region, rather than the entire original medical image RMI, is used as the input image.
Compared with feeding the entire original medical image RMI to the classification model 410, a macula region defined in this way is more likely to be detected by the classification model 410, even after the image reduction operations often applied to save computation time in deep learning and/or prediction. This effect is particularly useful when a lesion or anatomic landmark is very small relative to the entire original medical image RMI, and it ensures that the classification model 410 can learn to recognize small objects in the macula region.

As shown in Fig. 4, the disease classification result 420 of the classification model 410 is also input to the machine learning module 380.

The operating principle of the feature integration and transformation module 370 will now be described.

Fig. 5A is a schematic diagram of an embodiment in which the fundus image is divided into four quadrants about the center of the optic disc. In Fig. 5A, $S_1$–$S_4$ denote the first through fourth quadrants. More generally, $S_n$ denotes the $n$-th physiological structure, such as the macular center or a quadrant ($n$ being a positive integer). In one embodiment $n = 1$–$6$, though the range of $n$ can be adjusted to the application; that is, in this embodiment a quadrant also counts as a physiological structure. $L_i^m$ denotes the $i$-th instance of the $m$-th symptom type; for example, $L_1^1$ denotes the first hard exudate and $L_3^1$ the third. $d_{L_i^m,S_n}$ denotes the relative distance between symptom $L_i^m$ and structure $S_n$, with $0 \le d_{L_i^m,S_n} \le 1$.

In one embodiment, taking the relationship between the quadrants ($n = 1$–$4$) and a symptom as an example, the quantification performed by the feature integration and transformation module 370 considers any combination of the symptom's association with the quadrant, the symptom area, and the confidence value. In one embodiment, the degree of quantification is positively related to the angle-membership degree of the quadrant, the area, the confidence value, or any combination thereof. In one embodiment, for example but not limited to, the first positive relationship between the quadrants and a symptom can be expressed as Formula (1):

$$F_{S_n} = \sum_{m}\sum_{i} A_{S_n}\!\left(\theta_{L_i^m}\right)^{p} \left(a_{L_i^m}\right)^{q} \left(c_{L_i^m}\right)^{r} \qquad \text{(1)}$$

In Formula (1), the parameters $p$, $q$, and $r$ can be learned from data. The angle-membership degree can be expressed with a fuzzy function (for example but not limited to the one shown in Fig. 6); the angle $\theta_{L_i^m}$ is the angle (in radians) between symptom $L_i^m$ and the horizontal axis, $A_{S_n}(\theta_{L_i^m})$ is the degree to which that angle belongs to quadrant $S_n$, $a_{L_i^m}$ is the relative area of symptom $L_i^m$, and $c_{L_i^m}$ is its confidence value, with $0 \le a_{L_i^m},\, c_{L_i^m},\, A_{S_n}(\theta_{L_i^m}) \le 1$. In one embodiment, the plane coordinates take the optic-disc center as the origin, the horizontal line as the horizontal axis, and the vertical line as the vertical axis. That is, the first positive relationship between the quadrants and the symptom involves a first operation result of the angle-membership degree and a first parameter $p$, a second operation result of the area and a second parameter $q$, and a third operation result of the confidence value and a third parameter $r$.
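As a sketch only (not the patent's exact formula), a quantification consistent with the description above — positively related to the angle-membership degree with parameter p, the relative area with parameter q, and the confidence with parameter r, accumulated over detected symptoms — could look like:

```python
def quadrant_score(detections, membership, p, q, r):
    """Formula (1)-style quantification for one quadrant.

    detections: iterable of (theta, area, conf) per detected symptom,
    where theta is the angle to the horizontal axis in radians, area
    the relative area, and conf the detector's confidence (the tuple
    representation is an assumption of this sketch).
    membership: the quadrant's angle-membership function, mapping an
    angle to a degree in [0, 1].
    """
    return sum(membership(theta) ** p * area ** q * conf ** r
               for theta, area, conf in detections)
```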

In one embodiment, taking the relationship between a general physiological tissue and a symptom as an example, the quantification by the feature integration and transformation module 370 considers any combination of the symptom's association with the tissue (for example, the reciprocal of the distance between the symptom and the tissue), the symptom area, and the confidence value. In one embodiment, the quantified value is positively related to (1) the reciprocal of the symptom-to-tissue distance, (2) the area, (3) the confidence value, or any combination thereof. In one embodiment, for example but not limited to, the second positive relationship between a physiological tissue and a symptom can be expressed as Formula (2):

$$F_{S_n} = \sum_{m}\sum_{i} \left(\frac{1}{d_{L_i^m,S_n}}\right)^{p'} \left(a_{L_i^m}\right)^{q'} \left(c_{L_i^m}\right)^{r'} \qquad \text{(2)}$$

In Formula (2), the parameters $p'$, $q'$, and $r'$ can be learned from data. That is, the second positive relationship between the physiological tissue and the symptom involves a fourth operation result of the reciprocal of the symptom-to-tissue distance and a fourth parameter $p'$, a fifth operation result of the area and a fifth parameter $q'$, and a sixth operation result of the confidence value and a sixth parameter $r'$.
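A corresponding sketch for the tissue–symptom case, using the reciprocal of the relative distance in place of the angle membership. The epsilon guarding against division by zero is an addition of this sketch, not part of the source:

```python
def tissue_score(detections, p, q, r, eps=1e-6):
    """Formula (2)-style quantification for one physiological tissue.

    detections: iterable of (dist, area, conf), where dist is the
    relative distance in (0, 1] between the symptom and the tissue
    (tuple representation is an assumption of this sketch)."""
    return sum((1.0 / (dist + eps)) ** p * area ** q * conf ** r
               for dist, area, conf in detections)
```

A nearer symptom (smaller dist) contributes a larger term, matching the stated positive relationship with the reciprocal distance.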

Fig. 5B shows an example image, from another embodiment, that is not a diabetic fundus image. In Fig. 5B, $S_1$ and $S_2$ denote two physiological structures; for example, $S_1$ and $S_2$ denote blood vessels and the pancreas, respectively. $L_i^m$ denotes the $i$-th instance of the $m$-th symptom type (for example, a pancreatic tumor). $d_{L_i^m,S_n}$ denotes the relative distance between symptom $L_i^m$ and structure $S_n$, with $0 \le d_{L_i^m,S_n} \le 1$; for example, $d_{L_i^m,S_1}$ is the relative distance between symptom $L_i^m$ and structure $S_1$.

Likewise, for Fig. 5B, in one embodiment, taking the relationship between a general physiological tissue (such as a blood vessel or the pancreas) and a symptom (such as a pancreatic tumor) as an example, the quantification by the feature integration and transformation module 370 considers any combination of the symptom's association with the tissue (for example, the reciprocal of the symptom-to-tissue distance), the symptom area, and the confidence value. In one embodiment, the quantified value is positively related to (1) the reciprocal of the symptom-to-tissue distance, (2) the area, (3) the confidence value, or any combination thereof. The second positive relationship between a physiological tissue and a symptom can be expressed by Formula (2); the details are not repeated here.

The angle-membership quadrant function described above can be implemented with fuzzy theory. More specifically, as shown in Fig. 6, four fuzzy sets can be defined, with the function output lying between 0 and 1. The shape of the angle-membership quadrant function may be, but is not limited to, trapezoidal, triangular, or any combination thereof, and the shape is trainable.
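One common way to realize such a trapezoidal fuzzy set is sketched below; the breakpoints a ≤ b ≤ c ≤ d would be the trainable shape. This is a generic sketch, not the patent's specific membership functions:

```python
def trapezoid_membership(theta, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside [a, d], rises linearly
    on [a, b], is 1 on [b, c], and falls linearly on [c, d]. Setting
    b == c degenerates to a triangular set."""
    if theta <= a or theta >= d:
        return 0.0
    if b <= theta <= c:
        return 1.0
    if theta < b:
        return (theta - a) / (b - a)
    return (d - theta) / (d - c)
```

Four such sets, one per quadrant, with outputs between 0 and 1, match the description of Fig. 6.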

In this embodiment, when $n = 1$–$4$ (quadrants), the parameter $c_{L_i^m}$ in Formula (1) is the confidence value that the symptom detection model 320 outputs for the symptom. That is, when $n = 1$–$4$, Formula (1) computes the quantified value from the angle-membership quadrant function, the area parameter, and the confidence parameter. The higher the confidence parameter $c_{L_i^m}$, the stronger the indicated presence of the symptom.

In this embodiment, when $n = 5$–$6$ (physiological tissues), Formula (2) computes a quantified value from the distance parameter (specifically, the reciprocal of the distance), the area parameter, and the confidence parameter.

In this embodiment, the parameters $p$, $q$, $r$, $p'$, $q'$, and $r'$ are real numbers and can be updated during model training, for example but not limited to finding their optimal values by backpropagation. Alternatively, Bayesian optimization can be used to find the best combination of the parameters $p$, $q$, $r$, $p'$, $q'$, and $r'$; when Bayesian optimization is used, the constraints on the parameters may be, but are not limited to, $0 \le p, q, r, p', q', r'$. The parameters can thus be updated by backpropagation or Bayesian optimization.

In one embodiment, the feature integration and transformation module 370 converts the symptom detection result 350 and the physiological tissue detection result 360 into a feature vector or a scalar, that is, into the function value $F_{S_n}$.

More specifically, the symptom detection result 350 obtained by the symptom detection model 320 can be a symptom result matrix, as shown in Fig. 7. In this symptom result matrix, each row comprises the elements [Lesion, X, Y, W, H, C], which respectively denote: the symptom type framed by the symptom detection model's prediction (Lesion), the X coordinate of the symptom position (X), the Y coordinate of the symptom position (Y), the horizontal length of the symptom (W), the vertical length of the symptom (H), and the confidence value of the symptom (C).

The physiological tissue detection result 360 obtained by the physiological tissue detection model 330 is a physiological tissue result matrix, as shown in Fig. 7. In this matrix, each row comprises the elements [Structure, X, Y, W, H, C], which respectively denote: the physiological tissue type framed by the tissue detection model's prediction (Structure), the X coordinate of the tissue position (X), the Y coordinate of the tissue position (Y), the horizontal length of the tissue (W), the vertical length of the tissue (H), and the confidence value of the tissue (C).

The feature integration and transformation module 370 integrates the symptom result matrix and the physiological tissue result matrix into a symptom/physiological-tissue relationship matrix (Fig. 7), in which the correspondence and severity between each kind of physiological tissue and each kind of symptom can be generated quantitatively. In Fig. 7, five symptom types are assumed: $L_1$ denotes hard exudates (HE), $L_2$ hemorrhages (H), $L_3$ soft exudates (SE), $L_4$ neovascularization (NE), and $L_5$ microaneurysms (MA).

Fig. 8 is a schematic diagram of merging the outputs of the several AI models as input to the machine learning module 380, according to an embodiment of the present disclosure. As shown in Fig. 8, the disease classification result 340 of the classification model 310 can be expressed as a one-dimensional confidence-value matrix. As described above, the feature integration and transformation module 370 integrates the symptom detection result 350 (expressible as a symptom result matrix) and the physiological tissue detection result 360 (expressible as a physiological tissue result matrix) into a symptom/physiological-tissue relationship matrix 810, and then flattens the matrix 810 into a one-dimensional symptom/physiological-tissue relationship matrix 820. The disease classification result 340 (the one-dimensional confidence-value matrix) and the one-dimensional matrix 820 are input to the machine learning module 380, which runs a machine-learning algorithm to obtain the image interpretation result 390.
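The flatten-and-concatenate step can be sketched in plain Python; the function and argument names are illustrative:

```python
def fuse_model_outputs(class_confidences, relation_matrix):
    """Flatten the 2-D symptom/physiological-tissue relation matrix
    into a 1-D list and concatenate it with the classification model's
    1-D confidence vector, forming one input row for the ML module."""
    flat_relation = [value for row in relation_matrix for value in row]
    return list(class_confidences) + flat_relation
```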

In an embodiment of the present disclosure, the machine learning algorithm is, for example but not limited to, a decision tree. FIG. 9 is a schematic diagram of the machine learning algorithm according to an embodiment of the present disclosure. As shown in FIG. 9, node 910 determines whether the confidence value F_cls is less than 0.158. If node 910 is false, the flow proceeds to node 915 and determines that DME is present. If node 910 is true, the flow proceeds to node 920, which determines whether the confidence value F_cls is less than 0.016. If node 920 is true, the flow proceeds to node 930 and determines that DME is absent. If node 920 is false, the flow proceeds to node 925, which determines whether the confidence value F_SL is less than 0.221. If node 925 is false, the flow proceeds to node 935 and determines that DME is present. If node 925 is true, the flow proceeds to node 940 and determines that DME is absent.
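The decision tree of FIG. 9 can be written as nested conditions. The function and argument names are assumptions; the thresholds are those shown in the figure.

```python
def dme_decision(f_cls, f_sl):
    """Decision tree of FIG. 9: f_cls is the classification confidence value,
    f_sl a quantified symptom feature (both names are assumptions)."""
    if not (f_cls < 0.158):      # node 910 is false
        return "DME"             # node 915
    if f_cls < 0.016:            # node 920 is true
        return "no DME"          # node 930
    if not (f_sl < 0.221):       # node 925 is false
        return "DME"             # node 935
    return "no DME"              # node 940

print(dme_decision(0.2, 0.0))    # node 910 false -> DME
print(dme_decision(0.01, 0.5))   # node 920 true -> no DME
print(dme_decision(0.1, 0.3))    # node 925 false -> DME
print(dme_decision(0.1, 0.1))    # node 925 true -> no DME
```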

FIG. 10 shows an example of the image interpretation result 390 displayed on the display unit 230 according to an embodiment of the present disclosure. As shown in FIG. 10, the image interpretation result 390 includes: the original medical image RMI (for example but not limited to an eye image, on which the symptoms and physiological tissues are further displayed), a DR interpretation result (binary classification) 1010, a DR interpretation result (five-class classification) 1020, and a DME interpretation result 1030. Of course, FIG. 10 is merely one example of displaying the image interpretation result 390, and the present disclosure is not limited thereto.

FIG. 11 is a flowchart of a medical image analysis method according to an embodiment of the present disclosure. As shown in FIG. 11, in step 1105, an original medical image is read. In step 1110, it is determined whether the size of the original medical image is smaller than a predetermined size threshold. If the result of step 1110 is no, the size of the original medical image is adjusted in step 1115 to be smaller than the predetermined size threshold. If the result of step 1110 is yes, then in step 1120 multiple complementary AI models (such as a classification model, a symptom detection model, and a physiological tissue detection model) perform image classification and object detection on the medical image to obtain a first classification result and multiple object detection results. In step 1125, object features (for example but not limited to symptoms and physiological tissues) of two of the detection results are integrated and quantitatively converted to obtain a quantification result. In step 1130, machine learning is performed on the quantification result and the first classification result to obtain an image interpretation result, and the image interpretation result is displayed.
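The flow of steps 1105 through 1130 can be sketched with placeholder callables standing in for the AI models. The size threshold, the dictionary image representation, and all of the stand-in lambdas below are assumptions made purely to show the call pattern.

```python
def analyze_medical_image(image, classify, detect_symptoms, detect_tissues,
                          integrate, ml_decide, size_threshold=1024):
    """Sketch of the FIG. 11 flow; every callable and the threshold are
    hypothetical stand-ins, not the patent's actual models."""
    # Steps 1110/1115: shrink the image until it is below the size threshold.
    while max(image["height"], image["width"]) >= size_threshold:
        image = {"height": image["height"] // 2,
                 "width": image["width"] // 2,
                 "pixels": image["pixels"]}
    # Step 1120: complementary AI models (classification + two detectors).
    cls_result = classify(image)
    sym_result = detect_symptoms(image)
    tis_result = detect_tissues(image)
    # Step 1125: integrate and quantify two of the detection results.
    quantified = integrate(sym_result, tis_result)
    # Step 1130: machine learning on the classification and quantified results.
    return ml_decide(cls_result, quantified)

# Toy stand-ins showing the call pattern only:
result = analyze_medical_image(
    {"height": 2048, "width": 2048, "pixels": None},
    classify=lambda img: [0.1, 0.9],
    detect_symptoms=lambda img: [("HE", 0.8)],
    detect_tissues=lambda img: [("disc", 0.95)],
    integrate=lambda s, t: len(s) * len(t),
    ml_decide=lambda c, q: "DME" if max(c) > 0.5 and q > 0 else "no DME",
)
print(result)  # -> DME
```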

The original medical image may be a fundus image (see FIG. 5A) or a non-fundus image (see FIG. 5B); both fall within the spirit of the present disclosure.

The details of steps 1120, 1125, and 1130 are as described above and are not repeated here.

The performance of the prior art and of the present embodiment is compared below using the DME disease-severity results:

Method                                              Sensitivity  Specificity  Accuracy
Classification model only                           84.61%       93.46%       93.56%
Symptom and physiological tissue detection models   82.52%       93.36%       92.47%
Present embodiment                                  90.91%       94.24%       93.97%

As seen from the table above, the present embodiment improves sensitivity, specificity, and accuracy alike.
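For reference, the three measures in the table follow the standard confusion-matrix definitions. The counts below are invented for illustration; the patent reports only the resulting percentages.

```python
def metrics(tp, fp, tn, fn):
    """Standard definitions of the three measures in the table above."""
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall agreement
    return sensitivity, specificity, accuracy

# Hypothetical confusion counts (200 cases total):
sens, spec, acc = metrics(tp=90, fp=6, tn=94, fn=10)
print(sens, spec, acc)
```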

The embodiments of the present disclosure provide a method and system for medical image analysis assisted by symptom detection and physiological tissue detection, and in particular a method and system for medical image analysis that combines multi-task AI models.

In the embodiments of the present disclosure, AI models with different but related tasks (such as a classification model, a symptom detection model, and a physiological tissue detection model) are combined for complementarity, which effectively improves the overall system's performance in classifying lesion severity and thereby the accuracy of medical image interpretation. In the embodiments, the classification model compensates for the weaknesses of the detection models (and vice versa); letting the models complement one another overcomes the drawbacks of relying on a single model and thus reduces the misjudgment rate.

In the embodiments of the present disclosure, the broad interpretation result of the classification model is combined with results based on pathological analysis to obtain association information between symptoms and physiological tissues, and machine learning then finds the best decision. This overcomes common problems of the prior art, for example: a plain CNN classification model is shift-invariant and cannot accurately determine the relative positions of symptoms and physiological tissues; small symptoms are easily misjudged; and in certain situations (such as when a symptom completely covers a physiological tissue so that the physiological tissue model cannot recognize it), misjudgment is likely.

The present invention finds the best output decision rules (prediction rules) through machine learning, which is also applicable to applications beyond medical image recognition.

The embodiments of the present disclosure can learn the structural distribution contained in the data (data-driven), including high-dimensional, highly abstract data distribution structures; they are relatively objective because no rules need to be set subjectively by humans; through expert annotation they can directly learn experts' experience and knowledge, avoiding situations in which experts cannot express their knowledge and experience in words; and they adapt to new data independently.

In summary, the embodiments of the present disclosure improve the performance of the lesion screening system and offer high interpretability, and the learned rules can in the future be generalized to fuzzy systems for use as fuzzy rules.

While the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Those of ordinary skill in the art to which the invention pertains may make various changes and modifications without departing from the spirit and scope of the invention. Accordingly, the scope of protection of the invention shall be defined by the appended claims.

100: original medical image; 110: macular center; R: optic disc diameter; HE: hard exudate; 111: optic disc; 200: medical image analysis device; 210: processor; 220: database; 230: display unit; RMI: original medical image; 310: classification model; 320: symptom detection model; 330: physiological tissue detection model; 340: disease classification result; 350: symptom detection result; 360: physiological tissue detection result; 370: feature integration conversion module; 380: machine learning module; 390: image interpretation result; 410: classification model; 420: disease classification result; 810: symptom-physiological-tissue relationship matrix; 820: one-dimensional symptom-physiological-tissue relationship matrix; 910-940: nodes; 1010: DR interpretation result (binary classification); 1020: DR interpretation result (five-class classification); 1030: DME interpretation result; 1105-1130: steps

FIG. 1 shows diabetic macular edema. FIG. 2 is a functional schematic diagram of a medical image analysis device according to an embodiment of the present disclosure. FIG. 3 is a schematic diagram of medical image analysis according to an embodiment of the present disclosure. FIG. 4 is a schematic diagram of medical image analysis according to another embodiment of the present disclosure. FIG. 5A is a schematic diagram of dividing a fundus image into four quadrants with the optic disc as the center point in an embodiment of the present disclosure. FIG. 5B shows an example of a non-diabetic fundus image in another embodiment of the present disclosure. FIG. 6 is a schematic diagram of the angle quadrant membership function according to an embodiment of the present disclosure. FIG. 7 shows the integrated symptom-physiological-tissue relationship matrix according to an embodiment of the present disclosure. FIG. 8 is a schematic diagram of merging the outputs of multiple AI models for input to the machine learning module according to an embodiment of the present disclosure. FIG. 9 is a schematic diagram of the machine learning algorithm according to an embodiment of the present disclosure. FIG. 10 shows an example of the image interpretation result displayed on the display unit according to an embodiment of the present disclosure. FIG. 11 is a flowchart of the medical image analysis method according to an embodiment of the present disclosure.

RMI: original medical image

310: classification model

320: symptom detection model

330: physiological tissue detection model

340: disease classification result

350: symptom detection result

360: physiological tissue detection result

370: feature integration conversion module

380: machine learning module

390: image interpretation result

Claims (26)

1. A medical image analysis method, comprising: reading an original medical image; performing image classification and object detection on the original medical image using a first classification model, a symptom detection model, and a physiological tissue detection model to obtain a first classification result, a symptom detection result, and a physiological tissue detection result, respectively; performing, by a feature integration conversion module, integration and quantitative conversion of object features on the symptom detection result and the physiological tissue detection result to obtain a quantification result, the quantification result being related to an association between a symptom and a physiological tissue obtained from the symptom detection result and the physiological tissue detection result; and performing, by a machine learning module, machine learning on the quantification result and the first classification result to obtain and display an image interpretation result; wherein the feature integration conversion module obtains a scalar or a vector from the symptom detection result and the physiological tissue detection result through quantitative conversion; wherein, when the feature integration conversion module performs quantification, for the relationship between a plurality of quadrants and a symptom, the degree of quantification has a first positive relationship with an angle quadrant membership degree, an area, a confidence value, or any combination thereof; and wherein, when the feature integration conversion module performs quantification, for the relationship between a physiological tissue and the symptom, the degree of quantification has a second positive relationship with a reciprocal of a distance between the symptom and the physiological tissue, the area, the confidence value, or any combination thereof.

2. The medical image analysis method of claim 1, further comprising: when a processor determines that a size of the original medical image is not smaller than a predetermined size threshold, adjusting, by the processor, the size of the original medical image to be smaller than the predetermined size threshold.

3. The medical image analysis method of claim 1, wherein the first classification model analyzes the original medical image to obtain the first classification result, the first classification result being a disease classification result; the symptom detection model analyzes the original medical image to obtain the symptom detection result, the symptom detection result comprising a position, an area, and a confidence value of each symptom and a total symptom count of each symptom type; and the physiological tissue detection model analyzes the original medical image to obtain the physiological tissue detection result, the physiological tissue detection result comprising a position, a first length, a second length, and a confidence value of each physiological tissue.

4. The medical image analysis method of claim 3, wherein the area of each symptom is calculated by a processor from a vertical length and a horizontal length of a bounding box detected by the symptom detection model.

5. The medical image analysis method of claim 3, wherein the complementary artificial intelligence models further comprise a second classification model; the physiological tissue detection result is input to the second classification model; and the second classification model selects a target region from the original medical image according to the physiological tissue detection result and analyzes the selected target region to obtain a second classification result, the second classification result being input to the machine learning module.

6. The medical image analysis method of claim 1, wherein the first positive relationship between the quadrants and the symptom relates to a first operation result of the angle quadrant membership degree and a first parameter, a second operation result of the area and a second parameter, and a third operation result of the confidence value and a third parameter; and the second positive relationship between the physiological tissue and the symptom relates to a fourth operation result of the reciprocal of the distance between the symptom and the physiological tissue and a fourth parameter, a fifth operation result of the area and a fifth parameter, and a sixth operation result of the confidence value and a sixth parameter.

7. The medical image analysis method of claim 6, wherein the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter, and the sixth parameter are obtained through data learning.

8. The medical image analysis method of claim 7, wherein the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter, and the sixth parameter are updated through backpropagation or Bayesian optimization.

9. The medical image analysis method of claim 1, wherein, when the angle quadrant membership degree is implemented with fuzzy theory, a plurality of fuzzy sets are defined by a processor; when the angle quadrant membership degree is expressed as a function, an output of the function is between 0 and 1; and a shape of the angle quadrant membership function is a trapezoid, a triangle, or any combination thereof, the shape being trainable.

10. The medical image analysis method of claim 1, wherein the symptom detection result is a symptom result matrix, each row of the symptom result matrix comprising: a type of the symptom, a position of the symptom, a horizontal length of the symptom, a vertical length of the symptom, and a confidence value of the symptom; and the physiological tissue detection result is a physiological tissue result matrix, each row of the physiological tissue result matrix comprising: a type of the physiological tissue, a position of the physiological tissue, a horizontal length of the physiological tissue, a vertical length of the physiological tissue, and a confidence value of the physiological tissue.

11. The medical image analysis method of claim 10, wherein the feature integration conversion module integrates the symptom result matrix and the physiological tissue result matrix into a symptom-physiological-tissue relationship matrix; the first classification result is a one-dimensional confidence value matrix; and the feature integration conversion module flattens the symptom-physiological-tissue relationship matrix into a one-dimensional symptom-physiological-tissue relationship matrix.

12. The medical image analysis method of claim 11, wherein the machine learning module performs machine learning on the one-dimensional confidence value matrix and the one-dimensional symptom-physiological-tissue relationship matrix to obtain the image interpretation result.

13. The medical image analysis method of claim 12, wherein the image interpretation result comprises a medical image showing the symptom and the physiological tissue and at least one interpretation result; and the original medical image comprises a fundus image or a non-fundus image.

14. A medical image analysis device, comprising: a processor; and a display unit coupled to the processor, wherein the processor is configured to: read an original medical image; perform image classification and object detection on the original medical image using a first classification model, a symptom detection model, and a physiological tissue detection model to obtain a first classification result, a symptom detection result, and a physiological tissue detection result, respectively; perform, by a feature integration conversion module, integration and quantitative conversion of object features on the symptom detection result and the physiological tissue detection result to obtain a quantification result, the quantification result being related to an association between a symptom and a physiological tissue obtained from the symptom detection result and the physiological tissue detection result; and perform, by a machine learning module, machine learning on the quantification result and the first classification result to obtain an image interpretation result, the image interpretation result being displayed on the display unit; wherein the feature integration conversion module obtains a scalar or a vector from the symptom detection result and the physiological tissue detection result through quantitative conversion; wherein, when the feature integration conversion module performs quantification, for the relationship between a plurality of quadrants and a symptom, the degree of quantification has a first positive relationship with an angle quadrant membership degree, an area, a confidence value, or any combination thereof; and wherein, when the feature integration conversion module performs quantification, for the relationship between a physiological tissue and the symptom, the degree of quantification has a second positive relationship with a reciprocal of a distance between the symptom and the physiological tissue, the area, the confidence value, or any combination thereof.

15. The medical image analysis device of claim 14, wherein the processor is configured to: when determining that a size of the original medical image is not smaller than a predetermined size threshold, adjust the size of the original medical image to be smaller than the predetermined size threshold.

16. The medical image analysis device of claim 14, wherein the first classification model analyzes the original medical image to obtain the first classification result, the first classification result being a disease classification result; the symptom detection model analyzes the original medical image to obtain the symptom detection result, the symptom detection result comprising a position, an area, and a confidence value of each symptom and a total symptom count of each symptom type; and the physiological tissue detection model analyzes the original medical image to obtain the physiological tissue detection result, the physiological tissue detection result comprising a position, a first length, a second length, and a confidence value of each physiological tissue.

17. The medical image analysis device of claim 16, wherein the area of each symptom is calculated from a vertical length and a horizontal length of a bounding box detected by the symptom detection model.

18. The medical image analysis device of claim 17, wherein the complementary artificial intelligence models further comprise a second classification model; the physiological tissue detection result is input to the second classification model; and the second classification model selects a target region from the original medical image according to the physiological tissue detection result and analyzes the selected target region to obtain a second classification result, the second classification result being input to the machine learning module.

19. The medical image analysis device of claim 14, wherein the first positive relationship between the quadrants and the symptom relates to a first operation result of the angle quadrant membership degree and a first parameter, a second operation result of the area and a second parameter, and a third operation result of the confidence value and a third parameter; and the second positive relationship between the physiological tissue and the symptom relates to a fourth operation result of the reciprocal of the distance between the symptom and the physiological tissue and a fourth parameter, a fifth operation result of the area and a fifth parameter, and a sixth operation result of the confidence value and a sixth parameter.

20. The medical image analysis device of claim 19, wherein the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter, and the sixth parameter are obtained through data learning.

21. The medical image analysis device of claim 20, wherein the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter, and the sixth parameter are updated through backpropagation or Bayesian optimization.

22. The medical image analysis device of claim 14, wherein, when the angle quadrant membership degree is implemented with fuzzy theory, a plurality of fuzzy sets are defined; when the angle quadrant membership degree is expressed as a function, an output of the function is between 0 and 1; and a shape of the angle quadrant membership function is a trapezoid, a triangle, or any combination thereof, the shape being trainable.

23. The medical image analysis device of claim 14, wherein the symptom detection result is a symptom result matrix, each row of the symptom result matrix comprising: a type of the symptom, a position of the symptom, a horizontal length of the symptom, a vertical length of the symptom, and a confidence value of the symptom; and the physiological tissue detection result is a physiological tissue result matrix, each row of the physiological tissue result matrix comprising: a type of the physiological tissue, a position of the physiological tissue, a horizontal length of the physiological tissue, a vertical length of the physiological tissue, and a confidence value of the physiological tissue.

24. The medical image analysis device of claim 23, wherein the feature integration conversion module integrates the symptom result matrix and the physiological tissue result matrix into a symptom-physiological-tissue relationship matrix; the first classification result is a one-dimensional confidence value matrix; and the feature integration conversion module flattens the symptom-physiological-tissue relationship matrix into a one-dimensional symptom-physiological-tissue relationship matrix.

25. The medical image analysis device of claim 24, wherein the machine learning module performs machine learning on the one-dimensional confidence value matrix and the one-dimensional symptom-physiological-tissue relationship matrix to obtain the image interpretation result.

26. The medical image analysis device of claim 25, wherein the image interpretation result comprises a medical image showing the symptom and the physiological tissue and at least one interpretation result; and the original medical image comprises a fundus image or a non-fundus image.
TW110137780A 2020-12-16 2021-10-12 Medical image analysis method and device TWI836280B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111296777.3A CN114638781A (en) 2020-12-16 2021-11-03 Medical image analysis method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW109144443 2020-12-16
TW109144443 2020-12-16

Publications (2)

Publication Number Publication Date
TW202226270A TW202226270A (en) 2022-07-01
TWI836280B true TWI836280B (en) 2024-03-21

Family

ID=83437036

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110137780A TWI836280B (en) 2020-12-16 2021-10-12 Medical image analysis method and device

Country Status (1)

Country Link
TW (1) TWI836280B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201926359A (en) * 2017-11-30 2019-07-01 美商南坦生物組學有限責任公司 Detecting intratumor heterogeneity of molecular subtypes in pathology slide images using deep-learning
US20190286990A1 (en) * 2018-03-19 2019-09-19 AI Certain, Inc. Deep Learning Apparatus and Method for Predictive Analysis, Classification, and Feature Detection
US10430946B1 (en) * 2019-03-14 2019-10-01 Inception Institute of Artificial Intelligence, Ltd. Medical image segmentation and severity grading using neural network architectures with semi-supervised learning techniques
CN110516759A (en) * 2019-09-02 2019-11-29 Henan Normal University Machine learning-based soft tissue sarcoma metastasis risk prediction system
US20200160510A1 (en) * 2018-11-20 2020-05-21 International Business Machines Corporation Automated Patient Complexity Classification for Artificial Intelligence Tools
US20200211180A1 (en) * 2013-10-12 2020-07-02 H. Lee Moffitt Cancer Center And Research Institute, Inc. Systems and methods for diagnosing tumors in a subject by performing a quantitative analysis of texture-based features of a tumor object in a radiological image
TW202036592A (en) * 2019-03-19 2020-10-01 Wistron Corporation Image identifying method and image identifying device
TW202040585A (en) * 2019-02-07 2020-11-01 Vysioneer Inc. Method and apparatus for automated target and tissue segmentation using multi-modal imaging and ensemble machine learning models


Also Published As

Publication number Publication date
TW202226270A (en) 2022-07-01

Similar Documents

Publication Publication Date Title
Wu et al. Coarse-to-fine classification for diabetic retinopathy grading using convolutional neural network
Liang et al. A transfer learning method with deep residual network for pediatric pneumonia diagnosis
Lv et al. Attention guided U-Net with atrous convolution for accurate retinal vessels segmentation
Jiang et al. Learning efficient, explainable and discriminative representations for pulmonary nodules classification
Lu et al. Quantifying Parkinson’s disease motor severity under uncertainty using MDS-UPDRS videos
Xue et al. Deep membrane systems for multitask segmentation in diabetic retinopathy
Singh et al. Deep learning system applicability for rapid glaucoma prediction from fundus images across various data sets
US11610306B2 (en) Medical image analysis method and device
Ramachandran et al. A deep learning framework for the detection of Plus disease in retinal fundus images of preterm infants
Chhabra et al. A smart healthcare system based on classifier DenseNet 121 model to detect multiple diseases
Yadav et al. Microaneurysm detection using color locus detection method
Chen et al. Generative adversarial network based cerebrovascular segmentation for time-of-flight magnetic resonance angiography image
Dhindsa et al. Grading prenatal hydronephrosis from ultrasound imaging using deep convolutional neural networks
Mathews et al. A comprehensive review on automated systems for severity grading of diabetic retinopathy and macular edema
Kaya Feature fusion-based ensemble CNN learning optimization for automated detection of pediatric pneumonia
Jayachandran et al. Multi-dimensional cascades neural network models for the segmentation of retinal vessels in colour fundus images
Bao et al. Orbital and eyelid diseases: The next breakthrough in artificial intelligence?
Singh et al. A novel hybridized feature selection strategy for the effective prediction of glaucoma in retinal fundus images
Yang et al. NAUNet: lightweight retinal vessel segmentation network with nested connections and efficient attention
WO2021155829A1 (en) Medical imaging-based method and device for diagnostic information processing, and storage medium
Alshayeji et al. Two-stage framework for diabetic retinopathy diagnosis and disease stage screening with ensemble learning
Wang et al. Optic disc detection based on fully convolutional neural network and structured matrix decomposition
Perumal et al. Microaneurysms detection in fundus images using local fourier transform and neighbourhood analysis
TWI836280B (en) Medical image analysis method and device
Aurangzeb et al. Retinal vessel segmentation based on the anam-net model