TWI768483B - Method and apparatus for identifying white matter hyperintensities

Publication number
TWI768483B
Authority
TW
Taiwan
Prior art keywords: feature map, generate, white matter, image, output
Prior art date
Application number
TW109133700A
Other languages
Chinese (zh)
Other versions
TW202213378A (en)
Inventor
李宜恬
彥廷 陳
林敬庭
Original Assignee
臺北醫學大學
Priority date
Filing date
Publication date
Application filed by 臺北醫學大學 filed Critical 臺北醫學大學
Priority to TW109133700A priority Critical patent/TWI768483B/en
Publication of TW202213378A publication Critical patent/TW202213378A/en
Application granted granted Critical
Publication of TWI768483B publication Critical patent/TWI768483B/en


Abstract

The present disclosure is related to a method and an apparatus for identifying white matter hyperintensities (WMH). Some embodiments of the present disclosure are related to a method for identifying white matter hyperintensities (WMH). The method comprises: receiving an input image; performing a first set of convolutions; and performing an output convolution so as to produce at least two image segmentations, the output convolution including performing a convolution with at least two kernel maps.

Description

Method and apparatus for identifying white matter hyperintensities

The present disclosure generally relates to a method and apparatus for image recognition. More specifically, the present disclosure relates to a method and apparatus for identifying white matter hyperintensities (WMH).

In the field of medical imaging, image interpretation is generally performed by specialist physicians. For example, after a brain image is acquired by Magnetic Resonance Imaging (MRI), a neurologist can mark lesions on the image. By referring to the marked lesions, the patient's chief complaint, and other medical examination results, the specialist can determine the etiology and pathogenesis of the disease, and accordingly make a diagnosis and plan treatment. In the field of brain medicine, white matter hyperintensities (WMH) are often regarded by physicians as possible white matter lesions.

With the development of image recognition technology, image recognition can now be used for the preliminary marking of medical images, increasing the efficiency and accuracy with which specialist physicians interpret them.

Some embodiments of the present disclosure relate to a method of identifying white matter hyperintensities. The method includes: receiving an input image; performing a first set of convolutions; and performing an output convolution to generate at least two image segmentations, the output convolution including performing a convolution with at least two kernel maps.

Some embodiments of the present disclosure relate to a method for quantifying white matter hyperintensities. The method includes: receiving a plurality of input images; performing, for each of the plurality of input images, a method of identifying white matter hyperintensities to generate a plurality of gray matter image segmentations, a plurality of white matter image segmentations, a plurality of white matter hyperintensity image segmentations, and a plurality of cerebrospinal fluid image segmentations; generating a gray matter volume based on the plurality of gray matter image segmentations; generating a white matter volume based on the plurality of white matter image segmentations; generating a white matter hyperintensity volume based on the plurality of white matter hyperintensity image segmentations; generating a cerebrospinal fluid volume based on the plurality of cerebrospinal fluid image segmentations; and performing image grading based on the gray matter volume, the white matter volume, the white matter hyperintensity volume, the cerebrospinal fluid volume, and the slice thickness of the plurality of input images.

Some embodiments of the present disclosure relate to an apparatus for identifying white matter hyperintensities. The apparatus includes a processor and a memory coupled to the processor. The processor executes computer-readable instructions stored in the memory to perform the following operations: receiving at least one input image; and, for each of the at least one input image, performing a first set of convolutions and performing an output convolution to generate at least two image segmentations, the output convolution including performing a convolution with at least two kernel maps.

Methods, systems, and other aspects of the invention are described. Reference will be made to certain embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with these embodiments, it will be understood that it is not intended to limit the invention to these specific embodiments. On the contrary, the invention is intended to cover alternatives, modifications, and equivalents within its spirit and scope. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Furthermore, in the following description, numerous specific details are set forth to provide a thorough understanding of the invention. However, one of ordinary skill in the art will be able to practice the invention without these specific details. In other instances, methods, procedures, operations, components, and networks well known to those of ordinary skill in the art are not described in detail, to avoid obscuring aspects of the invention.

Some embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.

FIG. 1 illustrates an image recognition flow according to some embodiments of the present disclosure. FIG. 1 shows an input image 101 and output images 201, 203, 205, 207, 209, and 211. As shown in FIG. 1, the input image 101 may be processed by an image processing program 300, which may output two or more images, such as output images 201, 203, 205, 207, 209, and 211.

The input image 101 may be a planar or 2D image. In other embodiments, the input image 101 may be a volumetric or 3D image. In some embodiments, the input image 101 may be an MRI image. For example, the input image 101 may be a FLAIR (Fluid-Attenuated Inversion Recovery) image; it may also be a T2-FLAIR image.

The image processing program 300 may segment the input image 101 into multiple images according to the characteristics of different tissues. The output images 201, 203, 205, 207, 209, and 211 may indicate different tissues. In some embodiments, output image 201 may indicate gray matter; output image 203 may indicate white matter; output image 205 may indicate white matter hyperintensities; output image 207 may indicate cerebrospinal fluid; output image 209 may indicate the scalp and/or skull; and output image 211 may indicate air.

In some embodiments, the image processing program 300 may output two or more images from the input image 101. For example, the image processing program 300 may output two images from the input image 101, one indicating white matter and one indicating white matter hyperintensities. As another example, the image processing program 300 may output three images from the input image 101, one indicating gray matter, one indicating white matter, and one indicating white matter hyperintensities.

The image processing program 300 may be a 2D or 3D image recognition program for recognizing features in an image. It may be an image recognition program generated by machine learning, or by deep learning; in particular, it may be generated by backpropagation training of a convolutional neural network. The image processing program 300 may itself be a convolutional neural network, such as a convolutional neural network for image semantic segmentation. It may be a convolutional neural network with a U-Net architecture, a multi-class-output convolutional neural network with a U-Net architecture, a convolutional neural network with a U-SegNet architecture, or a multi-class-output convolutional neural network with a U-SegNet architecture.

In some embodiments of the present disclosure, the image processing program 300 may be a convolutional neural network generated by backpropagation training of a convolutional neural network. In the embodiments of the present disclosure, the cases used to train and test the convolutional neural network are listed in Table 1. The process of generating a convolutional neural network by backpropagation is not detailed in this disclosure.

Table 1
Fazekas grade      Training dataset    Test dataset
Grade I            834                 65
Grade II           339                 22
Grade III          195                 13
Total cases        1368                100

FIG. 2 illustrates a grading flow according to some embodiments of the present disclosure. Following the image recognition flow of FIG. 1, when multiple input images 101 are fed to the image processing program 300, the program may correspondingly output multiple output images 201, multiple output images 203, multiple output images 205, multiple output images 207, multiple output images 209, and/or multiple output images 211. For example, 100 brain images (e.g., 100 images of the same patient) may be input sequentially into the image processing program 300, which may correspondingly output 100 output images 201, 100 output images 203, 100 output images 205, 100 output images 207, 100 output images 209, and/or 100 output images 211.

Referring to FIG. 2, a volume 231 may be computed from the multiple output images 201; a volume 233 from the multiple output images 203; a volume 235 from the multiple output images 205; and a volume 237 from the multiple output images 207. In some embodiments, output image 201 indicates gray matter; output image 203 indicates white matter; output image 205 indicates white matter hyperintensities; and output image 207 indicates cerebrospinal fluid. In these embodiments, the gray matter volume 231 may be obtained by computing the gray matter area of each of the multiple output images 201; the white matter volume 233 may be obtained by computing the white matter area of each of the multiple output images 203; the white matter hyperintensity volume 235 may be obtained by computing the white matter hyperintensity area of each of the multiple output images 205; and the cerebrospinal fluid volume 237 may be obtained by computing the cerebrospinal fluid area of each of the multiple output images 207.
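The area-to-volume step above can be sketched as follows. This is a minimal NumPy illustration only: the function name `tissue_volume` and the pixel-area and slice-thickness values are invented for the example and are not taken from the disclosure.

```python
import numpy as np

def tissue_volume(masks, pixel_area_mm2, slice_thickness_mm):
    """Sum the segmented area of every slice, then scale by voxel geometry.

    masks: boolean array (n_slices, H, W), one segmentation per slice
    (e.g., the 100 output images 205 for white matter hyperintensities).
    """
    area_per_slice = masks.sum(axis=(1, 2)) * pixel_area_mm2   # mm^2 per slice
    return float(area_per_slice.sum() * slice_thickness_mm)    # mm^3

# 100 tiny stand-in slices, each with one positive pixel
wmh_masks = np.zeros((100, 4, 4), dtype=bool)
wmh_masks[:, 1, 1] = True
volume = tissue_volume(wmh_masks, pixel_area_mm2=0.25, slice_thickness_mm=5.0)
# 100 slices x 1 px x 0.25 mm^2 x 5 mm = 125.0 mm^3
```

The same function would be applied once per tissue class to obtain volumes 231, 233, 235, and 237.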

In some embodiments, the image processing program 300 may output two or more images from the input image 101. For example, from 100 input images 101 the image processing program 300 may output two sets of images: the first output set may contain 100 images indicating white matter, and the second output set 100 images indicating white matter hyperintensities. In these embodiments, the white matter volume may be obtained by computing the white matter area of each image in the first output set, and the white matter hyperintensity volume may be obtained by computing the white matter hyperintensity area of each image in the second output set.

In some embodiments, the image processing program 300 may output three sets of images from the input images 101. For example, from 100 input images 101 it may output three sets of images: the first set may contain 100 images indicating gray matter, the second set 100 images indicating white matter, and the third set 100 images indicating white matter hyperintensities. In these embodiments, the gray matter volume may be obtained by computing the gray matter area of each image in the first output set; the white matter volume may be obtained by computing the white matter area of each image in the second output set; and the white matter hyperintensity volume may be obtained by computing the white matter hyperintensity area of each image in the third output set.

According to some embodiments of FIG. 2, the gray matter volume 231, white matter volume 233, white matter hyperintensity volume 235, and cerebrospinal fluid volume 237 may be input to a grading program 800. After processing by the grading program 800, a grading result 900 may be output. In some embodiments, in addition to the gray matter volume 231, white matter volume 233, white matter hyperintensity volume 235, and cerebrospinal fluid volume 237, image parameters 239 may optionally be input to the grading program 800 to increase its efficiency or the accuracy of the grading result 900. The image parameters 239 may include image parameters of the input images 101, such as the slice thickness of the input images 101.

In some embodiments, the grading program 800 may be a gradient boosting program. Gradient boosting is a machine learning technique for regression and classification problems. In some embodiments, the grading program 800 may be an XGBoost (eXtreme Gradient Boosting) program. In some embodiments of the present disclosure, the grading program 800 may be a gradient boosting program generated by machine training. In the embodiments of the present disclosure, the cases used to train and test the gradient boosting or XGBoost program are listed in Table 1. The process of generating a gradient boosting or XGBoost program by machine learning is not detailed in this disclosure.
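Since the trained booster itself is not detailed in the disclosure, the sketch below only shows the plumbing around grading program 800: how the four volumes and the slice thickness might be assembled into a feature vector. The threshold rule in `fazekas_grade_stub` is purely hypothetical, invented here as a stand-in for the trained XGBoost model; a real embodiment would call the trained model instead.

```python
def grading_features(gm_vol, wm_vol, wmh_vol, csf_vol, slice_thickness):
    # Feature order is an assumption, not taken from the disclosure.
    return [gm_vol, wm_vol, wmh_vol, csf_vol, slice_thickness]

def fazekas_grade_stub(features):
    """Hypothetical stand-in for grading program 800: grades by the WMH
    fraction of white matter volume. All thresholds are invented for
    illustration only."""
    _, wm_vol, wmh_vol, _, _ = features
    ratio = wmh_vol / wm_vol if wm_vol else 0.0
    if ratio == 0.0:
        return 0          # no white matter hyperintensities
    if ratio < 0.01:
        return 1          # punctate foci / thin lines
    if ratio < 0.05:
        return 2          # beginning confluence
    return 3              # large confluent areas

# Illustrative volumes (mm^3) and slice thickness (mm); values are invented.
feats = grading_features(620_000.0, 480_000.0, 3_000.0, 150_000.0, 5.0)
grade = fazekas_grade_stub(feats)
```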

The grading result 900 may be a result according to the Fazekas grades. On the Fazekas scale, grade 0 may indicate no white matter hyperintensities or no white matter lesions; grade 1 may indicate that the white matter hyperintensity area (or white matter lesion area) consists of punctate foci or thin lines; grade 2 may indicate that the white matter hyperintensity areas (or white matter lesion areas) are beginning to become confluent; and grade 3 may indicate that the white matter hyperintensity areas (or white matter lesion areas) form large confluent areas.

FIG. 3A illustrates an image processing program 300 according to some embodiments of the present disclosure. The input image 101 may be 512×512×1, i.e., a 512×512 image with one channel, or a 512×512 image with one dimension. The units of 512×512, 256×256, 128×128, or 64×64 in this disclosure may be pixels or other suitable units.

Operation 301 is performed on the input image 101 to output a feature map set 102. Operation 301 may be a convolution with multiple kernel maps. In some embodiments, operation 301 may be a convolution with 32 kernel maps, and the output feature map set 102 may contain 32 feature maps. In some embodiments, operation 301 may include zero padding. Each feature map in feature map set 102 may be 512×512. In some embodiments, operation 301 may include batch normalization and Rectified Linear Unit (ReLU) processing.
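The convolution operation just described (zero padding, convolution with multiple kernel maps, batch normalization, ReLU) can be sketched in NumPy. This is an illustrative toy only: the sizes are shrunk (an 8×8 image and 4 random 3×3 kernels stand in for the 512×512 input and 32 learned kernel maps), the kernels are random rather than trained, and a real embodiment would use a deep learning framework.

```python
import numpy as np

def conv_block(x, kernels, eps=1e-5):
    """Zero-padded ("same") convolution with several kernel maps, followed
    by per-map batch normalization and ReLU, as in operation 301."""
    n_k, kh, kw = kernels.shape
    ph, pw = kh // 2, kw // 2
    # zero padding keeps the spatial size (e.g., 512x512 in, 512x512 out)
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    h, w = x.shape
    out = np.empty((n_k, h, w))
    for k in range(n_k):
        for i in range(h):
            for j in range(w):
                # cross-correlation, as commonly implemented for CNN "convolution"
                out[k, i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernels[k])
    # batch normalization per feature map, then ReLU
    mean = out.mean(axis=(1, 2), keepdims=True)
    var = out.var(axis=(1, 2), keepdims=True)
    out = (out - mean) / np.sqrt(var + eps)
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))        # stand-in for the 512x512 input 101
kernels = rng.standard_normal((4, 3, 3))   # stand-in for 32 kernel maps
maps = conv_block(image, kernels)          # stand-in for feature map set 102
```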

Operation 302 is performed on feature map set 102 to output a feature map set 103. Operation 302 may be a convolution with multiple kernel maps. In some embodiments, operation 302 may be a convolution with 64 kernel maps, and the output feature map set 103 may contain 64 feature maps. In some embodiments, operation 302 may include zero padding. Each feature map in feature map set 103 may be 512×512. In some embodiments, operation 302 may include batch normalization and ReLU processing. Operations 301 and 302 may be regarded as one set of convolutions.

Operation 307 is performed on feature map set 103 to output a set of output images. Operation 307 may be an output convolution. In some embodiments, operation 307 may include zero padding. Operation 307 may be a convolution with multiple kernel maps, and in particular with at least two kernel maps. In some embodiments, operation 307 may be a convolution with 6 kernel maps to generate output images 201, 203, 205, 207, 209, and 211. Output image 201 may indicate gray matter; output image 203 may indicate white matter; output image 205 may indicate white matter hyperintensities; output image 207 may indicate cerebrospinal fluid; output image 209 may indicate the scalp and/or skull; and output image 211 may indicate air.

In some embodiments, operation 307 may be a convolution with 2 kernel maps to generate two output images, one indicating white matter and one indicating white matter hyperintensities.

In some embodiments, operation 307 may be a convolution with 3 kernel maps to generate three output images, one indicating gray matter, one indicating white matter, and one indicating white matter hyperintensities.

After being processed by the image processing program 300, the input image 101 may yield two or more output images, producing a multi-class semantic segmentation of the input image 101. In single-class semantic segmentation, only one output image is produced after processing, and that output image may indicate white matter hyperintensities. Compared with single-class semantic segmentation, multi-class semantic segmentation can achieve better tissue-type discrimination and can reduce false-positive lesion determinations.
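One way to see why multi-class output reduces false positives: if each pixel is assigned to the single best-scoring class across all class maps, a bright non-lesion tissue cannot simultaneously be labeled as WMH, unlike an independent single-class threshold. The sketch below is a hypothetical illustration; the disclosure does not specify how the class maps are combined into final segmentations.

```python
import numpy as np

rng = np.random.default_rng(1)
scores = rng.standard_normal((6, 4, 4))   # 6 class maps (cf. operation 307), tiny 4x4 example
labels = scores.argmax(axis=0)            # one class label per pixel
# one binary segmentation per class (cf. output images 201..211)
masks = np.stack([labels == c for c in range(6)])
```

Because the class masks come from a single argmax, they are mutually exclusive and jointly cover every pixel.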

FIG. 3B illustrates an image processing program 300 according to some embodiments of the present disclosure. The input image 101 may be 512×512×1, i.e., a 512×512 image with one channel, or a 512×512 image with one dimension.

Operation 301 is performed on the input image 101 to output a feature map set 102. Operation 301 may be a convolution with multiple kernel maps. In some embodiments, operation 301 may be a convolution with 32 kernel maps, and the output feature map set 102 may contain 32 feature maps. In some embodiments, operation 301 may include zero padding. Each feature map in feature map set 102 may be 512×512. In some embodiments, operation 301 may include batch normalization and ReLU processing.

Operation 302 is performed on feature map set 102 to output a feature map set 103. Operation 302 may be a convolution with multiple kernel maps. In some embodiments, operation 302 may be a convolution with 64 kernel maps, and the output feature map set 103 may contain 64 feature maps. In some embodiments, operation 302 may include zero padding. Each feature map in feature map set 103 may be 512×512. In some embodiments, operation 302 may include batch normalization and ReLU processing. Operations 301 and 302 may be regarded as one set of convolutions.

Operation 401 is performed on feature map set 103 to output a feature map set 104. Operation 401 may be a down-sampling process, such as max pooling. In some embodiments, operation 401 may be 2×2 max pooling, and the output feature map set 104 may contain 64 feature maps, each of which may be 256×256.

Operation 303 is performed on feature map set 104 to output a feature map set 105. Operation 303 may be a convolution with multiple kernel maps. In some embodiments, operation 303 may be a convolution with 64 kernel maps, and the output feature map set 105 may contain 64 feature maps. In some embodiments, operation 303 may include zero padding. Each feature map in feature map set 105 may be 256×256. In some embodiments, operation 303 may include batch normalization and ReLU processing.

Operation 304 is performed on feature map set 105 to output a feature map set 106. Operation 304 may be a convolution with multiple kernel maps. In some embodiments, operation 304 may be a convolution with 128 kernel maps, and the output feature map set 106 may contain 128 feature maps. In some embodiments, operation 304 may include zero padding. Each feature map in feature map set 106 may be 256×256. In some embodiments, operation 304 may include batch normalization and ReLU processing. Operations 303 and 304 may be regarded as one set of convolutions.

Operation 601 is performed on feature map set 106 to output a feature map set 107. Operation 601 may be an up-sampling process, such as max unpooling. In some embodiments, operation 601 may be 2×2 max unpooling, and the output feature map set 107 may contain 128 feature maps, each of which may be 512×512. In some embodiments, operation 601 may be performed according to the down-sampling indices (or pooling indices) passed by operation 501. For example, when the 2×2 max pooling of operation 401 is performed, the max pooling indices may be stored (e.g., operation 501), and when operation 601 is performed, 2×2 max unpooling may be carried out according to the stored max pooling indices (e.g., values are written back to the positions indicated by the indices).

When down-sampling from a given resolution, the down-sampling indices (e.g., max pooling indices) are recorded; when up-sampling (e.g., max unpooling) back to the same resolution, the values can be written back to the same positions according to those indices. This kind of unpooling can replace deconvolution, shortening the required training time, reducing the amount of training data needed, and eliminating a large amount of computation. In addition, index-based unpooling reduces the loss of high-frequency information, increases the localization accuracy of small lesions, and provides better tissue boundary segmentation.
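The pool-with-indices / unpool-by-indices pair described above can be sketched as follows. This is a minimal NumPy illustration on a 4×4 array (assuming even height and width); a real embodiment would use a framework's pooling layers, which record and consume indices in the same way.

```python
import numpy as np

def max_pool_2x2_with_indices(x):
    """2x2 max pooling that also records, for each pooled value, the flat
    index of its source position (cf. operations 401 and 501)."""
    h, w = x.shape
    # group pixels into 2x2 blocks: shape (h//2, w//2, 4), row-major per block
    blocks = x.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3).reshape(h // 2, w // 2, 4)
    local = blocks.argmax(axis=2)                      # winner inside each block
    rows = np.arange(h // 2)[:, None] * 2 + local // 2
    cols = np.arange(w // 2)[None, :] * 2 + local % 2
    indices = rows * w + cols                          # flat index into the input
    return blocks.max(axis=2), indices

def max_unpool_2x2(pooled, indices, shape):
    """2x2 max unpooling (cf. operation 601): each value is written back to
    the position recorded by its pooling index; other positions stay zero."""
    out = np.zeros(shape)
    out.flat[indices.ravel()] = pooled.ravel()
    return out

x = np.array([[1., 2., 0., 3.],
              [4., 0., 1., 0.],
              [0., 5., 0., 0.],
              [6., 0., 7., 8.]])
pooled, idx = max_pool_2x2_with_indices(x)   # pooled == [[4, 3], [6, 8]]
restored = max_unpool_2x2(pooled, idx, x.shape)
```

After the round trip, each maximum sits at its original position (e.g., the 4 returns to row 1, column 0), which is what preserves the localization of small, high-frequency structures.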

Operation 701 is performed on feature map sets 103 and 107 to output a feature map set 108. Operation 701 may be a skip connection process. For example, feature map set 103 is stored before the pooling of operation 401 is performed, and is then concatenated with feature map set 107 produced by the unpooling of operation 601, generating feature map set 108. In the exemplary embodiment of FIG. 3B, the number of feature maps in feature map set 108 is 64 plus 128, i.e., 192.
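The channel arithmetic of operation 701 can be sketched in one line: concatenation along the channel axis. Tiny 8×8 maps stand in for the 512×512 maps of the disclosure.

```python
import numpy as np

set_103 = np.zeros((64, 8, 8))    # 64 maps stored before operation 401
set_107 = np.zeros((128, 8, 8))   # 128 maps produced by operation 601
# skip connection (operation 701): concatenate along the channel axis
set_108 = np.concatenate([set_103, set_107], axis=0)
```

The result has 64 + 128 = 192 feature maps, matching the count stated for feature map set 108.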

Skip connection processing expands the feature maps so that multi-scale image information can be combined. It reduces the loss of high-frequency information and provides better tissue boundary segmentation.

Operation 305 is performed on feature map set 108 to output a feature map set 109. Operation 305 may be a convolution with multiple kernel maps. In some embodiments, operation 305 may be a convolution with 64 kernel maps, and the output feature map set 109 may contain 64 feature maps. In some embodiments, operation 305 may include zero padding. Each feature map in feature map set 109 may be 512×512. In some embodiments, operation 305 may include batch normalization and ReLU processing.

Operation 306 is performed on feature map set 109 to output feature map set 110. Operation 306 may be a convolution with multiple kernel maps. In some embodiments, operation 306 may be a convolution with 64 kernel maps, and the output feature map set 110 may include 64 feature maps. In some embodiments, operation 306 may include zero padding. Each feature map in feature map set 110 may be 512×512. In some embodiments, operation 306 may include batch normalization and ReLU processing. Operations 305 and 306 may be regarded as one set of convolutions.
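A "set of convolutions" step as described above — zero-padded convolution, then normalization, then ReLU — can be sketched as follows. This is a minimal single-image NumPy illustration under our own naming, not the patent's implementation; real batch normalization also uses learned scale/shift parameters and statistics across a batch:

```python
import numpy as np

def conv_bn_relu(x, kernels, eps=1e-5):
    """3x3 convolution with zero padding (the output keeps the input's
    spatial size), followed by per-map normalization (batch-norm style)
    and a ReLU nonlinearity."""
    h, w = x.shape
    xp = np.pad(x, 1)  # zero padding so each output map stays h x w
    out = np.empty((len(kernels), h, w), dtype=float)
    for m, k in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                out[m, i, j] = np.sum(xp[i:i + 3, j:j + 3] * k)
        mu, var = out[m].mean(), out[m].var()
        out[m] = (out[m] - mu) / np.sqrt(var + eps)  # normalization
    return np.maximum(out, 0.0)  # ReLU

x = np.arange(16, dtype=float).reshape(4, 4)
kernels = [np.ones((3, 3)), np.eye(3)]  # two illustrative kernel maps
fm = conv_bn_relu(x, kernels)
```

With N kernel maps, the output contains N feature maps of the same spatial size as the input, matching the "64 kernel maps → 64 feature maps of 512×512" pattern above.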

Operation 307 is performed on feature map set 110 to output a set of output images. Operation 307 may be an output convolution. In some embodiments, operation 307 may include zero padding. Operation 307 may be a convolution with multiple kernel maps; in particular, a convolution with at least two kernel maps. In some embodiments, operation 307 may be a convolution with 6 kernel maps to generate output images 201, 203, 205, 207, 209, and 211. Output image 201 may indicate gray matter; output image 203 may indicate white matter; output image 205 may indicate white matter hyperintensities; output image 207 may indicate cerebrospinal fluid; output image 209 may indicate the scalp and/or skull; and output image 211 may indicate air.

In some embodiments, operation 307 may be a convolution with 2 kernel maps to generate two output images, one of which may indicate white matter and the other white matter hyperintensities.

In some embodiments, operation 307 may be a convolution with 3 kernel maps to generate three output images, which may respectively indicate gray matter, white matter, and white matter hyperintensities.

After the input image 101 is processed by the image processing procedure 300, two or more output images can be output, producing a multi-class semantic segmentation of the input image 101. Single-class semantic segmentation, by contrast, outputs only one image after processing, and that output image may indicate white matter hyperintensities. Compared with single-class semantic segmentation, multi-class semantic segmentation achieves better discrimination of tissue types and reduces false-positive lesion determinations.
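The difference between the two output styles can be illustrated as follows. This is a hedged sketch with random score maps; the channel-to-class order is our assumption for illustration only:

```python
import numpy as np

# Hypothetical per-class score maps from the output convolution
# (illustrative channel order): 0 = gray matter, 1 = white matter,
# 2 = WMH, 3 = CSF, 4 = scalp/skull, 5 = air.
scores = np.random.default_rng(0).normal(size=(6, 8, 8))

# Multi-class semantic segmentation: each pixel takes the class whose
# output map scores highest at that position.
labels = scores.argmax(axis=0)

# Single-class segmentation would instead threshold a single WMH map:
wmh_mask = scores[2] > 0.0
```

Because every pixel must win for exactly one class, tissue types compete with the lesion class, which is one intuition for why the multi-class setup reduces false-positive lesion pixels.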

FIG. 4 illustrates an image processing procedure 300 according to some embodiments of the present disclosure. The technical content of operation 301, operation 302, operation 401, operation 303, operation 304, input image 101, and feature map sets 102, 103, 104, 105, and 106 in FIG. 4 is similar or identical to that of the operations, input image, and feature map sets denoted by the same reference numerals in FIG. 3B; their description is therefore not repeated here.

Referring to FIG. 4, operation 402 is performed on feature map set 106, which includes 128 feature maps, to output feature map set 111. Operation 402 may be a downsampling process, such as max pooling. In some embodiments, operation 402 may be 2×2 max pooling, and the output feature map set 111 may include 128 feature maps, each of which may be 128×128.

Operation 308 is performed on feature map set 111 to output feature map set 112. Operation 308 may be a convolution with multiple kernel maps. In some embodiments, operation 308 may be a convolution with 128 kernel maps, and the output feature map set 112 may include 128 feature maps. In some embodiments, operation 308 may include zero padding. Each feature map in feature map set 112 may be 128×128. In some embodiments, operation 308 may include batch normalization and ReLU processing.

Operation 309 is performed on feature map set 112 to output feature map set 113. Operation 309 may be a convolution with multiple kernel maps. In some embodiments, operation 309 may be a convolution with 256 kernel maps, and the output feature map set 113 may include 256 feature maps. In some embodiments, operation 309 may include zero padding. Each feature map in feature map set 113 may be 128×128. In some embodiments, operation 309 may include batch normalization and ReLU processing. Operations 308 and 309 may be regarded as one set of convolutions.

Operation 403 is performed on feature map set 113, which includes 256 feature maps, to output feature map set 114. Operation 403 may be a downsampling process, such as max pooling. In some embodiments, operation 403 may be 2×2 max pooling, and the output feature map set 114 may include 256 feature maps, each of which may be 64×64.

Operation 310 is performed on feature map set 114 to output feature map set 115. Operation 310 may be a convolution with multiple kernel maps. In some embodiments, operation 310 may be a convolution with 256 kernel maps, and the output feature map set 115 may include 256 feature maps. In some embodiments, operation 310 may include zero padding. Each feature map in feature map set 115 may be 64×64. In some embodiments, operation 310 may include batch normalization and ReLU processing.

Operation 311 is performed on feature map set 115 to output feature map set 116. Operation 311 may be a convolution with multiple kernel maps. In some embodiments, operation 311 may be a convolution with 512 kernel maps, and the output feature map set 116 may include 512 feature maps. In some embodiments, operation 311 may include zero padding. Each feature map in feature map set 116 may be 64×64. In some embodiments, operation 311 may include batch normalization and ReLU processing. Operations 310 and 311 may be regarded as one set of convolutions.

Operation 602 is performed on feature map set 116 to output feature map set 117. Operation 602 may be an upsampling process, such as max unpooling. In some embodiments, operation 602 may be 2×2 max unpooling, and the output feature map set 117 may include 512 feature maps, each of which may be 128×128. In some embodiments, operation 602 may be performed according to the downsampling indices (or pooling indices) passed by operation 502. For example, when the 2×2 max pooling of operation 403 is performed, the max pooling indices may be stored (e.g., operation 502), and when operation 602 is performed, 2×2 max unpooling is performed according to those max pooling indices (e.g., values are filled back into the positions indicated by the indices).

Downsampling indices (e.g., max pooling indices) are obtained when downsampling from a given resolution; when upsampling (e.g., max unpooling) back to the same resolution, the values can be filled back into the same positions according to the downsampling indices. Such unpooling can replace deconvolution, which shortens the required training time, reduces the required training data, and eliminates a large amount of computation. In addition, performing unpooling according to the indices reduces the loss of high-frequency information, provides better tissue-boundary segmentation, and increases the positional accuracy of small lesions.

Operation 702 is performed on feature map set 113 and feature map set 117 to output feature map set 118. Operation 702 may be a skip-connection process. For example, feature map set 113 is stored before the pooling of operation 403 is performed, and feature map set 113 is concatenated with feature map set 117, produced by the unpooling of operation 602, to generate feature map set 118. According to the exemplary embodiment of FIG. 4, the number of feature maps in feature map set 118 is 256 plus 512, i.e., 768.

Through skip-connection processing, the feature maps can be expanded to combine multi-scale image information. Skip-connection processing reduces the loss of high-frequency information and provides better tissue-boundary segmentation.

Operation 312 is performed on feature map set 118 to output feature map set 119. Operation 312 may be a convolution with multiple kernel maps. In some embodiments, operation 312 may be a convolution with 256 kernel maps, and the output feature map set 119 may include 256 feature maps. In some embodiments, operation 312 may include zero padding. Each feature map in feature map set 119 may be 128×128. In some embodiments, operation 312 may include batch normalization and ReLU processing.

Operation 313 is performed on feature map set 119 to output feature map set 120. Operation 313 may be a convolution with multiple kernel maps. In some embodiments, operation 313 may be a convolution with 256 kernel maps, and the output feature map set 120 may include 256 feature maps. In some embodiments, operation 313 may include zero padding. Each feature map in feature map set 120 may be 128×128. In some embodiments, operation 313 may include batch normalization and ReLU processing. Operations 312 and 313 may be regarded as one set of convolutions.

Operation 603 is performed on feature map set 120 to output feature map set 121. Operation 603 may be an upsampling process, such as max unpooling. In some embodiments, operation 603 may be 2×2 max unpooling, and the output feature map set 121 may include 256 feature maps, each of which may be 256×256. In some embodiments, operation 603 may be performed according to the downsampling indices (or pooling indices) passed by operation 503. For example, when the 2×2 max pooling of operation 402 is performed, the max pooling indices may be stored (e.g., operation 503), and when operation 603 is performed, 2×2 max unpooling is performed according to those max pooling indices (e.g., values are filled back into the positions indicated by the indices).

Operation 703 is performed on feature map set 106 and feature map set 121 to output feature map set 122. Operation 703 may be a skip-connection process. For example, feature map set 106 is stored before the pooling of operation 402 is performed, and feature map set 106 is concatenated with feature map set 121, produced by the unpooling of operation 603, to generate feature map set 122. According to the exemplary embodiment of FIG. 4, the number of feature maps in feature map set 122 is 128 plus 256, i.e., 384.

Operation 314 is performed on feature map set 122 to output feature map set 123. Operation 314 may be a convolution with multiple kernel maps. In some embodiments, operation 314 may be a convolution with 128 kernel maps, and the output feature map set 123 may include 128 feature maps. In some embodiments, operation 314 may include zero padding. Each feature map in feature map set 123 may be 256×256. In some embodiments, operation 314 may include batch normalization and ReLU processing.

Operation 315 is performed on feature map set 123 to output feature map set 124. Operation 315 may be a convolution with multiple kernel maps. In some embodiments, operation 315 may be a convolution with 128 kernel maps, and the output feature map set 124 may include 128 feature maps. In some embodiments, operation 315 may include zero padding. Each feature map in feature map set 124 may be 256×256. In some embodiments, operation 315 may include batch normalization and ReLU processing. Operations 314 and 315 may be regarded as one set of convolutions.

Operation 601 is performed on feature map set 124 to output feature map set 107. Operation 601 may be an upsampling process, such as max unpooling. In some embodiments, operation 601 may be 2×2 max unpooling, and the output feature map set 107 may include 128 feature maps, each of which may be 512×512. In some embodiments, operation 601 may be performed according to the downsampling indices (or pooling indices) passed by operation 501. For example, when the 2×2 max pooling of operation 401 is performed, the max pooling indices may be stored (e.g., operation 501), and when operation 601 is performed, 2×2 max unpooling is performed according to those max pooling indices (e.g., values are filled back into the positions indicated by the indices).

Operation 701 is performed on feature map set 103 and feature map set 107 to output feature map set 108. Operation 701 may be a skip-connection process. For example, feature map set 103 is stored before the pooling of operation 401 is performed, and feature map set 103 is concatenated with feature map set 107, produced by the unpooling of operation 601, to generate feature map set 108. According to the exemplary embodiment of FIG. 4, the number of feature maps in feature map set 108 is 64 plus 128, i.e., 192.

Operation 305 is performed on feature map set 108 to output feature map set 109. Operation 305 may be a convolution with multiple kernel maps. In some embodiments, operation 305 may be a convolution with 64 kernel maps, and the output feature map set 109 may include 64 feature maps. In some embodiments, operation 305 may include zero padding. Each feature map in feature map set 109 may be 512×512. In some embodiments, operation 305 may include batch normalization and ReLU processing.

Operation 306 is performed on feature map set 109 to output feature map set 110. Operation 306 may be a convolution with multiple kernel maps. In some embodiments, operation 306 may be a convolution with 64 kernel maps, and the output feature map set 110 may include 64 feature maps. In some embodiments, operation 306 may include zero padding. Each feature map in feature map set 110 may be 512×512. In some embodiments, operation 306 may include batch normalization and ReLU processing. Operations 305 and 306 may be regarded as one set of convolutions.

Operation 307 is performed on feature map set 110 to output a set of output images. Operation 307 may be an output convolution. In some embodiments, operation 307 may include zero padding. Operation 307 may be a convolution with multiple kernel maps; in particular, a convolution with at least two kernel maps. In some embodiments, operation 307 may be a convolution with 6 kernel maps to generate output images 201, 203, 205, 207, 209, and 211. Output image 201 may indicate gray matter; output image 203 may indicate white matter; output image 205 may indicate white matter hyperintensities; output image 207 may indicate cerebrospinal fluid; output image 209 may indicate the scalp and/or skull; and output image 211 may indicate air.

In some embodiments, operation 307 may be a convolution with 2 kernel maps to generate two output images, one of which may indicate white matter and the other white matter hyperintensities.

In some embodiments, operation 307 may be a convolution with 3 kernel maps to generate three output images, which may respectively indicate gray matter, white matter, and white matter hyperintensities.

After the input image 101 is processed by the image processing procedure 300, two or more output images can be output, producing a multi-class semantic segmentation of the input image 101. Single-class semantic segmentation, by contrast, outputs only one image after processing, and that output image may indicate white matter hyperintensities. Compared with single-class semantic segmentation, multi-class semantic segmentation achieves better discrimination of tissue types and reduces false-positive lesion determinations.

FIGS. 3A, 3B, and 4 of the present disclosure describe exemplary embodiments of the image processing procedure 300, but these exemplary embodiments are not intended to limit the present invention. Those skilled in the art will understand that the image size, feature map size, number of feature maps, number of kernel maps, number of convolutions, number of downsampling operations, number of upsampling operations, number of skip connections, and number of pooling-index transfers may vary without departing from the true spirit and scope of the invention disclosed herein.

FIGS. 5A to 5E show 2D brain MRI images, specifically brain T2-FLAIR images, according to some embodiments of the present disclosure. Each of FIGS. 5A to 5E may serve as the input image 101 shown in FIG. 1, FIG. 3A, FIG. 3B, or FIG. 4 of the present disclosure.

FIGS. 6A to 6E show 2D brain MRI images, specifically brain T2-FLAIR images, according to some embodiments of the present disclosure. Each of FIGS. 6A to 6E includes regions circled as white matter hyperintensities.

The green circled portions in each of FIGS. 6A to 6E indicate white matter hyperintensity regions circled only by a neurologist. The red circled portions indicate white matter hyperintensity regions circled only by the image processing procedure 300 (as shown in FIG. 1, FIG. 3A, FIG. 3B, or FIG. 4). The yellow circled portions indicate white matter hyperintensity regions circled by both the neurologist and the image processing procedure 300.

The green circled portions in each of FIGS. 6A to 6E may indicate false-negative results of the image processing procedure 300; the red circled portions may indicate false-positive results; and the yellow circled portions may indicate true-positive results.

Referring to FIGS. 6A to 6E, most circled portions are yellow, meaning that the white matter hyperintensity regions circled by the image processing procedure 300 largely overlap those circled by the neurologist. In other words, the ability of the image processing procedure 300 to delineate white matter hyperintensity regions is approximately the same as that of the neurologist.

Referring to FIGS. 6A to 6E, green circled portions are sparse, meaning that false-negative results are rare, and red circled portions are sparse, meaning that false-positive results are rare.
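The overlap reading of FIGS. 6A to 6E (yellow = true positive, red = false positive, green = false negative) can be quantified with standard segmentation metrics. A minimal sketch with made-up binary masks, not data from the disclosure:

```python
import numpy as np

# Hypothetical binary masks: 1 = pixel circled as white matter hyperintensity.
expert = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 0, 0]])
model  = np.array([[0, 1, 1, 1],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0]])

tp = int(np.sum((expert == 1) & (model == 1)))  # "yellow": circled by both
fp = int(np.sum((expert == 0) & (model == 1)))  # "red": model only
fn = int(np.sum((expert == 1) & (model == 0)))  # "green": expert only

precision = tp / (tp + fp)        # fraction of model pixels that are correct
recall = tp / (tp + fn)           # fraction of expert pixels recovered
dice = 2 * tp / (2 * tp + fp + fn)  # overall overlap score
```

Sparse red and green regions correspond to high precision and high recall, i.e., a Dice score close to 1.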

FIGS. 7A and 7B show statistical results of a single-class-output U-Net according to the present disclosure. Single-class output may refer to a single-class semantic segmentation of an image. The single-class-output U-Net may be the image processing procedure of FIG. 3B in which operation 501 is not performed and only one kernel map is used in operation 307 (the output convolution), so that only one output image is produced, the output image indicating white matter hyperintensity regions. The single-class U-Net may also be the image processing procedure of FIG. 4 in which operations 501, 502, and 503 are not performed and only one kernel map is used in operation 307 (the output convolution), so that only one output image is produced, the output image indicating white matter hyperintensity regions. More generally, the single-class U-Net may be the image processing procedure of FIG. 3B or FIG. 4 in which no pooling-index transfer is performed and only one kernel map is used in the output convolution, so that only one output image is produced, the output image indicating white matter hyperintensity regions.

FIG. 7A is a regression analysis plot of the actual white matter lesion area (i.e., the actual white matter hyperintensity region) against the white matter lesion area predicted by the single-class-output U-Net (i.e., the predicted white matter hyperintensity region). In FIG. 7A, the X axis is the actual white matter lesion area and the Y axis is the white matter lesion area predicted by the single-class-output U-Net, both in cm².

According to the regression analysis plot of FIG. 7A, the r value is 0.996 (r² ≈ 0.992); that is, about 99.2% of the variation in the Y-axis values (the predicted white matter lesion area) can be explained by the variation in the X-axis values (the actual white matter lesion area). The p value approaches 0, meaning that the relationship between the predicted and actual white matter lesion areas is statistically significant. The regression line equation is y = 1.074x + 0.036.
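The quantities reported for FIG. 7A (the Pearson r value and the regression line) can be computed as follows; the data here are made up for illustration and are not the patent's measurements:

```python
import numpy as np

# Hypothetical actual vs. predicted lesion areas (cm^2).
actual = np.array([0.5, 1.2, 3.4, 7.8, 12.0, 20.5])
predicted = 1.07 * actual + 0.04 + np.array([0.1, -0.05, 0.08, -0.1, 0.05, -0.02])

r = np.corrcoef(actual, predicted)[0, 1]             # Pearson correlation coefficient
slope, intercept = np.polyfit(actual, predicted, 1)  # regression line y = slope*x + intercept

# r^2 (not r itself) is the fraction of variance in y explained by x.
explained = r ** 2
```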

FIG. 7B is a Bland-Altman plot. In FIG. 7B, the X axis is the mean of the actual white matter lesion area and the area predicted by the single-class-output U-Net, and the Y axis is the actual area minus the predicted area, both in cm².

As shown in FIG. 7B, the bias between the actual white matter lesion area and the area predicted by the single-class-output U-Net is −0.206. The +1.96 standard deviation (+1.96 SD) of the distribution of the actual area minus the predicted area is 0.624, and the −1.96 SD is −1.036; that is, the 95% limits of agreement of the actual area minus the predicted area are 0.624 and −1.036.
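The Bland-Altman quantities reported above (the bias and the 95% limits of agreement) can be computed as follows; the paired areas are made up for illustration and are not the patent's measurements:

```python
import numpy as np

# Hypothetical paired measurements (cm^2).
actual = np.array([1.0, 2.5, 4.0, 6.5, 9.0])
predicted = np.array([1.2, 2.4, 4.5, 6.9, 9.2])

mean_pair = (actual + predicted) / 2  # Bland-Altman X axis
diff = actual - predicted             # Bland-Altman Y axis

bias = diff.mean()                    # systematic over/under-estimation
sd = diff.std(ddof=1)                 # sample standard deviation of differences
loa_upper = bias + 1.96 * sd          # 95% limits of agreement
loa_lower = bias - 1.96 * sd
```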

FIGS. 8A and 8B show statistical results of a single-class-output U-SegNet according to the present disclosure. Single-class output may refer to a single-class semantic segmentation of an image. The single-class-output U-SegNet may be the image processing procedure of FIG. 3B in which only one kernel map is used in operation 307 (the output convolution), so that only one output image is produced, the output image indicating white matter hyperintensity regions. The single-class U-SegNet may also be the image processing procedure of FIG. 4 in which only one kernel map is used in operation 307 (the output convolution), so that only one output image is produced, the output image indicating white matter hyperintensity regions. More generally, the single-class U-SegNet may be the image processing procedure of FIG. 3B or FIG. 4 in which only one kernel map is used in the output convolution, so that only one output image is produced, the output image indicating white matter hyperintensity regions.

FIG. 8A is a regression analysis plot of the actual white matter lesion area (i.e., the actual white matter hyperintensity region) against the white matter lesion area predicted by the single-class-output U-SegNet (i.e., the predicted white matter hyperintensity region). In FIG. 8A, the X axis is the actual white matter lesion area and the Y axis is the white matter lesion area predicted by the single-class-output U-SegNet, both in cm².

According to the regression analysis plot of FIG. 8A, the r value is 0.995 (r² ≈ 0.990); that is, about 99.0% of the variation in the Y-axis values (the predicted white matter lesion area) can be explained by the variation in the X-axis values (the actual white matter lesion area). The p value approaches 0, meaning that the relationship between the predicted and actual white matter lesion areas is statistically significant. The regression line equation is y = 1.077x + 0.037.

圖8B是布蘭德-奧特曼圖(Bland-Altman plot)。圖8B中X軸為實際腦白質病灶面積與單一類別輸出U-SegNet所預測腦白質病灶面積的平均值,Y軸為實際腦白質病灶面積減去單一類別輸出U-SegNet所預測腦白質病灶面積,其等之單位為cm²。FIG. 8B is a Bland-Altman plot. In FIG. 8B, the X-axis is the mean of the actual white matter lesion area and the white matter lesion area predicted by the single-class output U-SegNet, and the Y-axis is the actual white matter lesion area minus the white matter lesion area predicted by the single-class output U-SegNet, both in units of cm².

根據圖8B所示,實際腦白質病灶面積與單一類別輸出U-SegNet所預測腦白質病灶面積之間的偏差值(Bias)為-0.213。根據圖8B所示,實際腦白質病灶面積減去單一類別輸出U-SegNet所預測腦白質病灶面積之分佈的正1.96標準差(+1.96SD)為0.685,負1.96標準差(-1.96SD)為-1.111。實際腦白質病灶面積減去單一類別輸出U-SegNet所預測腦白質病灶面積的95%一致性界限(95% limits of agreement)為0.685及-1.111。According to FIG. 8B, the bias between the actual white matter lesion area and the white matter lesion area predicted by the single-class output U-SegNet is -0.213. According to FIG. 8B, the positive 1.96 standard deviation (+1.96SD) of the distribution of the actual white matter lesion area minus the white matter lesion area predicted by the single-class output U-SegNet is 0.685, and the negative 1.96 standard deviation (-1.96SD) is -1.111. The 95% limits of agreement of the actual white matter lesion area minus the white matter lesion area predicted by the single-class output U-SegNet are 0.685 and -1.111.
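The bias and the ±1.96 SD limits of agreement in a Bland-Altman analysis follow directly from the per-case differences. A minimal sketch follows, using hypothetical areas rather than the study data:

```python
import statistics

def bland_altman(actual, predicted):
    """Return (bias, lower_loa, upper_loa) for a Bland-Altman analysis."""
    diffs = [a - p for a, p in zip(actual, predicted)]
    bias = statistics.mean(diffs)
    sd = statistics.pstdev(diffs)  # population SD; stdev() would be the sample variant
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical actual vs. predicted lesion areas (cm^2).
actual = [0.2, 1.0, 2.5, 4.0, 6.3]
predicted = [0.25, 1.1, 2.7, 4.4, 6.8]
bias, lower, upper = bland_altman(actual, predicted)
print(round(bias, 3), round(lower, 3), round(upper, 3))
```

A bias near 0 with narrow limits of agreement indicates that the predicted areas track the manual measurements closely, which is the reading applied to the reported -0.213 bias above.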

圖9A及圖9B展示根據本揭露之多類別輸出U-Net的統計結果。多類別輸出可指影像之多類別語義分割。多類別輸出U-Net可為圖3B中不執行操作501,且在操作307(輸出卷積)使用多個核心圖之影像處理程序,故可輸出多個輸出影像,且該多個輸出影像之每一者可分別指示腦灰質、腦白質、腦白質高信號區域、腦脊髓液、頭皮及/或頭骨以及空氣(如輸出影像201、203、205、207、209及211)。多類別U-Net可為圖4中不執行操作501、操作502、操作503,且在操作307(輸出卷積)使用多個核心圖之影像處理程序,故可輸出多個輸出影像,且該多個輸出影像之每一者可分別指示腦灰質、腦白質、腦白質高信號區域、腦脊髓液、頭皮及/或頭骨以及空氣(如輸出影像201、203、205、207、209及211)。多類別U-Net可為圖3B或圖4中不執行池化索引傳遞,且在輸出卷積使用多個核心圖之影像處理程序,故可輸出多個輸出影像,且該多個輸出影像之每一者可分別指示腦灰質、腦白質、腦白質高信號區域、腦脊髓液、頭皮及/或頭骨以及空氣(如輸出影像201、203、205、207、209及211)。FIGS. 9A and 9B show statistical results of the multi-class output U-Net according to the present disclosure. Multi-class output may refer to multi-class semantic segmentation of images. The multi-class output U-Net may be the image processing procedure of FIG. 3B that does not perform operation 501 and uses multiple kernel maps in operation 307 (output convolution), so multiple output images can be output, and each of the multiple output images may respectively indicate gray matter, white matter, white matter hyperintense regions, cerebrospinal fluid, scalp and/or skull, and air (e.g., output images 201, 203, 205, 207, 209, and 211). The multi-class U-Net may be the image processing procedure of FIG. 4 that does not perform operations 501, 502, and 503 and uses multiple kernel maps in operation 307 (output convolution), so multiple output images can be output, and each of the multiple output images may respectively indicate gray matter, white matter, white matter hyperintense regions, cerebrospinal fluid, scalp and/or skull, and air (e.g., output images 201, 203, 205, 207, 209, and 211). The multi-class U-Net may be the image processing procedure of FIG. 3B or FIG. 4 that does not perform pooling index transfer and uses multiple kernel maps in the output convolution, so multiple output images can be output, and each of the multiple output images may respectively indicate gray matter, white matter, white matter hyperintense regions, cerebrospinal fluid, scalp and/or skull, and air (e.g., output images 201, 203, 205, 207, 209, and 211).

圖9A是實際腦白質病灶面積(或實際腦白質高信號區域)與多類別輸出U-Net所預測腦白質病灶面積(或預測腦白質高信號區域)之回歸分析圖。圖9A中X軸為實際腦白質病灶面積,Y軸為多類別輸出U-Net所預測腦白質病灶面積,其等之單位為cm²。FIG. 9A is a regression analysis diagram of the actual white matter lesion area (or actual white matter hyperintensity area) and the white matter lesion area (or white matter hyperintensity area) predicted by the multi-class output U-Net. In FIG. 9A, the X-axis is the actual white matter lesion area and the Y-axis is the white matter lesion area predicted by the multi-class output U-Net, both in units of cm².

根據圖9A所示回歸分析圖,r值為0.995(決定係數r²約為0.990),意即Y軸數值(即預測腦白質病灶面積)的變異中約有99.0%可以被X軸數值(即實際腦白質病灶面積)的變異解釋。根據圖9A所示回歸分析圖,p值趨近於0,意即Y軸數值(即預測腦白質病灶面積)與X軸數值(即實際腦白質病灶面積)之關係為顯著。根據圖9A所示回歸分析圖,回歸線方程式為y=1.046x+0.045。According to the regression analysis graph shown in FIG. 9A, the r value is 0.995 (coefficient of determination r² ≈ 0.990), meaning that about 99.0% of the variation in the Y-axis values (i.e., the predicted white matter lesion area) can be explained by the variation in the X-axis values (i.e., the actual white matter lesion area). According to the regression analysis graph shown in FIG. 9A, the p value approaches 0, meaning that the relationship between the Y-axis values and the X-axis values is significant. According to the regression analysis graph shown in FIG. 9A, the regression line equation is y=1.046x+0.045.

圖9B是布蘭德-奧特曼圖(Bland-Altman plot)。圖9B中X軸為實際腦白質病灶面積與多類別輸出U-Net所預測腦白質病灶面積的平均值,Y軸為實際腦白質病灶面積減去多類別輸出U-Net所預測腦白質病灶面積,其等之單位為cm²。FIG. 9B is a Bland-Altman plot. In FIG. 9B, the X-axis is the mean of the actual white matter lesion area and the white matter lesion area predicted by the multi-class output U-Net, and the Y-axis is the actual white matter lesion area minus the white matter lesion area predicted by the multi-class output U-Net, both in units of cm².

根據圖9B所示,實際腦白質病灶面積與多類別輸出U-Net所預測腦白質病灶面積之間的偏差值(Bias)為-0.150。根據圖9B所示,實際腦白質病灶面積減去多類別輸出U-Net所預測腦白質病灶面積之分佈的正1.96標準差(+1.96SD)為0.600,負1.96標準差(-1.96SD)為-0.901。實際腦白質病灶面積減去多類別輸出U-Net所預測腦白質病灶面積的95%一致性界限(95% limits of agreement)為0.600及-0.901。According to FIG. 9B, the bias between the actual white matter lesion area and the white matter lesion area predicted by the multi-class output U-Net is -0.150. According to FIG. 9B, the positive 1.96 standard deviation (+1.96SD) of the distribution of the actual white matter lesion area minus the white matter lesion area predicted by the multi-class output U-Net is 0.600, and the negative 1.96 standard deviation (-1.96SD) is -0.901. The 95% limits of agreement of the actual white matter lesion area minus the white matter lesion area predicted by the multi-class output U-Net are 0.600 and -0.901.

圖10A及圖10B展示根據本揭露之多類別輸出U-SegNet的統計結果。多類別輸出可指影像之多類別語義分割。多類別輸出U-SegNet可為圖3B之影像處理程序,故可輸出多個輸出影像,且該多個輸出影像之每一者可分別指示腦灰質、腦白質、腦白質高信號區域、腦脊髓液、頭皮及/或頭骨以及空氣(如輸出影像201、203、205、207、209及211)。多類別U-SegNet可為圖4之影像處理程序,故可輸出多個輸出影像,且該多個輸出影像之每一者可分別指示腦灰質、腦白質、腦白質高信號區域、腦脊髓液、頭皮及/或頭骨以及空氣(如輸出影像201、203、205、207、209及211)。FIGS. 10A and 10B show statistical results of the multi-class output U-SegNet according to the present disclosure. Multi-class output may refer to multi-class semantic segmentation of images. The multi-class output U-SegNet may be the image processing procedure of FIG. 3B, so it can output multiple output images, and each of the multiple output images may respectively indicate gray matter, white matter, white matter hyperintense regions, cerebrospinal fluid, scalp and/or skull, and air (e.g., output images 201, 203, 205, 207, 209, and 211). The multi-class U-SegNet may be the image processing procedure of FIG. 4, so it can output multiple output images, and each of the multiple output images may respectively indicate gray matter, white matter, white matter hyperintense regions, cerebrospinal fluid, scalp and/or skull, and air (e.g., output images 201, 203, 205, 207, 209, and 211).

圖10A是實際腦白質病灶面積(或實際腦白質高信號區域)與多類別輸出U-SegNet所預測腦白質病灶面積(或預測腦白質高信號區域)之回歸分析圖。圖10A中X軸為實際腦白質病灶面積,Y軸為多類別輸出U-SegNet所預測腦白質病灶面積,其等之單位為cm²。FIG. 10A is a regression analysis diagram of the actual white matter lesion area (or actual white matter hyperintensity area) and the white matter lesion area (or white matter hyperintensity area) predicted by the multi-class output U-SegNet. In FIG. 10A, the X-axis is the actual white matter lesion area and the Y-axis is the white matter lesion area predicted by the multi-class output U-SegNet, both in units of cm².

根據圖10A所示回歸分析圖,r值為0.996(決定係數r²約為0.992),意即Y軸數值(即預測腦白質病灶面積)的變異中約有99.2%可以被X軸數值(即實際腦白質病灶面積)的變異解釋。根據圖10A所示回歸分析圖,p值趨近於0,意即Y軸數值(即預測腦白質病灶面積)與X軸數值(即實際腦白質病灶面積)之關係為顯著。根據圖10A所示回歸分析圖,回歸線方程式為y=1.000x+0.063,趨近於y=x。According to the regression analysis graph shown in FIG. 10A, the r value is 0.996 (coefficient of determination r² ≈ 0.992), meaning that about 99.2% of the variation in the Y-axis values (i.e., the predicted white matter lesion area) can be explained by the variation in the X-axis values (i.e., the actual white matter lesion area). According to the regression analysis graph shown in FIG. 10A, the p value approaches 0, meaning that the relationship between the Y-axis values and the X-axis values is significant. According to the regression analysis graph shown in FIG. 10A, the regression line equation is y=1.000x+0.063, which is close to y=x.

圖10B是布蘭德-奧特曼圖(Bland-Altman plot)。圖10B中X軸為實際腦白質病灶面積與多類別輸出U-SegNet所預測腦白質病灶面積的平均值,Y軸為實際腦白質病灶面積減去多類別輸出U-SegNet所預測腦白質病灶面積,其等之單位為cm²。FIG. 10B is a Bland-Altman plot. In FIG. 10B, the X-axis is the mean of the actual white matter lesion area and the white matter lesion area predicted by the multi-class output U-SegNet, and the Y-axis is the actual white matter lesion area minus the white matter lesion area predicted by the multi-class output U-SegNet, both in units of cm².

根據圖10B所示,實際腦白質病灶面積與多類別輸出U-SegNet所預測腦白質病灶面積之間的偏差值(Bias)為-0.206。根據圖10B所示,實際腦白質病灶面積減去多類別輸出U-SegNet所預測腦白質病灶面積之分佈的正1.96標準差(+1.96SD)為0.624,負1.96標準差(-1.96SD)為-1.036。實際腦白質病灶面積減去多類別輸出U-SegNet所預測腦白質病灶面積的95%一致性界限(95% limits of agreement)為0.624及-1.036。According to FIG. 10B, the bias between the actual white matter lesion area and the white matter lesion area predicted by the multi-class output U-SegNet is -0.206. According to FIG. 10B, the positive 1.96 standard deviation (+1.96SD) of the distribution of the actual white matter lesion area minus the white matter lesion area predicted by the multi-class output U-SegNet is 0.624, and the negative 1.96 standard deviation (-1.96SD) is -1.036. The 95% limits of agreement of the actual white matter lesion area minus the white matter lesion area predicted by the multi-class output U-SegNet are 0.624 and -1.036.

圖11A至圖11E展示根據本揭露之一些實施例的腦部MRI 2D影像。圖11A至圖11E展示根據本揭露之一些實施例的腦部T2-FLAIR影像。圖11A可為用以輸入至影像處理程序之輸入影像。圖11A可為本揭露圖1、圖3A、圖3B或圖4中輸入影像101。圖11B至圖11E之每一者包含被圈選為腦白質高信號之區域。圖11B至圖11E之每一者可指示基於圖11A之腦白質高信號識別結果。11A-11E show brain MRI 2D images according to some embodiments of the present disclosure. 11A-11E show brain T2-FLAIR images according to some embodiments of the present disclosure. FIG. 11A may be an input image for input to an image processing program. FIG. 11A can be the input image 101 in FIG. 1 , FIG. 3A , FIG. 3B or FIG. 4 of the present disclosure. Each of FIGS. 11B-11E includes a region circled as white matter hyperintensity. Each of FIGS. 11B-11E may indicate white matter hyperintensity identification results based on FIG. 11A .

圖11B至圖11E之每一者中之綠色圈選部分指示僅被腦科醫師圈選之腦白質高信號區域。圖11B至圖11E之每一者中之紅色圈選部分指示僅被影像處理程序圈選之腦白質高信號區域。圖11B至圖11E之每一者中之黃色圈選部分指示被腦科醫師及影像處理程序兩者圈選之腦白質高信號區域。The green circled portions in each of FIGS. 11B-11E indicate white matter hyperintensity regions circled only by the neurologist. The red circled portions in each of FIGS. 11B-11E indicate white matter hyperintense regions only circled by the image processing program. The yellow circled portions in each of FIGS. 11B-11E indicate areas of white matter hyperintensity circled by both the neurologist and the image processing program.

圖11B至圖11E之每一者中之綠色圈選部分可指示影像處理程序之偽陰型結果。圖11B至圖11E之每一者中之紅色圈選部分可指示影像處理程序之偽陽型結果。圖11B至圖11E之每一者中之黃色圈選部分可指示影像處理程序之真陽型結果。The green circled portion in each of FIGS. 11B-11E may indicate a false negative result of the image processing procedure. The red circled portion in each of FIGS. 11B-11E may indicate a false positive result of the image processing procedure. The yellow circled portion in each of FIGS. 11B-11E may indicate a true positive result of the image processing procedure.

圖11B可包含單一類別U-Net影像處理程序之識別結果。圖11B中之箭頭指示偽陽型結果(即紅色圈選部分)。圖11B中亦包含偽陰型結果(即綠色圈選部分)。FIG. 11B may include the recognition result of the single-class U-Net image processing procedure. The arrow in FIG. 11B indicates a false positive result (i.e., the red circled portion). FIG. 11B also includes a false negative result (i.e., the green circled portion).

圖11C可包含單一類別U-SegNet影像處理程序之識別結果。圖11C中之兩個箭頭指示偽陽型結果(即紅色圈選部分)。FIG. 11C may include the recognition result of the single-class U-SegNet image processing procedure. The two arrows in FIG. 11C indicate false positive results (i.e., the red circled portions).

圖11D可包含多類別U-Net影像處理程序之識別結果。圖11D中之箭頭指示偽陽型結果(即紅色圈選部分)。相較於圖11B,圖11D不包含偽陰型結果(即綠色圈選部分)。FIG. 11D may include the recognition result of the multi-class U-Net image processing procedure. The arrow in FIG. 11D indicates a false positive result (i.e., the red circled portion). Compared with FIG. 11B, FIG. 11D does not include false negative results (i.e., green circled portions).

圖11E可包含多類別U-SegNet影像處理程序(即圖3B或圖4所示之影像處理程序)之識別結果。圖11E中不包含紅色圈選部分或綠色圈選部分。圖11E中不包含偽陽型結果或偽陰型結果。FIG. 11E may include the recognition result of the multi-class U-SegNet image processing procedure (i.e., the image processing procedure shown in FIG. 3B or FIG. 4). FIG. 11E does not include red or green circled portions. FIG. 11E does not include false positive or false negative results.

相較於圖11B所示單一類別U-Net影像處理程序之識別結果,圖11E所示多類別U-SegNet影像處理程序(即圖3B或圖4所示之影像處理程序)之識別結果較佳。相較於圖11C所示單一類別U-SegNet影像處理程序之識別結果,圖11E所示多類別U-SegNet影像處理程序(即圖3B或圖4所示之影像處理程序)之識別結果較佳。相較於圖11D所示多類別U-Net影像處理程序之識別結果,圖11E所示多類別U-SegNet影像處理程序(即圖3B或圖4所示之影像處理程序)之識別結果較佳。Compared with the recognition result of the single-class U-Net image processing procedure shown in FIG. 11B, the recognition result of the multi-class U-SegNet image processing procedure shown in FIG. 11E (i.e., the image processing procedure shown in FIG. 3B or FIG. 4) is better. Compared with the recognition result of the single-class U-SegNet image processing procedure shown in FIG. 11C, the recognition result of the multi-class U-SegNet image processing procedure shown in FIG. 11E is better. Compared with the recognition result of the multi-class U-Net image processing procedure shown in FIG. 11D, the recognition result of the multi-class U-SegNet image processing procedure shown in FIG. 11E is better.

圖12A至圖12E展示根據本揭露之一些實施例的腦部MRI 2D影像。圖12A至圖12E展示根據本揭露之一些實施例的腦部T2-FLAIR影像。圖12A可為用以輸入至影像處理程序之輸入影像。圖12A可為本揭露圖1、圖3B或圖4中所示的輸入影像101。圖12B至圖12E之每一者包含被圈選為腦白質高信號之區域。圖12B至圖12E之每一者可指示基於圖12A之腦白質高信號識別結果。FIGS. 12A-12E show brain MRI 2D images according to some embodiments of the present disclosure. FIGS. 12A-12E show brain T2-FLAIR images according to some embodiments of the present disclosure. FIG. 12A may be an input image for input to an image processing procedure. FIG. 12A may be the input image 101 shown in FIG. 1, FIG. 3B or FIG. 4 of the present disclosure. Each of FIGS. 12B-12E includes a region circled as white matter hyperintensity. Each of FIGS. 12B-12E may indicate a white matter hyperintensity identification result based on FIG. 12A.

圖12B至圖12E之每一者中之綠色圈選部分指示僅被腦科醫師圈選之腦白質高信號區域。圖12B至圖12E之每一者中之紅色圈選部分指示僅被影像處理程序圈選之腦白質高信號區域。圖12B至圖12E之每一者中之黃色圈選部分指示被腦科醫師及影像處理程序兩者圈選之腦白質高信號區域。The green circled portions in each of Figures 12B-12E indicate white matter hyperintensity regions circled only by the neurologist. The red circled portions in each of Figures 12B-12E indicate white matter hyperintense regions only circled by the image processing program. The yellow circled portions in each of Figures 12B-12E indicate white matter hyperintense regions circled by both the neurologist and the image processing program.

圖12B至圖12E之每一者中之綠色圈選部分可指示影像處理程序之偽陰型結果。圖12B至圖12E之每一者中之紅色圈選部分可指示影像處理程序之偽陽型結果。圖12B至圖12E之每一者中之黃色圈選部分可指示影像處理程序之真陽型結果。The green circled portion in each of Figures 12B-12E may indicate a false negative result of the image processing procedure. The red circled portion in each of Figures 12B-12E may indicate a false positive result of the image processing procedure. The yellow circled portion in each of Figures 12B-12E may indicate a true positive result of the image processing procedure.

圖12B可包含單一類別U-Net影像處理程序之識別結果。圖12B中之三個箭頭指示偽陽型結果(即紅色圈選部分)。圖12B中亦包含偽陰型結果(即綠色圈選部分)。FIG. 12B may include the recognition result of the single-class U-Net image processing procedure. The three arrows in FIG. 12B indicate false positive results (i.e., the red circled portions). FIG. 12B also includes false negative results (i.e., the green circled portions).

圖12C可包含單一類別U-SegNet影像處理程序之識別結果。圖12C中之三個箭頭指示偽陽型結果(即紅色圈選部分)。圖12C中亦包含偽陰型結果(即綠色圈選部分)。FIG. 12C may include the recognition result of the single-class U-SegNet image processing procedure. The three arrows in FIG. 12C indicate false positive results (i.e., the red circled portions). FIG. 12C also includes false negative results (i.e., the green circled portions).

圖12D可包含多類別U-Net影像處理程序之識別結果。圖12D中之兩個箭頭指示偽陽型結果(即紅色圈選部分)。圖12D中亦包含偽陰型結果(即綠色圈選部分)。圖12D之紅色圈選部分少於圖12B之紅色圈選部分。圖12D之偽陽型結果少於圖12B之偽陽型結果。圖12D之綠色圈選部分少於圖12B之綠色圈選部分。圖12D之偽陰型結果少於圖12B之偽陰型結果。FIG. 12D may include the recognition result of the multi-class U-Net image processing procedure. The two arrows in FIG. 12D indicate false positive results (i.e., the red circled portions). FIG. 12D also includes false negative results (i.e., the green circled portions). The red circled portions in FIG. 12D are fewer than those in FIG. 12B. The false positive results in FIG. 12D are fewer than those in FIG. 12B. The green circled portions in FIG. 12D are fewer than those in FIG. 12B. The false negative results in FIG. 12D are fewer than those in FIG. 12B.

圖12E可包含多類別U-SegNet影像處理程序(即圖3B或圖4所示之影像處理程序)之識別結果。圖12E中不包含紅色圈選部分。圖12E中不包含偽陽型結果。圖12E之綠色圈選部分少於圖12C之綠色圈選部分。圖12E之偽陰型結果少於圖12C之偽陰型結果。FIG. 12E may include the recognition result of the multi-class U-SegNet image processing procedure (i.e., the image processing procedure shown in FIG. 3B or FIG. 4). FIG. 12E does not include red circled portions. FIG. 12E does not include false positive results. The green circled portions in FIG. 12E are fewer than those in FIG. 12C. The false negative results in FIG. 12E are fewer than those in FIG. 12C.

相較於圖12B所示單一類別U-Net影像處理程序之識別結果,圖12E所示多類別U-SegNet影像處理程序(即圖3B或圖4所示之影像處理程序)之識別結果較佳。相較於圖12C所示單一類別U-SegNet影像處理程序之識別結果,圖12E所示多類別U-SegNet影像處理程序(即圖3B或圖4所示之影像處理程序)之識別結果較佳。相較於圖12D所示多類別U-Net影像處理程序之識別結果,圖12E所示多類別U-SegNet影像處理程序(即圖3B或圖4所示之影像處理程序)之識別結果較佳。Compared with the recognition result of the single-class U-Net image processing procedure shown in FIG. 12B, the recognition result of the multi-class U-SegNet image processing procedure shown in FIG. 12E (i.e., the image processing procedure shown in FIG. 3B or FIG. 4) is better. Compared with the recognition result of the single-class U-SegNet image processing procedure shown in FIG. 12C, the recognition result of the multi-class U-SegNet image processing procedure shown in FIG. 12E is better. Compared with the recognition result of the multi-class U-Net image processing procedure shown in FIG. 12D, the recognition result of the multi-class U-SegNet image processing procedure shown in FIG. 12E is better.

基於真陽型結果(True Positive;TP)、真陰型結果(True Negative;TN)、偽陽型結果(False Positive;FP)及偽陰型結果(False Negative;FN),評價機器學習結果之量化指標包含以下數種:
準確率(Accuracy)= (TP+TN)/(TP+FP+FN+TN);
精確率(Precision)= TP/(TP+FP)(即預測為陽性之結果中有多少比例是預測正確的);
召回率(Recall)= TP/(TP+FN)(即實際為陽性之樣本中有多少比例是預測正確的);
F1分數 = 2/((1/Precision)+(1/Recall))(即精確率與召回率的調和平均數);
靈敏度(Sensitivity)= TP/(TP+FN)(與召回率相同);及
特異性(Specificity)= TN/(FP+TN)。
Based on true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), the quantitative indicators for evaluating the machine learning results include the following:
Accuracy = (TP+TN)/(TP+FP+FN+TN);
Precision = TP/(TP+FP) (i.e., the proportion of results predicted as positive that are correct);
Recall = TP/(TP+FN) (i.e., the proportion of actually positive samples that are predicted correctly);
F1 score = 2/((1/Precision)+(1/Recall)) (i.e., the harmonic mean of precision and recall);
Sensitivity = TP/(TP+FN) (same as recall); and
Specificity = TN/(FP+TN).
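These indicators can be computed from the four counts with a short routine. The counts below are hypothetical and only illustrate the formulas; they are not the counts behind the reported percentages.

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute the quantitative indicators listed above from a 2x2 confusion table."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # identical to sensitivity
    f1 = 2 / ((1 / precision) + (1 / recall))
    specificity = tn / (fp + tn)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "sensitivity": recall, "specificity": specificity}

# Hypothetical pixel counts for one segmentation result.
m = classification_metrics(tp=90, tn=9890, fp=10, fn=10)
print(round(m["f1"], 3), round(m["accuracy"], 3), round(m["specificity"], 4))
```

Because WMH pixels are rare relative to background, accuracy and specificity are close to 1 for every model, which is why the F1 score is the more discriminating indicator in the comparisons that follow.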

關於單一類別輸出U-Net(如圖7A、圖7B、圖11B、圖12B所使用之影像處理程序),F1分數為88.19%;靈敏度為90.61%;特異性為99.96%;準確率為99.93%。Regarding the single-class output U-Net (the image processing program used in Figure 7A, Figure 7B, Figure 11B, Figure 12B), the F1 score was 88.19%; the sensitivity was 90.61%; the specificity was 99.96%; and the accuracy was 99.93% .

關於單一類別輸出U-SegNet(如圖8A、圖8B、圖11C、圖12C所使用之影像處理程序),F1分數為87.75%;靈敏度為90.40%;特異性為99.95%;準確率為99.93%。Regarding the single-class output U-SegNet (the image processing program used in Figure 8A, Figure 8B, Figure 11C, Figure 12C), the F1 score was 87.75%; the sensitivity was 90.40%; the specificity was 99.95%; and the accuracy was 99.93% .

關於多類別輸出U-Net(如圖9A、圖9B、圖11D、圖12D所使用之影像處理程序),F1分數為88.73%;靈敏度為92.21%;特異性為99.95%;準確率為99.93%。Regarding the multi-class output U-Net (image processing program used in Figure 9A, Figure 9B, Figure 11D, Figure 12D), the F1 score was 88.73%; the sensitivity was 92.21%; the specificity was 99.95%; the accuracy was 99.93% .

關於多類別輸出U-SegNet(如圖10A、圖10B、圖11E、圖12E所使用之影像處理程序),F1分數為90.01%;靈敏度為94.04%;特異性為99.95%;準確率為99.94%。多類別輸出U-SegNet之F1分數高於單一類別輸出U-Net之F1分數、單一類別輸出U-SegNet之F1分數及多類別輸出U-Net之F1分數。Regarding the multi-class output U-SegNet (the image processing program used in Figure 10A, Figure 10B, Figure 11E, Figure 12E), the F1 score was 90.01%; the sensitivity was 94.04%; the specificity was 99.95%; the accuracy was 99.94% . The F1 score of the multi-class output U-SegNet is higher than the F1 score of the single-class output U-Net, the F1 score of the single-class output U-SegNet and the F1 score of the multi-class output U-Net.

圖13展示自圖2之分級程序800輸出之分級結果900的混淆矩陣(confusion matrix)。圖13展示自XGBoost輸出之分級結果900的混淆矩陣。XGBoost之輸入資料包含腦灰質體積231、腦白質體積233、腦白質高信號體積235、腦脊髓液體積237及影像參數239(如影像切片厚度)。在一些實施例中,腦灰質體積231、腦白質體積233、腦白質高信號體積235、腦脊髓液體積237是根據將一系列輸入影像101輸入至如本揭露圖3B或圖4之影像處理程序300(例如多類別U-SegNet影像處理程序)而得。在一些實施例中,影像參數239為一系列輸入影像101之影像切片厚度。FIG. 13 shows a confusion matrix of the grading result 900 output from the grading process 800 of FIG. 2. FIG. 13 shows the confusion matrix of the grading result 900 output from XGBoost. The input data of XGBoost includes gray matter volume 231, white matter volume 233, white matter hyperintensity volume 235, cerebrospinal fluid volume 237, and image parameter 239 (e.g., image slice thickness). In some embodiments, the gray matter volume 231, white matter volume 233, white matter hyperintensity volume 235, and cerebrospinal fluid volume 237 are obtained by inputting a series of input images 101 into the image processing procedure 300 of FIG. 3B or FIG. 4 of the present disclosure (e.g., the multi-class U-SegNet image processing procedure). In some embodiments, the image parameter 239 is the image slice thickness of the series of input images 101.

參考圖13,混淆矩陣之X軸可為XGBoost所預測之費澤克斯(Fazekas)等級,混淆矩陣之Y軸可為實際費澤克斯等級。在實際費澤克斯等級為1時,預測費澤克斯等級為1之機率為0.8294,預測費澤克斯等級為2之機率為0.1706,預測費澤克斯等級為3之機率為0。在預測費澤克斯等級為1時,實際費澤克斯等級為1之機率為0.8294,實際費澤克斯等級為2之機率為0.1412,實際費澤克斯等級為3之機率為0。Referring to FIG. 13, the X-axis of the confusion matrix may be the Fazekas grade predicted by XGBoost, and the Y-axis may be the actual Fazekas grade. When the actual Fazekas grade is 1, the probability that the predicted Fazekas grade is 1 is 0.8294, the probability that the predicted grade is 2 is 0.1706, and the probability that the predicted grade is 3 is 0. When the predicted Fazekas grade is 1, the probability that the actual Fazekas grade is 1 is 0.8294, the probability that the actual grade is 2 is 0.1412, and the probability that the actual grade is 3 is 0.

實際費澤克斯等級為2時,預測費澤克斯等級為2之機率為0.7406;實際費澤克斯等級為3時,預測費澤克斯等級為3之機率為0.8254。When the actual Fazekas grade is 2, the probability that the predicted Fazekas grade is 2 is 0.7406; when the actual Fazekas grade is 3, the probability that the predicted Fazekas grade is 3 is 0.8254.
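A row-normalized confusion matrix of the kind shown in FIG. 13 is obtained by dividing each row of raw counts by its row total. The counts below are invented for illustration (the disclosure does not report raw counts); row 1 is merely chosen so that it reproduces the quoted 0.8294/0.1706/0 pattern.

```python
def row_normalize(counts):
    """Convert raw confusion-matrix counts to per-actual-class probabilities."""
    result = []
    for row in counts:
        total = sum(row)
        result.append([round(c / total, 4) if total else 0.0 for c in row])
    return result

# Rows = actual Fazekas grade 1..3, columns = predicted grade 1..3.
# Hypothetical counts; row 1 (141/170, 29/170, 0) reproduces 0.8294/0.1706/0.
counts = [[141, 29, 0],
          [20, 100, 15],
          [0, 11, 52]]
print(row_normalize(counts)[0])  # [0.8294, 0.1706, 0.0]
```

Normalizing columns instead of rows would give the per-predicted-grade probabilities, i.e., the second reading of FIG. 13 described above.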

根據藉由如本揭露圖3B或圖4之影像處理程序300(例如多類別U-SegNet影像處理程序)而得之腦灰質體積231、腦白質體積233、腦白質高信號體積235及腦脊髓液體積237進行分級,費澤克斯等級預測正確之機率為高。When grading is performed according to the gray matter volume 231, white matter volume 233, white matter hyperintensity volume 235, and cerebrospinal fluid volume 237 obtained by the image processing procedure 300 (e.g., the multi-class U-SegNet image processing procedure) of FIG. 3B or FIG. 4 of the present disclosure, the probability of a correct Fazekas grade prediction is high.

圖14繪示根據本揭露之一些實施例的系統1400。系統1400包含用戶端1410、資料庫1420以及伺服器端1430。用戶端1410包含處理器1411及記憶體1412,記憶體1412可儲存指令,該等指令使處理器1411執行本揭露所載之程序或操作。資料庫1420包含處理器1421及記憶體1422,記憶體1422可儲存指令,該等指令使處理器1421執行本揭露所載之程序或操作。伺服器端1430包含處理器1431及記憶體1432,記憶體1432可儲存指令,該等指令使處理器1431執行本揭露所載之程序或操作。FIG. 14 illustrates a system 1400 according to some embodiments of the present disclosure. The system 1400 includes a client 1410, a database 1420, and a server 1430. The client 1410 includes a processor 1411 and a memory 1412; the memory 1412 may store instructions that cause the processor 1411 to perform the procedures or operations described in the present disclosure. The database 1420 includes a processor 1421 and a memory 1422; the memory 1422 may store instructions that cause the processor 1421 to perform the procedures or operations described in the present disclosure. The server 1430 includes a processor 1431 and a memory 1432; the memory 1432 may store instructions that cause the processor 1431 to perform the procedures or operations described in the present disclosure.

在一些實施例中,用戶端1410可自資料庫1420存取資料。例如,用戶端1410可自資料庫1420存取影像,以供用戶(例如專科醫師)使用。用戶端1410可自資料庫1420存取腦部MRI 2D影像或腦部T2-FLAIR影像,以供用戶使用。用戶端1410可對資料庫1420傳送請求,使資料庫1420將用戶端1410所選取之一或多個影像傳送至伺服器端1430進行影像辨識及/或分級。In some embodiments, the client 1410 can access data from the database 1420 . For example, client 1410 may access images from database 1420 for use by a user (eg, a specialist). The client 1410 can access the brain MRI 2D image or the brain T2-FLAIR image from the database 1420 for the user to use. The client 1410 may send a request to the database 1420 so that the database 1420 transmits one or more images selected by the client 1410 to the server 1430 for image recognition and/or classification.

在伺服器端1430收到資料庫1420所傳送之影像辨識請求及相關聯之一或多個影像後,伺服器端1430將該一或多個影像之每一者作為輸入影像101(如圖1、圖3B或圖4所示)而輸入至影像處理程序300(如圖1、圖3B或圖4所示)進行影像辨識。After the server 1430 receives the image recognition request and the associated one or more images sent from the database 1420, the server 1430 uses each of the one or more images as the input image 101 (as shown in FIG. 1 ). , FIG. 3B or FIG. 4 ) and input to the image processing program 300 (as shown in FIG. 1 , FIG. 3B or FIG. 4 ) for image recognition.

經過影像處理程序300處理後,對應於自資料庫1420收到的該一或多個影像,伺服器端1430產生一或多個輸出影像201、一或多個輸出影像203、一或多個輸出影像205、一或多個輸出影像207、一或多個輸出影像209、一或多個輸出影像211。After being processed by the image processing program 300, corresponding to the one or more images received from the database 1420, the server 1430 generates one or more output images 201, one or more output images 203, one or more output images Image 205 , one or more output images 207 , one or more output images 209 , one or more output images 211 .

在一些實施例中,伺服器端1430可根據資料庫1420所傳送之請求而進一步對資料庫1420所傳送之一或多個影像進行分級。伺服器端1430可根據影像處理程序300輸出之一或多個輸出影像201、一或多個輸出影像203、一或多個輸出影像205以及一或多個輸出影像207計算組織之體積,以產生體積231、體積233、體積235及體積237。伺服器端1430將體積231、體積233、體積235及體積237輸入至分級程序800。在一些實施例中,除體積231、體積233、體積235及體積237之外,伺服器端1430亦將影像參數239輸入至分級程序800。影像參數239可為自資料庫1420接收之一或多個影像的影像切片厚度。分級程序800根據輸入之資料輸出分級結果900。In some embodiments, the server 1430 may further grade the one or more images transmitted by the database 1420 according to the request transmitted by the database 1420. The server 1430 may calculate tissue volumes according to the one or more output images 201, one or more output images 203, one or more output images 205, and one or more output images 207 output by the image processing procedure 300 to generate the volume 231, volume 233, volume 235, and volume 237. The server 1430 inputs the volume 231, volume 233, volume 235, and volume 237 into the grading process 800. In some embodiments, in addition to the volume 231, volume 233, volume 235, and volume 237, the server 1430 also inputs the image parameter 239 into the grading process 800. The image parameter 239 may be the image slice thickness of the one or more images received from the database 1420. The grading process 800 outputs the grading result 900 according to the input data.
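In the simplest case, the tissue volumes 231-237 can be derived from the segmentation outputs by counting foreground pixels per slice and scaling by the pixel area and the slice thickness (image parameter 239). The sketch below uses placeholder geometry values that are not from the disclosure:

```python
def tissue_volume(masks, pixel_area_cm2, slice_thickness_cm):
    """Volume of one tissue class from a stack of binary segmentation masks."""
    voxels = sum(pixel for mask in masks for row in mask for pixel in row)
    return voxels * pixel_area_cm2 * slice_thickness_cm

# Two 3x3 slices of a hypothetical white-matter-hyperintensity mask.
masks = [
    [[0, 1, 0], [1, 1, 0], [0, 0, 0]],
    [[0, 1, 1], [0, 1, 0], [0, 0, 0]],
]
volume = tissue_volume(masks, pixel_area_cm2=0.01, slice_thickness_cm=0.5)
print(volume)  # 6 voxels * 0.01 cm^2 * 0.5 cm = 0.03 cm^3
```

Running the same计算 over the output images 201, 203, 205, and 207 would yield the gray matter, white matter, WMH, and cerebrospinal fluid volumes, respectively; this is why the slice thickness is also fed to the grading process.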

伺服器端1430可根據一或多個輸出影像201、一或多個輸出影像203、一或多個輸出影像205、一或多個輸出影像207、一或多個輸出影像209、一或多個輸出影像211產生病灶邊緣座標及病灶體積。伺服器端1430可根據分級結果900產生分級報告。伺服器端1430可將病灶邊緣座標、病灶體積及分級報告傳送至資料庫1420或用戶端1410。伺服器端可將病灶邊緣座標、病灶體積及分級報告藉由資料庫1420傳送至用戶端1410。The server side 1430 can be based on one or more output images 201, one or more output images 203, one or more output images 205, one or more output images 207, one or more output images 209, one or more output images The output image 211 generates lesion edge coordinates and lesion volume. The server 1430 can generate a rating report according to the rating result 900 . The server 1430 can transmit the lesion edge coordinates, the lesion volume and the grading report to the database 1420 or the client 1410 . The server side can transmit the lesion edge coordinates, lesion volume and grading report to the client terminal 1410 through the database 1420 .
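One way to obtain lesion edge coordinates from a binary segmentation mask is to keep every foreground pixel that has at least one background 4-neighbour. This is an illustrative sketch, not a method specified by the disclosure:

```python
def boundary_coordinates(mask):
    """Return (row, col) coordinates of lesion pixels on the mask boundary."""
    h, w = len(mask), len(mask[0])
    edges = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue  # background pixel
            neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            # A lesion pixel is on the boundary if any 4-neighbour is
            # outside the image or belongs to the background.
            if any(ny < 0 or ny >= h or nx < 0 or nx >= w or not mask[ny][nx]
                   for ny, nx in neighbours):
                edges.append((y, x))
    return edges

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(boundary_coordinates(mask))  # all four lesion pixels lie on the boundary
```

The resulting coordinate list is the kind of data the server could transmit to the database or client so that the lesion can be circled on the corresponding image.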

資料庫1420可根據病灶邊緣座標而在該一或多個相對應影像上圈選病灶,並將包含圈選病灶之影像傳送至用戶端1410以供用戶使用。用戶端1410可根據病灶邊緣座標而在該一或多個相對應影像上圈選病灶,以供用戶使用。病灶體積及分級報告可傳送至用戶端1410以供用戶使用。The database 1420 can circle the lesions on the one or more corresponding images according to the lesion edge coordinates, and transmit the images including the circled lesions to the user terminal 1410 for the user to use. The user terminal 1410 can circle the lesion on the one or more corresponding images according to the lesion edge coordinates for the user to use. The lesion volume and grading report may be transmitted to the client 1410 for use by the user.

雖然已參考本揭露之具體實施例描述及說明本發明,但此等描述及說明並不限制本發明。熟習此項技術者應理解,在不脫離如由隨附申請專利範圍界定的本發明之真實精神及範圍的情況下,可作出各種改變且可取代等效物。說明可不必按比例繪製。歸因於製造製程及公差,本申請中之藝術再現與實際發明中之藝術再現之間可存在區別。可存在並未特定說明的本發明之其他實施例。應將本說明書及圖式視為說明性而非限制性的。可作出修改,以使特定情況、材料、物質之組成、方法或製程適應於本發明之目標、精神及範圍。所有此類修改意欲在此處附加之申請專利範圍之範圍內。雖然已參考按特定次序執行之特定操作描述本文中所揭示的方法,但將理解,在不脫離本發明之教示的情況下,可組合、再細分或重新定序此等操作以形成等效方法。因此,除非本文中另外特定地指示,否則操作之次序及分組並非本發明之限制。此外,在上述實施例及其類似者中詳述之效果僅為實例。因此,本申請可進一步具有其他效果。While the invention has been described and illustrated with reference to specific embodiments of the present disclosure, such description and illustration do not limit the invention. It will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the true spirit and scope of the invention as defined by the appended claims. The illustrations may not necessarily be drawn to scale. Due to manufacturing processes and tolerances, differences may exist between the artistic representation in this application and the artistic representation in the actual invention. There may be other embodiments of the invention not specifically described. The specification and drawings are to be regarded in an illustrative rather than a restrictive sense. Modifications may be made to adapt a particular situation, material, composition of matter, method or process to the object, spirit and scope of the invention. All such modifications are intended to be within the scope of the claims appended hereto. Although the methods disclosed herein have been described with reference to specific operations performed in a specific order, it is to be understood that such operations may be combined, subdivided, or reordered to form equivalent methods without departing from the teachings of the present invention . Accordingly, unless specifically indicated otherwise herein, the order and grouping of operations are not limitations of the invention. 
Furthermore, the effects detailed in the above-described embodiments and the like are merely examples. Therefore, the present application can further have other effects.

In addition, the logic flows depicted in the figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other steps may be provided, steps may be eliminated from the described flows, and other components may be added to or removed from the described systems. Accordingly, other embodiments are within the scope of the appended claims.

Reference numerals:
101: input image
102–124: feature map sets
201, 203, 205, 207, 209, 211: output images
231, 233, 235, 237: volumes
239: image parameters
300: image processing procedure
301–315, 401–403, 501–503, 601–603, 701–703: operations
800: grading procedure
900: grading results

FIG. 1 illustrates a flowchart of image recognition according to some embodiments of the present disclosure.

FIG. 2 illustrates a grading flowchart according to some embodiments of the present disclosure.

FIG. 3A illustrates an image processing procedure according to some embodiments of the present disclosure.

FIG. 3B illustrates an image processing procedure according to some embodiments of the present disclosure.

FIG. 4 illustrates an image processing procedure according to some embodiments of the present disclosure.

FIGS. 5A–5E show brain images according to some embodiments of the present disclosure.

FIGS. 6A–6E show brain images according to some embodiments of the present disclosure.

FIGS. 7A and 7B show statistical results according to some embodiments of the present disclosure.

FIGS. 8A and 8B show statistical results according to some embodiments of the present disclosure.

FIGS. 9A and 9B show statistical results according to some embodiments of the present disclosure.

FIGS. 10A and 10B show statistical results according to some embodiments of the present disclosure.

FIGS. 11A–11E show brain images according to some embodiments of the present disclosure.

FIGS. 12A–12E show brain images according to some embodiments of the present disclosure.

FIG. 13 shows statistical results according to some embodiments of the present disclosure.

FIG. 14 illustrates a system according to some embodiments of the present disclosure.

For a better understanding of the foregoing aspects of the present disclosure, as well as its additional aspects and embodiments, reference should be made to the following description in conjunction with the drawings above. Like reference characters indicate like elements throughout the figures.

1410: client device
1411: processor
1412: memory
1420: database device
1421: processor
1422: memory
1430: server-side device
1431: processor
1432: memory

Claims (14)

1. A method for identifying white matter hyperintensities, comprising: receiving an input image; performing a first set of convolutions; generating a first feature map set after performing the first set of convolutions; performing a first pooling on the first feature map set to generate a first set of max pooling indices and a second feature map set; performing a second set of convolutions on the second feature map set to generate a third feature map set; and performing an output convolution to generate at least two image segmentations, the output convolution comprising performing a convolution with at least two kernel maps.

2. The method of claim 1, further comprising: performing a first unpooling on the third feature map set based on the first set of max pooling indices to generate a fourth feature map set; skip-connecting the first feature map set to the fourth feature map set to generate a fifth feature map set; and performing a third set of convolutions and the output convolution on the fifth feature map set to generate the at least two image segmentations.
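The pooling-with-indices and unpooling steps recited in claims 1 and 2 follow the SegNet-style pattern of recording where each window maximum came from, then scattering decoder values back to exactly those positions. A minimal NumPy sketch of that pattern (an illustration of the general technique, not the patented implementation; all function names are ours):

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    """k-by-k max pooling that also records each window maximum's
    flat position in the input, for later index-based unpooling."""
    h, w = x.shape
    out = np.zeros((h // k, w // k), dtype=x.dtype)
    idx = np.zeros((h // k, w // k), dtype=np.int64)
    for i in range(h // k):
        for j in range(w // k):
            window = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            r, c = np.unravel_index(np.argmax(window), window.shape)
            out[i, j] = window[r, c]
            idx[i, j] = (i * k + r) * w + (j * k + c)
    return out, idx

def unpool(pooled, idx, shape):
    """Scatter pooled values back to their recorded argmax positions;
    every other position stays zero."""
    out = np.zeros(shape, dtype=pooled.dtype).ravel()
    out[idx.ravel()] = pooled.ravel()
    return out.reshape(shape)
```

In a full network the pooled maps would pass through further convolutions before unpooling, as the claims describe; the indices alone are what let the decoder restore spatial detail without learned upsampling weights.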
3. The method of claim 1, wherein the input image comprises a T2-weighted fluid-attenuated inversion recovery (FLAIR) image generated by magnetic resonance imaging (MRI).

4. The method of claim 1, wherein the at least two image segmentations comprise a gray matter image segmentation, a white matter image segmentation, a white matter hyperintensity image segmentation, a cerebrospinal fluid image segmentation, a scalp-and-skull image segmentation, or an air image segmentation.

5. The method of claim 1, further comprising: performing a second pooling on the third feature map set to generate a second set of max pooling indices and a fourth feature map set; performing a third set of convolutions on the fourth feature map set to generate a fifth feature map set; performing a third pooling on the fifth feature map set to generate a third set of max pooling indices and a sixth feature map set; and performing a fourth set of convolutions on the sixth feature map set to generate a seventh feature map set.
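The output convolution of claim 1, with one kernel map per tissue class as claim 4 enumerates them, can be read as a 1×1 convolution over the feature channels followed by a per-pixel argmax. A hedged NumPy sketch (the function name, shapes, and class ordering are our assumptions, not taken from the patent):

```python
import numpy as np

def output_convolution(features, kernels):
    """1x1 'output convolution': each of the K kernel maps mixes the
    C feature channels into one class score map; the per-pixel argmax
    over the class axis yields the segmentation label map (e.g. gray
    matter, white matter, WMH, CSF, scalp/skull, air -- the class
    meanings here are illustrative)."""
    # features: (C, H, W); kernels: (K, C) -> scores: (K, H, W)
    scores = np.einsum('kc,chw->khw', kernels, features)
    return scores.argmax(axis=0)  # (H, W) integer label map
```

A trained network would learn the kernel weights; at inference each pixel simply takes the class whose score map is largest there.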
6. The method of claim 5, further comprising: performing a first unpooling on the seventh feature map set based on the third set of max pooling indices to generate an eighth feature map set; skip-connecting the fifth feature map set to the eighth feature map set to generate a ninth feature map set; performing a fifth set of convolutions on the ninth feature map set to generate a tenth feature map set; performing a second unpooling on the tenth feature map set based on the second set of max pooling indices to generate an eleventh feature map set; skip-connecting the third feature map set to the eleventh feature map set to generate a twelfth feature map set; performing a sixth set of convolutions on the twelfth feature map set to generate a thirteenth feature map set; performing a third unpooling on the thirteenth feature map set based on the first set of max pooling indices to generate a fourteenth feature map set; skip-connecting the first feature map set to the fourteenth feature map set to generate a fifteenth feature map set; and performing a seventh set of convolutions and the output convolution on the fifteenth feature map set to generate the at least two image segmentations.
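Claims 5 and 6 together describe a three-level encoder–decoder whose unpooling stages reuse the stored max-pooling indices and whose decoder stages skip-connect back to encoder feature maps. A single-level PyTorch sketch of that pattern, illustrative only: the channel counts, kernel sizes, and concatenation-style skip connection are all our assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn

class MiniSegNet(nn.Module):
    """One encoder stage (convolutions + max pooling with indices),
    one decoder stage (index-based unpooling + skip connection), and an
    output convolution with one kernel map per class."""
    def __init__(self, in_ch=1, feat=16, n_classes=6):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.MaxPool2d(2, return_indices=True)
        self.mid = nn.Sequential(
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.unpool = nn.MaxUnpool2d(2)
        # after skip-concatenation the channel count doubles
        self.dec = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.out = nn.Conv2d(feat, n_classes, 1)  # one kernel map per class

    def forward(self, x):
        f1 = self.enc(x)                  # first feature map set
        p, idx = self.pool(f1)            # pooling + max pooling indices
        f3 = self.mid(p)                  # further convolutions
        up = self.unpool(f3, idx)         # unpooling with stored indices
        cat = torch.cat([f1, up], dim=1)  # skip connection
        return self.out(self.dec(cat))    # per-class score maps
```

The claimed network repeats this pool/unpool pairing at three depths; stacking additional stages follows the same index-passing discipline shown here.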
7. A method of white matter hyperintensity quantification, comprising: receiving a plurality of input images; performing the method of claim 4 on each of the plurality of input images to generate a plurality of gray matter image segmentations, a plurality of white matter image segmentations, a plurality of white matter hyperintensity image segmentations, and a plurality of cerebrospinal fluid image segmentations; generating a gray matter volume based on the plurality of gray matter image segmentations; generating a white matter volume based on the plurality of white matter image segmentations; generating a white matter hyperintensity volume based on the plurality of white matter hyperintensity image segmentations; generating a cerebrospinal fluid volume based on the plurality of cerebrospinal fluid image segmentations; and performing image grading based on the gray matter volume, the white matter volume, the white matter hyperintensity volume, the cerebrospinal fluid volume, and a slice thickness of the plurality of input images.
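The quantification step of claim 7 amounts to counting each tissue's segmented voxels across slices and scaling by in-plane voxel area and slice thickness. The claim does not spell out the grading rule itself, so the ratio below is only an assumed stand-in for it. A NumPy sketch (function names and the severity index are ours):

```python
import numpy as np

def tissue_volume(masks, voxel_area_mm2, slice_thickness_mm):
    """Sum a tissue's segmented voxels over all slices and convert the
    count to a volume in mm^3. `masks` is a list of 2-D boolean arrays,
    one per slice."""
    n_voxels = sum(int(m.sum()) for m in masks)
    return n_voxels * voxel_area_mm2 * slice_thickness_mm

def wmh_ratio(wmh_vol, gm_vol, wm_vol, csf_vol):
    """Illustrative severity index: WMH volume as a fraction of an
    intracranial volume approximated by gray matter + white matter +
    CSF. The patent grades on these volumes and the slice thickness;
    this particular ratio is an assumption, not the patented rule."""
    return wmh_vol / (gm_vol + wm_vol + csf_vol)
```

Normalizing by intracranial volume is one common way to make the WMH burden comparable across head sizes; the actual grading in the disclosure may differ.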
8. An apparatus for identifying white matter hyperintensities, comprising: a processor; and a memory coupled to the processor, wherein the processor executes computer-readable instructions stored in the memory to perform operations comprising: receiving at least one input image; and, for each of the at least one input image: performing a first set of convolutions; generating a first feature map set after performing the first set of convolutions; performing a first pooling on the first feature map set to generate a first set of max pooling indices and a second feature map set; performing a second set of convolutions on the second feature map set to generate a third feature map set; and performing an output convolution to generate at least two image segmentations, the output convolution comprising performing a convolution with at least two kernel maps.

9. The apparatus of claim 8, wherein the operations for each of the at least one input image further comprise: performing a first unpooling on the third feature map set based on the first set of max pooling indices to generate a fourth feature map set; skip-connecting the first feature map set to the fourth feature map set to generate a fifth feature map set; and performing a third set of convolutions and the output convolution on the fifth feature map set to generate the at least two image segmentations.

10. The apparatus of claim 8, wherein the input image comprises a T2-weighted fluid-attenuated inversion recovery (FLAIR) image generated by magnetic resonance imaging (MRI).

11. The apparatus of claim 8, wherein the at least two image segmentations comprise a gray matter image segmentation, a white matter image segmentation, a white matter hyperintensity image segmentation, a cerebrospinal fluid image segmentation, a scalp-and-skull image segmentation, or an air image segmentation.

12. The apparatus of claim 8, wherein the operations for each of the at least one input image further comprise: performing a second pooling on the third feature map set to generate a second set of max pooling indices and a fourth feature map set; performing a third set of convolutions on the fourth feature map set to generate a fifth feature map set; performing a third pooling on the fifth feature map set to generate a third set of max pooling indices and a sixth feature map set; and performing a fourth set of convolutions on the sixth feature map set to generate a seventh feature map set.
13. The apparatus of claim 12, wherein the operations for each of the at least one input image further comprise: performing a first unpooling on the seventh feature map set based on the third set of max pooling indices to generate an eighth feature map set; skip-connecting the fifth feature map set to the eighth feature map set to generate a ninth feature map set; performing a fifth set of convolutions on the ninth feature map set to generate a tenth feature map set; performing a second unpooling on the tenth feature map set based on the second set of max pooling indices to generate an eleventh feature map set; skip-connecting the third feature map set to the eleventh feature map set to generate a twelfth feature map set; performing a sixth set of convolutions on the twelfth feature map set to generate a thirteenth feature map set; performing a third unpooling on the thirteenth feature map set based on the first set of max pooling indices to generate a fourteenth feature map set; skip-connecting the first feature map set to the fourteenth feature map set to generate a fifteenth feature map set; and performing a seventh set of convolutions and the output convolution on the fifteenth feature map set to generate the at least two image segmentations.
14. The apparatus of claim 11, wherein the processor further performs operations comprising: generating, based on the plurality of input images, a plurality of gray matter image segmentations, a plurality of white matter image segmentations, a plurality of white matter hyperintensity image segmentations, and a plurality of cerebrospinal fluid image segmentations; generating a gray matter volume based on the plurality of gray matter image segmentations; generating a white matter volume based on the plurality of white matter image segmentations; generating a white matter hyperintensity volume based on the plurality of white matter hyperintensity image segmentations; generating a cerebrospinal fluid volume based on the plurality of cerebrospinal fluid image segmentations; and performing image grading based on the gray matter volume, the white matter volume, the white matter hyperintensity volume, the cerebrospinal fluid volume, and a slice thickness of the plurality of input images.
TW109133700A 2020-09-28 2020-09-28 Method and apparatus for identifying white matter hyperintensities TWI768483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW109133700A TWI768483B (en) 2020-09-28 2020-09-28 Method and apparatus for identifying white matter hyperintensities


Publications (2)

Publication Number Publication Date
TW202213378A (en) 2022-04-01
TWI768483B true TWI768483B (en) 2022-06-21

Family

ID=82197118

Country Status (1)

Country Link
TW (1) TWI768483B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109300531A (en) * 2018-08-24 2019-02-01 深圳大学 A kind of cerebral disease method of early diagnosis and device
US20190246904A1 (en) * 2016-10-20 2019-08-15 Jlk Inspection Stroke diagnosis and prognosis prediction method and system
CN110797123A (en) * 2019-10-28 2020-02-14 大连海事大学 Graph convolution neural network evolution method of dynamic brain structure
TW202027028A (en) * 2018-08-15 2020-07-16 美商超精細研究股份有限公司 Deep learning techniques for suppressing artefacts in magnetic resonance images


