TWI768483B - Method and apparatus for identifying white matter hyperintensities - Google Patents
- Publication number: TWI768483B
- Application number: TW109133700A
- Authority: TW (Taiwan)
- Prior art keywords
- feature map
- generate
- white matter
- image
- output
- Prior art date
Description
The present disclosure generally relates to a method and apparatus for image recognition. More specifically, the present disclosure relates to a method and apparatus for identifying white matter hyperintensities (WMH).

In the field of medical imaging, image interpretation is generally performed by specialist physicians. For example, after a brain image is obtained by magnetic resonance imaging (MRI), a neurologist can circle lesions on the image. By referring to the circled lesions, the patient's complaints, and other medical examination results, the specialist can determine the etiology and pathogenesis of the disease, and accordingly make a diagnosis and plan treatment. In the field of neurology, white matter hyperintensities (WMH) are often regarded by physicians as possible white matter lesions.

With the development of image recognition technology, such technology can now be used for the preliminary delineation of medical images, increasing both the efficiency and the accuracy with which specialist physicians interpret them.
Some embodiments of the present disclosure relate to a method of identifying white matter hyperintensities. The method includes: receiving an input image; performing a first set of convolutions; and performing an output convolution to generate at least two image segmentations, the output convolution comprising convolving with at least two kernel maps.

Some embodiments of the present disclosure relate to a method of quantifying white matter hyperintensities. The method includes: receiving a plurality of input images; performing a method of identifying white matter hyperintensities on each of the plurality of input images to generate a plurality of gray matter image segmentations, a plurality of white matter image segmentations, a plurality of white matter hyperintensity image segmentations, and a plurality of cerebrospinal fluid image segmentations; generating a gray matter volume based on the plurality of gray matter image segmentations; generating a white matter volume based on the plurality of white matter image segmentations; generating a white matter hyperintensity volume based on the plurality of white matter hyperintensity image segmentations; generating a cerebrospinal fluid volume based on the plurality of cerebrospinal fluid image segmentations; and performing image grading based on the gray matter volume, the white matter volume, the white matter hyperintensity volume, the cerebrospinal fluid volume, and the slice thickness of the plurality of input images.

Some embodiments of the present disclosure relate to an apparatus for identifying white matter hyperintensities. The apparatus includes a processor and a memory coupled to the processor. The processor executes computer-readable instructions stored in the memory to perform the following operations: receiving at least one input image, and, for each of the at least one input image, performing a first set of convolutions and performing an output convolution to generate at least two image segmentations, the output convolution comprising convolving with at least two kernel maps.
Methods, systems, and other aspects of the invention are described below. Reference will be made to certain embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with these embodiments, it will be understood that there is no intention to limit the invention to these specific embodiments. On the contrary, the invention is intended to cover alternatives, modifications, and equivalents within its spirit and scope. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Furthermore, in the following description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, one of ordinary skill in the art will be able to practice the invention without these specific details. In other instances, methods, procedures, operations, components, and networks well known to those of ordinary skill in the art are not described in detail, in order to avoid obscuring aspects of the invention.

Some embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
FIG. 1 illustrates an image recognition flow according to some embodiments of the present disclosure. FIG. 1 shows an input image 101 and output images 201, 203, 205, 207, 209, and 211. According to FIG. 1, the input image 101 may be processed by an image processing program 300. After being processed by the image processing program 300, the input image 101 may yield two or more output images, for example output images 201, 203, 205, 207, 209, and 211.

The input image 101 may be a planar image or a 2D image. In other embodiments, the input image 101 may be a stereoscopic image or a 3D image. In some embodiments, the input image 101 may be an MRI image. For example, the input image 101 may be a FLAIR (Fluid Attenuated Inversion Recovery) image. The input image 101 may also be a T2-FLAIR image.

The image processing program 300 may segment the input image 101 into multiple images according to the characteristics of different tissues. The output images 201, 203, 205, 207, 209, and 211 may indicate different tissues. In some embodiments, output image 201 may indicate gray matter; output image 203 may indicate white matter; output image 205 may indicate white matter hyperintensity; output image 207 may indicate cerebrospinal fluid; output image 209 may indicate scalp and/or skull; and output image 211 may indicate air.

In some embodiments, the image processing program 300 may output two or more images from the input image 101. For example, the image processing program 300 may output two images, one indicating white matter and one indicating white matter hyperintensity. As another example, the image processing program 300 may output three images, one indicating gray matter, one indicating white matter, and one indicating white matter hyperintensity.

The image processing program 300 may be a 2D or 3D image recognition program for recognizing features in an image. The image processing program 300 may be an image recognition program generated by machine learning, for example by deep learning, or by backpropagation of a convolutional neural network. The image processing program 300 may be a convolutional neural network, for example a convolutional neural network for image semantic segmentation. The image processing program 300 may be a convolutional neural network of the U-Net architecture, a multi-class-output convolutional neural network of the U-Net architecture, a convolutional neural network of the U-SegNet architecture, or a multi-class-output convolutional neural network of the U-SegNet architecture.
In some embodiments of the present disclosure, the image processing program 300 may be a convolutional neural network generated by backpropagation of a convolutional neural network. The cases used to train and test the convolutional neural network in the embodiments of the present disclosure are shown in Table 1. The process of generating a convolutional neural network by backpropagation is not detailed in the present disclosure.
FIG. 2 illustrates a grading flow according to some embodiments of the present disclosure. According to the image recognition flow shown in FIG. 1, when multiple input images 101 are input to the image processing program 300, the image processing program 300 may correspondingly output multiple output images 201, multiple output images 203, multiple output images 205, multiple output images 207, multiple output images 209, and/or multiple output images 211. For example, 100 brain images (e.g., 100 images of the same patient) may be input to the image processing program 300 in sequence, and the image processing program 300 may correspondingly output 100 output images 201, 100 output images 203, 100 output images 205, 100 output images 207, 100 output images 209, and/or 100 output images 211.

Referring to FIG. 2, a volume 231 may be computed from the multiple output images 201; a volume 233 may be computed from the multiple output images 203; a volume 235 may be computed from the multiple output images 205; and a volume 237 may be computed from the multiple output images 207. In some embodiments, the output images 201 indicate gray matter; the output images 203 indicate white matter; the output images 205 indicate white matter hyperintensity; and the output images 207 indicate cerebrospinal fluid. In these embodiments, the gray matter volume 231 may be obtained by computing the gray matter area in each of the multiple output images 201; the white matter volume 233 may be obtained by computing the white matter area in each of the multiple output images 203; the white matter hyperintensity volume 235 may be obtained by computing the white matter hyperintensity area in each of the multiple output images 205; and the cerebrospinal fluid volume 237 may be obtained by computing the cerebrospinal fluid area in each of the multiple output images 207.
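The area-to-volume computation described above can be sketched as follows. This is a minimal illustration rather than the patented implementation, and the mask shape, pixel area, and slice thickness are assumed values for the example: each slice's segmented area is multiplied by the slice thickness, and the per-slice contributions are summed.

```python
import numpy as np

def tissue_volume(masks, pixel_area_cm2, slice_thickness_cm):
    """Estimate a tissue volume (cm^3) from a stack of binary segmentation masks.

    masks: array of shape (n_slices, H, W), with 1 where the tissue is present.
    Each slice contributes (segmented area) * (slice thickness) to the total.
    """
    pixels_per_slice = masks.reshape(masks.shape[0], -1).sum(axis=1)
    areas_cm2 = pixels_per_slice * pixel_area_cm2      # cm^2 per slice
    return float(areas_cm2.sum() * slice_thickness_cm)  # cm^3

# Toy example: 3 slices of 4x4 pixels, each pixel 0.25 cm^2, slices 0.5 cm thick.
masks = np.zeros((3, 4, 4))
masks[:, 1:3, 1:3] = 1   # a 2x2 region per slice -> 4 px * 0.25 cm^2 = 1 cm^2 each
volume = tissue_volume(masks, pixel_area_cm2=0.25, slice_thickness_cm=0.5)  # 1.5 cm^3
```

The same routine would be applied once per tissue class (gray matter, white matter, WMH, cerebrospinal fluid) to obtain volumes 231, 233, 235, and 237.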
In some embodiments, the image processing program 300 may output two or more images from each input image 101. For example, 100 input images 101 may yield two sets of output images: the first set may comprise 100 images indicating white matter, and the second set may comprise 100 images indicating white matter hyperintensity. In these embodiments, the white matter volume may be obtained by computing the white matter area in each image of the first set, and the white matter hyperintensity volume may be obtained by computing the white matter hyperintensity area in each image of the second set.

In some embodiments, the image processing program 300 may output three images from each input image 101. For example, 100 input images 101 may yield three sets of output images: the first set may comprise 100 images indicating gray matter, the second set may comprise 100 images indicating white matter, and the third set may comprise 100 images indicating white matter hyperintensity. In these embodiments, the gray matter volume may be obtained by computing the gray matter area in each image of the first set; the white matter volume may be obtained by computing the white matter area in each image of the second set; and the white matter hyperintensity volume may be obtained by computing the white matter hyperintensity area in each image of the third set.

According to some embodiments of FIG. 2, the gray matter volume 231, the white matter volume 233, the white matter hyperintensity volume 235, and the cerebrospinal fluid volume 237 may be input to a grading program 800. After processing by the grading program 800, a grading result 900 may be output. In some embodiments, in addition to the gray matter volume 231, the white matter volume 233, the white matter hyperintensity volume 235, and the cerebrospinal fluid volume 237, image parameters 239 may optionally be input to the grading program 800 to increase the efficiency of the grading program 800 or the accuracy of the grading result 900. The image parameters 239 may include image parameters of the input images 101, for example the slice thickness of the input images 101.

In some embodiments, the grading program 800 may be a gradient boosting program. Gradient boosting is a machine learning technique for regression and classification problems. In some embodiments, the grading program 800 may be an XGBoost (eXtreme Gradient Boosting) program. In some embodiments of the present disclosure, the grading program 800 may be a gradient boosting program generated by machine training. The cases used to train and test the gradient boosting program or the XGBoost program in the embodiments of the present disclosure are shown in Table 1. The process of generating a gradient boosting program or an XGBoost program by machine learning is not detailed in the present disclosure.
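As background on the technique named above, the following is a minimal gradient-boosting sketch using regression stumps on toy one-dimensional data. It is not the trained grading program 800; the data, round count, and learning rate are illustrative assumptions. Libraries such as XGBoost implement the same residual-fitting idea with regularized trees.

```python
import numpy as np

def fit_stump(x, residual):
    """Find the x-threshold split of the residual minimizing squared error."""
    best = None
    for t in np.unique(x)[:-1]:  # splitting at the largest x would leave one side empty
        left, right = residual[x <= t], residual[x > t]
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    return best[1:]  # (threshold, left value, right value)

def gradient_boost(x, y, n_rounds=200, lr=0.1):
    """Least-squares gradient boosting: each round fits a stump to the residual
    (the negative gradient of the squared loss) and adds a shrunken copy of it."""
    pred = np.full(len(y), y.mean())
    for _ in range(n_rounds):
        t, lv, rv = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= t, lv, rv)
    return pred

# Toy data: a grade-like target that increases with lesion volume (illustrative only).
x = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 9.0, 12.0])
y = np.array([0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 3.0, 3.0])
pred = gradient_boost(x, y)
mse = float(((pred - y) ** 2).mean())
baseline_mse = float(((y - y.mean()) ** 2).mean())  # error of a constant predictor
```

After a few hundred rounds, the boosted ensemble fits the training targets far better than the constant baseline, which is the property the grading program relies on.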
The grading result 900 may be a result according to the Fazekas grades. In the Fazekas grades, grade 0 may indicate no white matter hyperintensity or no white matter lesion; grade 1 may indicate that the white matter hyperintensity area (or white matter lesion area) consists of punctate foci or thin lines; grade 2 may indicate that the white matter hyperintensity areas (or white matter lesion areas) are beginning to become confluent; and grade 3 may indicate large confluent areas of white matter hyperintensity (or white matter lesion).

FIG. 3A illustrates an image processing program 300 according to some embodiments of the present disclosure. The input image 101 may be 512×512×1, i.e., a 512×512 image with 1 channel, or a 512×512 image with 1 dimension. In the present disclosure, the units of 512×512, 256×256, 128×128, or 64×64 may be pixels or other suitable units.
Operation 301 is performed on the input image 101 to output a feature map set 102. Operation 301 may be a convolution with multiple kernel maps. In some embodiments, operation 301 may be a convolution with 32 kernel maps, and the output feature map set 102 may comprise 32 feature maps. In some embodiments, operation 301 may include zero padding. Each feature map in feature map set 102 may be 512×512. In some embodiments, operation 301 may include batch normalization and Rectified Linear Unit (ReLU) processing.

Operation 302 is performed on feature map set 102 to output a feature map set 103. Operation 302 may be a convolution with multiple kernel maps. In some embodiments, operation 302 may be a convolution with 64 kernel maps, and the output feature map set 103 may comprise 64 feature maps. In some embodiments, operation 302 may include zero padding. Each feature map in feature map set 103 may be 512×512. In some embodiments, operation 302 may include batch normalization and ReLU processing. Operations 301 and 302 may be regarded as one set of convolutions.
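A single convolution step of the kind operations 301 and 302 describe, i.e., a "same"-size convolution via zero padding followed by ReLU, can be sketched as below. This is an illustrative NumPy version with assumed sizes (an 8×8 input and 3×3 kernels instead of 512×512 images), and batch normalization is omitted for brevity.

```python
import numpy as np

def conv2d_same(image, kernels):
    """'Same'-size 2D convolution: zero-pad the input so each of the K kernel
    maps yields a feature map with the input's spatial size, then apply ReLU."""
    kh, kw = kernels.shape[1:]
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))  # zero padding
    h, w = image.shape
    out = np.empty((kernels.shape[0], h, w))
    for k, kern in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                out[k, i, j] = (padded[i:i + kh, j:j + kw] * kern).sum()
    return np.maximum(out, 0.0)  # ReLU

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernels = rng.standard_normal((32, 3, 3))  # 32 kernel maps -> 32 feature maps
features = conv2d_same(image, kernels)     # shape (32, 8, 8), all values >= 0
```

(As is conventional in deep learning, "convolution" here is implemented as sliding-window correlation; learned kernels make the distinction immaterial.)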
Operation 307 is performed on feature map set 103 to output a set of output images. Operation 307 may be an output convolution. In some embodiments, operation 307 may include zero padding. Operation 307 may be a convolution with multiple kernel maps, and in particular with at least two kernel maps. In some embodiments, operation 307 may be a convolution with 6 kernel maps to generate output images 201, 203, 205, 207, 209, and 211. Output image 201 may indicate gray matter; output image 203 may indicate white matter; output image 205 may indicate white matter hyperintensity; output image 207 may indicate cerebrospinal fluid; output image 209 may indicate scalp and/or skull; and output image 211 may indicate air.

In some embodiments, operation 307 may be a convolution with 2 kernel maps to generate two output images, one indicating white matter and one indicating white matter hyperintensity.

In some embodiments, operation 307 may be a convolution with 3 kernel maps to generate three output images, one indicating gray matter, one indicating white matter, and one indicating white matter hyperintensity.

After being processed by the image processing program 300, the input image 101 may yield two or more output images, thereby producing a multi-class semantic segmentation of the input image 101. A single-class semantic segmentation outputs only one image after processing, and that output image may indicate white matter hyperintensity. Compared with single-class semantic segmentation, multi-class semantic segmentation can achieve better discrimination of tissue types and can reduce false positive lesion determinations.
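One common way to realize such a multi-class output, shown here as an assumed illustration rather than as the patented method, is to let the output convolution produce one score map per class and assign each pixel to its highest-scoring class, yielding one binary segmentation image per tissue type:

```python
import numpy as np

def split_classes(scores):
    """Turn K per-class score maps of shape (K, H, W) into K binary
    segmentation images by assigning each pixel to its highest-scoring class."""
    labels = scores.argmax(axis=0)  # (H, W): winning class index per pixel
    return np.stack([(labels == k).astype(np.uint8) for k in range(scores.shape[0])])

# Toy 6-class scores for a 4x4 image (e.g., GM, WM, WMH, CSF, scalp/skull, air).
rng = np.random.default_rng(1)
scores = rng.standard_normal((6, 4, 4))
masks = split_classes(scores)  # shape (6, 4, 4); exactly one class per pixel
```

Because every pixel belongs to exactly one class, a pixel confidently explained by, say, gray matter or cerebrospinal fluid cannot simultaneously be labeled WMH, which is one intuition for the reduced false positives noted above.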
FIG. 3B illustrates an image processing program 300 according to some embodiments of the present disclosure. The input image 101 may be 512×512×1, i.e., a 512×512 image with 1 channel, or a 512×512 image with 1 dimension.

Operation 301 is performed on the input image 101 to output a feature map set 102. Operation 301 may be a convolution with multiple kernel maps. In some embodiments, operation 301 may be a convolution with 32 kernel maps, and the output feature map set 102 may comprise 32 feature maps. In some embodiments, operation 301 may include zero padding. Each feature map in feature map set 102 may be 512×512. In some embodiments, operation 301 may include batch normalization and ReLU processing.

Operation 302 is performed on feature map set 102 to output a feature map set 103. Operation 302 may be a convolution with multiple kernel maps. In some embodiments, operation 302 may be a convolution with 64 kernel maps, and the output feature map set 103 may comprise 64 feature maps. In some embodiments, operation 302 may include zero padding. Each feature map in feature map set 103 may be 512×512. In some embodiments, operation 302 may include batch normalization and ReLU processing. Operations 301 and 302 may be regarded as one set of convolutions.

Operation 401 is performed on feature map set 103 to output a feature map set 104. Operation 401 may be a down-sampling process, for example max pooling. In some embodiments, operation 401 may be 2×2 max pooling, and the output feature map set 104 may comprise 64 feature maps, each of which may be 256×256.
Operation 303 is performed on feature map set 104 to output a feature map set 105. Operation 303 may be a convolution with multiple kernel maps. In some embodiments, operation 303 may be a convolution with 64 kernel maps, and the output feature map set 105 may comprise 64 feature maps. In some embodiments, operation 303 may include zero padding. Each feature map in feature map set 105 may be 256×256. In some embodiments, operation 303 may include batch normalization and ReLU processing.

Operation 304 is performed on feature map set 105 to output a feature map set 106. Operation 304 may be a convolution with multiple kernel maps. In some embodiments, operation 304 may be a convolution with 128 kernel maps, and the output feature map set 106 may comprise 128 feature maps. In some embodiments, operation 304 may include zero padding. Each feature map in feature map set 106 may be 256×256. In some embodiments, operation 304 may include batch normalization and ReLU processing. Operations 303 and 304 may be regarded as one set of convolutions.

Operation 601 is performed on feature map set 106 to output a feature map set 107. Operation 601 may be an up-sampling process, for example max unpooling. In some embodiments, operation 601 may be 2×2 max unpooling, and the output feature map set 107 may comprise 128 feature maps, each of which may be 512×512. In some embodiments, operation 601 may be performed according to the down-sampling indices (or pooling indices) passed by operation 501. For example, when the (2×2) max pooling of operation 401 is performed, the max pooling indices may be stored (e.g., operation 501), and when operation 601 is performed, the (2×2) max unpooling may be performed according to the max pooling indices (e.g., filling values back into the positions the indices indicate).

When down-sampling from a given resolution, the down-sampling indices (e.g., max pooling indices) are obtained; when up-sampling (e.g., max unpooling) back to the same resolution, the values can be filled back into the same positions according to those indices. Such unpooling can replace deconvolution, which shortens the required training time, reduces the required training data, and greatly reduces computation. In addition, performing unpooling according to the indices reduces the loss of high-frequency information, provides better tissue boundary segmentation, and increases the localization accuracy of small lesions.
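The pooling-index mechanism described above can be sketched as follows. This is a minimal single-channel NumPy illustration with an assumed 4×4 input, not the program's actual implementation: the pooling step records which position in each 2×2 window held the maximum, and the unpooling step writes each value back to exactly that position, leaving the other positions zero.

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling that also records, per window, the argmax position
    (the 'pooling indices', 0..3 in row-major order within the window)."""
    h, w = x.shape
    windows = x.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3).reshape(h // 2, w // 2, 4)
    return windows.max(axis=2), windows.argmax(axis=2)

def max_unpool_2x2(pooled, idx):
    """Unpool back to 2x the resolution: each value is written to the position
    its stored pooling index indicates; the remaining positions stay zero."""
    ph, pw = pooled.shape
    out = np.zeros((ph * 2, pw * 2))
    for i in range(ph):
        for j in range(pw):
            di, dj = divmod(int(idx[i, j]), 2)
            out[2 * i + di, 2 * j + dj] = pooled[i, j]
    return out

x = np.array([[1., 5., 2., 0.],
              [3., 4., 8., 6.],
              [0., 2., 1., 1.],
              [9., 7., 3., 2.]])
pooled, idx = max_pool_2x2(x)          # pooled == [[5., 8.], [9., 3.]]
restored = max_unpool_2x2(pooled, idx) # maxima return to their original positions
```

Unlike a learned deconvolution, this restoration has no trainable parameters, which is consistent with the training-time and data savings noted above.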
Operation 701 is performed on feature map set 103 and feature map set 107 to output a feature map set 108. Operation 701 may be a skip connection process. For example, feature map set 103 is stored before the pooling of operation 401 is performed, and feature map set 103 is then concatenated with feature map set 107, produced by the unpooling of operation 601, to produce feature map set 108. According to the exemplary embodiment of FIG. 3B, the number of feature maps in feature map set 108 is 64 plus 128, i.e., 192.

Through the skip connection process, the feature maps can be expanded to combine multi-scale image information. Skip connection processing reduces the loss of high-frequency information and provides better tissue boundary segmentation.
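The skip connection itself reduces to a channel-wise concatenation. The sketch below uses assumed small spatial dimensions (8×8 rather than the 512×512 of FIG. 3B) to show how 64 saved encoder maps and 128 unpooled decoder maps combine into the 192 maps of feature map set 108:

```python
import numpy as np

# Skip connection: feature maps saved before pooling (64 channels) are
# concatenated with the unpooled feature maps (128 channels) along the
# channel axis, giving 64 + 128 = 192 channels, as in feature map set 108.
encoder_maps = np.zeros((64, 8, 8))   # stored before max pooling (operation 401)
decoder_maps = np.zeros((128, 8, 8))  # produced by max unpooling (operation 601)
merged = np.concatenate([encoder_maps, decoder_maps], axis=0)  # (192, 8, 8)
```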
Operation 305 is performed on feature map set 108 to output a feature map set 109. Operation 305 may be a convolution with multiple kernel maps. In some embodiments, operation 305 may be a convolution with 64 kernel maps, and the output feature map set 109 may comprise 64 feature maps. In some embodiments, operation 305 may include zero padding. Each feature map in feature map set 109 may be 512×512. In some embodiments, operation 305 may include batch normalization and ReLU processing.

Operation 306 is performed on feature map set 109 to output a feature map set 110. Operation 306 may be a convolution with multiple kernel maps. In some embodiments, operation 306 may be a convolution with 64 kernel maps, and the output feature map set 110 may comprise 64 feature maps. In some embodiments, operation 306 may include zero padding. Each feature map in feature map set 110 may be 512×512. In some embodiments, operation 306 may include batch normalization and ReLU processing. Operations 305 and 306 may be regarded as one set of convolutions.
Operation 307 is performed on feature map set 110 to output a set of output images. Operation 307 may be an output convolution. In some embodiments, operation 307 may include zero padding. Operation 307 may be a convolution with multiple kernel maps, and in particular with at least two kernel maps. In some embodiments, operation 307 may be a convolution with 6 kernel maps to generate output images 201, 203, 205, 207, 209, and 211. Output image 201 may indicate gray matter; output image 203 may indicate white matter; output image 205 may indicate white matter hyperintensity; output image 207 may indicate cerebrospinal fluid; output image 209 may indicate scalp and/or skull; and output image 211 may indicate air.

In some embodiments, operation 307 may be a convolution with 2 kernel maps to generate two output images, one indicating white matter and one indicating white matter hyperintensity.

In some embodiments, operation 307 may be a convolution with 3 kernel maps to generate three output images, one indicating gray matter, one indicating white matter, and one indicating white matter hyperintensity.

After being processed by the image processing program 300, the input image 101 may yield two or more output images, thereby producing a multi-class semantic segmentation of the input image 101. A single-class semantic segmentation outputs only one image after processing, and that output image may indicate white matter hyperintensity. Compared with single-class semantic segmentation, multi-class semantic segmentation can achieve better discrimination of tissue types and can reduce false positive lesion determinations.
FIG. 4 illustrates an image processing program 300 according to some embodiments of the present disclosure. The technical content of operations 301, 302, 401, 303, and 304, the input image 101, and feature map sets 102, 103, 104, 105, and 106 in FIG. 4 is similar or identical to that of the operations, input image, and feature map sets bearing the same reference numerals in FIG. 3B, and is not repeated here.

Referring to FIG. 4, operation 402 is performed on feature map set 106, which comprises 128 feature maps, to output a feature map set 111. Operation 402 may be a down-sampling process, for example max pooling. In some embodiments, operation 402 may be 2×2 max pooling, and the output feature map set 111 may comprise 128 feature maps, each of which may be 128×128.
Operation 308 is performed on feature map set 111 to output a feature map set 112. Operation 308 may be a convolution with multiple kernel maps. In some embodiments, operation 308 may be a convolution with 128 kernel maps, and the output feature map set 112 may comprise 128 feature maps. In some embodiments, operation 308 may include zero padding. Each feature map in feature map set 112 may be 128×128. In some embodiments, operation 308 may include batch normalization and ReLU processing.

Operation 309 is performed on feature map set 112 to output a feature map set 113. Operation 309 may be a convolution with multiple kernel maps. In some embodiments, operation 309 may be a convolution with 256 kernel maps, and the output feature map set 113 may comprise 256 feature maps. In some embodiments, operation 309 may include zero padding. Each feature map in feature map set 113 may be 128×128. In some embodiments, operation 309 may include batch normalization and ReLU processing. Operations 308 and 309 may be regarded as one set of convolutions.

Operation 403 is performed on feature map set 113, which comprises 256 feature maps, to output a feature map set 114. Operation 403 may be a down-sampling process, for example max pooling. In some embodiments, operation 403 may be 2×2 max pooling, and the output feature map set 114 may comprise 256 feature maps, each of which may be 64×64.

Operation 310 is performed on feature map set 114 to output a feature map set 115. Operation 310 may be a convolution with multiple kernel maps. In some embodiments, operation 310 may be a convolution with 256 kernel maps, and the output feature map set 115 may comprise 256 feature maps. In some embodiments, operation 310 may include zero padding. Each feature map in feature map set 115 may be 64×64. In some embodiments, operation 310 may include batch normalization and ReLU processing.

Operation 311 is performed on feature map set 115 to output a feature map set 116. Operation 311 may be a convolution with multiple kernel maps. In some embodiments, operation 311 may be a convolution with 512 kernel maps, and the output feature map set 116 may comprise 512 feature maps. In some embodiments, operation 311 may include zero padding. Each feature map in feature map set 116 may be 64×64. In some embodiments, operation 311 may include batch normalization and ReLU processing. Operations 310 and 311 may be regarded as one set of convolutions.
Operation 602 is performed on feature map set 116 to output a feature map set 117. Operation 602 may be an up-sampling process, for example max unpooling. In some embodiments, operation 602 may be 2×2 max unpooling, and the output feature map set 117 may comprise 512 feature maps, each of which may be 128×128. In some embodiments, operation 602 may be performed according to the down-sampling indices (or pooling indices) passed by operation 502. For example, when the (2×2) max pooling of operation 403 is performed, the max pooling indices may be stored (e.g., operation 502), and when operation 602 is performed, the (2×2) max unpooling may be performed according to the max pooling indices (e.g., filling values back into the positions the indices indicate).

When down-sampling from a given resolution, the down-sampling indices (e.g., max pooling indices) are obtained; when up-sampling (e.g., max unpooling) back to the same resolution, the values can be filled back into the same positions according to those indices. Such unpooling can replace deconvolution, which shortens the required training time, reduces the required training data, and greatly reduces computation. In addition, performing unpooling according to the indices reduces the loss of high-frequency information, provides better tissue boundary segmentation, and increases the localization accuracy of small lesions.

Operation 702 is performed on feature map set 113 and feature map set 117 to output a feature map set 118. Operation 702 may be a skip connection process. For example, feature map set 113 is stored before the pooling of operation 403 is performed, and feature map set 113 is then concatenated with feature map set 117, produced by the unpooling of operation 602, to produce feature map set 118. According to the exemplary embodiment of FIG. 4, the number of feature maps in feature map set 118 is 256 plus 512, i.e., 768.

Through the skip connection process, the feature maps can be expanded to combine multi-scale image information. Skip connection processing reduces the loss of high-frequency information and provides better tissue boundary segmentation.
Operation 312 is performed on feature map set 118 to output a feature map set 119. Operation 312 may be a convolution with multiple kernel maps. In some embodiments, operation 312 may be a convolution with 256 kernel maps, and the output feature map set 119 may comprise 256 feature maps. In some embodiments, operation 312 may include zero padding. Each feature map in feature map set 119 may be 128×128. In some embodiments, operation 312 may include batch normalization and ReLU processing.

Operation 313 is performed on feature map set 119 to output a feature map set 120. Operation 313 may be a convolution with multiple kernel maps. In some embodiments, operation 313 may be a convolution with 256 kernel maps, and the output feature map set 120 may comprise 256 feature maps. In some embodiments, operation 313 may include zero padding. Each feature map in feature map set 120 may be 128×128. In some embodiments, operation 313 may include batch normalization and ReLU processing. Operations 312 and 313 may be regarded as one set of convolutions.

Operation 603 is performed on feature map set 120 to output a feature map set 121. Operation 603 may be an up-sampling process, for example max unpooling. In some embodiments, operation 603 may be 2×2 max unpooling, and the output feature map set 121 may comprise 256 feature maps, each of which may be 256×256. In some embodiments, operation 603 may be performed according to the down-sampling indices (or pooling indices) passed by operation 503. For example, when the (2×2) max pooling of operation 402 is performed, the max pooling indices may be stored (e.g., operation 503), and when operation 603 is performed, the (2×2) max unpooling may be performed according to the max pooling indices (e.g., filling values back into the positions the indices indicate).

Operation 703 is performed on feature map set 106 and feature map set 121 to output a feature map set 122. Operation 703 may be a skip connection process. For example, feature map set 106 is stored before the pooling of operation 402 is performed, and feature map set 106 is then concatenated with feature map set 121, produced by the unpooling of operation 603, to produce feature map set 122. According to the exemplary embodiment of FIG. 4, the number of feature maps in feature map set 122 is 128 plus 256, i.e., 384.
Operation 314 is performed on feature map set 122 to output a feature map set 123. Operation 314 may be a convolution with multiple kernel maps. In some embodiments, operation 314 may be a convolution with 128 kernel maps, and the output feature map set 123 may comprise 128 feature maps. In some embodiments, operation 314 may include zero padding. Each feature map in feature map set 123 may be 256×256. In some embodiments, operation 314 may include batch normalization and ReLU processing.

Operation 315 is performed on feature map set 123 to output a feature map set 124. Operation 315 may be a convolution with multiple kernel maps. In some embodiments, operation 315 may be a convolution with 128 kernel maps, and the output feature map set 124 may comprise 128 feature maps. In some embodiments, operation 315 may include zero padding. Each feature map in feature map set 124 may be 256×256. In some embodiments, operation 315 may include batch normalization and ReLU processing. Operations 314 and 315 may be regarded as one set of convolutions.

Operation 601 is performed on feature map set 124 to output a feature map set 107. Operation 601 may be an up-sampling process, for example max unpooling. In some embodiments, operation 601 may be 2×2 max unpooling, and the output feature map set 107 may comprise 128 feature maps, each of which may be 512×512. In some embodiments, operation 601 may be performed according to the down-sampling indices (or pooling indices) passed by operation 501. For example, when the (2×2) max pooling of operation 401 is performed, the max pooling indices may be stored (e.g., operation 501), and when operation 601 is performed, the (2×2) max unpooling may be performed according to the max pooling indices (e.g., filling values back into the positions the indices indicate).

Operation 701 is performed on feature map set 103 and feature map set 107 to output a feature map set 108. Operation 701 may be a skip connection process. For example, feature map set 103 is stored before the pooling of operation 401 is performed, and feature map set 103 is then concatenated with feature map set 107, produced by the unpooling of operation 601, to produce feature map set 108. According to the exemplary embodiment of FIG. 4, the number of feature maps in feature map set 108 is 64 plus 128, i.e., 192.
Operation 305 is performed on feature map set 108 to output a feature map set 109. Operation 305 may be a convolution with multiple kernel maps. In some embodiments, operation 305 may be a convolution with 64 kernel maps, and the output feature map set 109 may comprise 64 feature maps. In some embodiments, operation 305 may include zero padding. Each feature map in feature map set 109 may be 512×512. In some embodiments, operation 305 may include batch normalization and ReLU processing.

Operation 306 is performed on feature map set 109 to output a feature map set 110. Operation 306 may be a convolution with multiple kernel maps. In some embodiments, operation 306 may be a convolution with 64 kernel maps, and the output feature map set 110 may comprise 64 feature maps. In some embodiments, operation 306 may include zero padding. Each feature map in feature map set 110 may be 512×512. In some embodiments, operation 306 may include batch normalization and ReLU processing. Operations 305 and 306 may be regarded as one set of convolutions.
Operation 307 is performed on feature map set 110 to output a set of output images. Operation 307 may be an output convolution. In some embodiments, operation 307 may include zero padding. Operation 307 may be a convolution with multiple kernel maps, and in particular with at least two kernel maps. In some embodiments, operation 307 may be a convolution with 6 kernel maps to generate output images 201, 203, 205, 207, 209, and 211. Output image 201 may indicate gray matter; output image 203 may indicate white matter; output image 205 may indicate white matter hyperintensity; output image 207 may indicate cerebrospinal fluid; output image 209 may indicate scalp and/or skull; and output image 211 may indicate air.

In some embodiments, operation 307 may be a convolution with 2 kernel maps to generate two output images, one indicating white matter and one indicating white matter hyperintensity.

In some embodiments, operation 307 may be a convolution with 3 kernel maps to generate three output images, one indicating gray matter, one indicating white matter, and one indicating white matter hyperintensity.

After being processed by the image processing program 300, the input image 101 may yield two or more output images, thereby producing a multi-class semantic segmentation of the input image 101. A single-class semantic segmentation outputs only one image after processing, and that output image may indicate white matter hyperintensity. Compared with single-class semantic segmentation, multi-class semantic segmentation can achieve better discrimination of tissue types and can reduce false positive lesion determinations.

FIGS. 3A, 3B, and 4 of the present disclosure describe exemplary embodiments of the image processing program 300, but these exemplary embodiments are not intended to limit the invention. Those skilled in the art will understand that the image size, the feature map size, the number of feature maps, the number of kernel maps, the number of convolutions, the number of down-sampling operations, the number of up-sampling operations, the number of skip connections, and the number of pooling-index transfers may be varied without departing from the true spirit and scope of the invention disclosed herein.
FIGS. 5A to 5E show brain MRI 2D images according to some embodiments of the present disclosure. FIGS. 5A to 5E show brain T2-FLAIR images. Each of FIGS. 5A to 5E may be the input image 101 shown in FIG. 1, FIG. 3A, FIG. 3B, or FIG. 4 of the present disclosure.

FIGS. 6A to 6E show brain MRI 2D images according to some embodiments of the present disclosure. FIGS. 6A to 6E show brain T2-FLAIR images. Each of FIGS. 6A to 6E includes regions circled as white matter hyperintensity.

The green circled portions in each of FIGS. 6A to 6E indicate white matter hyperintensity regions circled only by the neurologist. The red circled portions in each of FIGS. 6A to 6E indicate white matter hyperintensity regions circled only by the image processing program 300 (as shown in FIG. 1, FIG. 3A, FIG. 3B, or FIG. 4). The yellow circled portions in each of FIGS. 6A to 6E indicate white matter hyperintensity regions circled by both the neurologist and the image processing program 300.

The green circled portions in each of FIGS. 6A to 6E may indicate false negative results of the image processing program 300. The red circled portions may indicate false positive results of the image processing program 300. The yellow circled portions may indicate true positive results of the image processing program 300.

Referring to FIGS. 6A to 6E, most circled portions are yellow, meaning that the white matter hyperintensity regions circled by the image processing program 300 largely overlap those circled by the neurologist. In other words, the ability of the image processing program 300 to circle white matter hyperintensity regions is roughly the same as that of the neurologist.

Referring to FIGS. 6A to 6E, green circled portions are rare, meaning that false negative results are rare. Likewise, red circled portions are rare, meaning that false positive results are rare.
FIGS. 7A and 7B show statistical results of the single-class-output U-Net according to the present disclosure. Single-class output may refer to single-class semantic segmentation of an image. The single-class-output U-Net may be the image processing program of FIG. 3B in which operation 501 is not performed and only one kernel map is used in operation 307 (the output convolution), so that only one output image is produced, and that output image indicates white matter hyperintensity regions. The single-class U-Net may also be the image processing program of FIG. 4 in which operations 501, 502, and 503 are not performed and only one kernel map is used in operation 307 (the output convolution), so that only one output image is produced, and that output image indicates white matter hyperintensity regions. In other words, the single-class U-Net may be the image processing program of FIG. 3B or FIG. 4 in which pooling-index transfer is not performed and only one kernel map is used in the output convolution, so that only one output image is produced, and that output image indicates white matter hyperintensity regions.

FIG. 7A is a regression analysis plot of the actual white matter lesion area (or actual white matter hyperintensity region) against the white matter lesion area (or predicted white matter hyperintensity region) predicted by the single-class-output U-Net. In FIG. 7A, the X axis is the actual white matter lesion area and the Y axis is the white matter lesion area predicted by the single-class-output U-Net, both in cm².

According to the regression analysis plot of FIG. 7A, the r value is 0.996, meaning that 99.6% of the variation in the Y-axis values (i.e., the predicted white matter lesion area) can be explained by the variation in the X-axis values (i.e., the actual white matter lesion area). According to FIG. 7A, the p value approaches 0, meaning that the relationship between the Y-axis values and the X-axis values is significant. According to FIG. 7A, the regression line equation is y = 1.074x + 0.036.
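The quantities reported for FIG. 7A (the correlation r and the least-squares regression line) can be computed as sketched below. The data here are synthetic stand-ins, not the study's measurements; only the computation is illustrated.

```python
import numpy as np

def regression_stats(x, y):
    """Pearson correlation r and least-squares line y = slope*x + intercept."""
    r = float(np.corrcoef(x, y)[0, 1])
    slope, intercept = np.polyfit(x, y, 1)
    return r, float(slope), float(intercept)

# Synthetic stand-in for (actual, predicted) lesion areas in cm^2.
actual = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
predicted = 1.07 * actual + 0.04  # a line resembling FIG. 7A's y = 1.074x + 0.036
r, slope, intercept = regression_stats(actual, predicted)
```

With exactly linear synthetic data, r comes out as 1.0 and the fitted line recovers the assumed slope and intercept; on real paired measurements the scatter lowers r below 1.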
FIG. 7B is a Bland-Altman plot. In FIG. 7B, the X axis is the mean of the actual white matter lesion area and the white matter lesion area predicted by the single-class-output U-Net, and the Y axis is the actual white matter lesion area minus the predicted white matter lesion area, both in cm².

According to FIG. 7B, the bias between the actual white matter lesion area and the white matter lesion area predicted by the single-class-output U-Net is -0.206. The value at +1.96 standard deviations (+1.96SD) of the distribution of the actual white matter lesion area minus the predicted white matter lesion area is 0.624, and the value at -1.96 standard deviations (-1.96SD) is -1.036. That is, the 95% limits of agreement of the actual white matter lesion area minus the predicted white matter lesion area are 0.624 and -1.036.
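The Bland-Altman quantities reported above (the bias and the ±1.96SD limits of agreement) can be computed as sketched below, again on synthetic paired data rather than the study's measurements.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement for paired measurements a and b."""
    diff = a - b
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))  # sample standard deviation of the differences
    return bias, bias + 1.96 * sd, bias - 1.96 * sd  # (bias, upper, lower)

# Synthetic paired areas (cm^2): actual vs. model-predicted.
actual    = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
predicted = np.array([1.2, 2.1, 3.3, 4.0, 5.4])
bias, upper, lower = bland_altman(actual, predicted)  # bias = -0.2 here
```

A bias near zero with narrow limits of agreement, as in FIG. 7B, indicates that the predicted areas track the actual areas without systematic over- or under-estimation.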
FIGS. 8A and 8B show statistical results of the single-class-output U-SegNet according to the present disclosure. Single-class output may refer to single-class semantic segmentation of an image. The single-class-output U-SegNet may be the image processing program of FIG. 3B in which only one kernel map is used in operation 307 (the output convolution), so that only one output image is produced, and that output image indicates white matter hyperintensity regions. The single-class U-SegNet may also be the image processing program of FIG. 4 in which only one kernel map is used in operation 307 (the output convolution), so that only one output image is produced, and that output image indicates white matter hyperintensity regions. In other words, the single-class U-SegNet may be the image processing program of FIG. 3B or FIG. 4 in which only one kernel map is used in the output convolution, so that only one output image is produced, and that output image indicates white matter hyperintensity regions.

FIG. 8A is a regression analysis plot of the actual white matter lesion area (or actual white matter hyperintensity region) against the white matter lesion area (or predicted white matter hyperintensity region) predicted by the single-class-output U-SegNet. In FIG. 8A, the X axis is the actual white matter lesion area and the Y axis is the white matter lesion area predicted by the single-class-output U-SegNet, both in cm².

According to the regression analysis plot of FIG. 8A, the r value is 0.995, meaning that 99.5% of the variation in the Y-axis values (i.e., the predicted white matter lesion area) can be explained by the variation in the X-axis values (i.e., the actual white matter lesion area). According to FIG. 8A, the p value approaches 0, meaning that the relationship between the Y-axis values and the X-axis values is significant. According to FIG. 8A, the regression line equation is y = 1.077x + 0.037.

FIG. 8B is a Bland-Altman plot. In FIG. 8B, the X axis is the mean of the actual white matter lesion area and the white matter lesion area predicted by the single-class-output U-SegNet, and the Y axis is the actual white matter lesion area minus the predicted white matter lesion area, both in cm².

According to FIG. 8B, the bias between the actual white matter lesion area and the white matter lesion area predicted by the single-class-output U-SegNet is -0.213. The value at +1.96 standard deviations (+1.96SD) of the distribution of the actual white matter lesion area minus the predicted white matter lesion area is 0.685, and the value at -1.96 standard deviations (-1.96SD) is -1.111. That is, the 95% limits of agreement of the actual white matter lesion area minus the predicted white matter lesion area are 0.685 and -1.111.
圖9A及圖9B展示根據本揭露之多類別輸出U-Net的統計結果。多類別輸出可指影像之多類別語義分割。多類別輸出U-Net可為圖3B中不執行操作501,且在操作307(輸出卷積)使用多個核心圖之影像處理程序,故可輸出多個輸出影像,且該多個輸出影像之每一者可分別指示腦灰質、腦白質、腦白質高信號區域、腦脊髓液、頭皮及/或頭骨以及空氣(如輸出影像201、203、205、207、209及201)。多類別U-Net可為圖4中不執行操作501、操作502、操作503,且在操作307(輸出卷積)使用多個核心圖之影像處理程序,故可輸出多個輸出影像,且該多個輸出影像之每一者可分別指示腦灰質、腦白質、腦白質高信號區域、腦脊髓液、頭皮及/或頭骨以及空氣(如輸出影像201、203、205、207、209及201)。多類別U-Net可為圖3B或圖4中不執行池化索引傳遞,且在輸出卷積使用多個核心圖之影像處理程序,故可輸出多個輸出影像,且該多個輸出影像之每一者可分別指示腦灰質、腦白質、腦白質高信號區域、腦脊髓液、頭皮及/或頭骨以及空氣(如輸出影像201、203、205、207、209及201)。9A and 9B show statistical results of the multi-class output U-Net according to the present disclosure. Multi-class output may refer to multi-class semantic segmentation of images. The multi-class output U-Net can be an image processing procedure that does not perform
圖9A是實際腦白質病灶面積(或實際腦白質高信號區域)與多類別輸出U-Net所預測腦白質病灶面積(或預測腦白質高信號區域)之回歸分析圖。圖9A中X軸為實際腦白質病灶面積,Y軸為多類別輸出U-Net所預測腦白質病灶面積,其等之單位為cm 2。 9A is a regression analysis diagram of the actual white matter lesion area (or actual white matter hyperintensity area) and the predicted white matter lesion area (or predicted white matter hyperintensity area) of the multi-class output U-Net. In FIG. 9A , the X-axis is the actual white matter lesion area, and the Y-axis is the predicted white matter lesion area by the multi-class output U-Net, and the unit is cm 2 .
根據圖9A所示回歸分析圖,r值為0.995,意即Y軸數值(即預測腦白質病灶面積)的變異中有99.5%可以被X軸數值(即實際腦白質病灶面積)的變異解釋。根據圖9A所示回歸分析圖,p值為趨近0,意即Y軸數值(即預測腦白質病灶面積)與X軸數值(即實際腦白質病灶面積)之關係為顯著。根據圖9A所示回歸分析圖,回歸線方程式為y=1.046x+0.045。According to the regression analysis graph shown in Figure 9A, the r value is 0.995, which means that 99.5% of the variation in the Y-axis value (ie, the predicted white matter lesion area) can be explained by the variation in the X-axis value (ie, the actual white matter lesion area). According to the regression analysis graph shown in FIG. 9A , the p value approaches 0, which means that the relationship between the Y-axis value (ie, the predicted white matter lesion area) and the X-axis value (ie, the actual white matter lesion area) is significant. According to the regression analysis diagram shown in FIG. 9A , the regression line equation is y=1.046x+0.045.
圖9B是布蘭德-奧特曼圖(Bland-Altman plot)。圖9B中X軸為實際腦白質病灶面積與多類別輸出U-Net所預測腦白質病灶面積的平均值,Y軸為實際腦白質病灶面積減去多類別輸出U-Net所預測腦白質病灶面積,其等之單位為cm²。FIG. 9B is a Bland-Altman plot. In FIG. 9B, the X-axis is the mean of the actual white matter lesion area and the white matter lesion area predicted by the multi-class output U-Net, and the Y-axis is the actual white matter lesion area minus the white matter lesion area predicted by the multi-class output U-Net, both in units of cm².
根據圖9B所示,實際腦白質病灶面積與多類別輸出U-Net所預測腦白質病灶面積之間的偏差值(Bias)為-0.150。根據圖9B所示,實際腦白質病灶面積減去多類別輸出U-Net所預測腦白質病灶面積之分佈的正1.96標準差(+1.96SD)為0.600,負1.96標準差(-1.96SD)為-0.901。實際腦白質病灶面積減去多類別輸出U-Net所預測腦白質病灶面積的95%一致性界限(95% limits of agreement)為0.600及-0.901。According to FIG. 9B, the bias between the actual white matter lesion area and the white matter lesion area predicted by the multi-class output U-Net is -0.150. According to FIG. 9B, the distribution of the actual white matter lesion area minus the white matter lesion area predicted by the multi-class output U-Net has a positive 1.96 standard deviation (+1.96SD) of 0.600 and a negative 1.96 standard deviation (-1.96SD) of -0.901. The 95% limits of agreement for the actual white matter lesion area minus the white matter lesion area predicted by the multi-class output U-Net are 0.600 and -0.901.
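The bias and 95% limits of agreement of a Bland-Altman analysis follow directly from the per-case differences. A minimal sketch with made-up numbers (illustrative only, not the patent's code):

```python
import numpy as np

def bland_altman(actual, predicted):
    """Bias and 95% limits of agreement of actual minus predicted.

    actual, predicted: 1-D sequences of areas in cm^2.
    Returns (bias, (lower_loa, upper_loa)).
    """
    diff = np.asarray(actual, dtype=float) - np.asarray(predicted, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# made-up toy measurements
bias, (lower, upper) = bland_altman([1.0, 2.0, 3.0], [1.1, 2.0, 3.2])
```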
圖10A及圖10B展示根據本揭露之多類別輸出U-SegNet的統計結果。多類別輸出可指影像之多類別語義分割。多類別輸出U-SegNet可為圖3B之影像處理程序,故可輸出多個輸出影像,且該多個輸出影像之每一者可分別指示腦灰質、腦白質、腦白質高信號區域、腦脊髓液、頭皮及/或頭骨以及空氣(如輸出影像201、203、205、207、209及211)。多類別輸出U-SegNet亦可為圖4之影像處理程序,故同樣可輸出上述多個輸出影像。FIGS. 10A and 10B show statistical results of the multi-class output U-SegNet according to the present disclosure. Multi-class output may refer to multi-class semantic segmentation of an image. The multi-class output U-SegNet may be the image processing procedure of FIG. 3B; it may therefore produce multiple output images, each of which may respectively indicate grey matter, white matter, white matter hyperintensity regions, cerebrospinal fluid, scalp and/or skull, and air (e.g., output images 201, 203, 205, 207, 209 and 211). The multi-class output U-SegNet may likewise be the image processing procedure of FIG. 4, producing the same multiple output images.
圖10A是實際腦白質病灶面積(或實際腦白質高信號區域)與多類別輸出U-SegNet所預測腦白質病灶面積(或預測腦白質高信號區域)之回歸分析圖。圖10A中X軸為實際腦白質病灶面積,Y軸為多類別輸出U-SegNet所預測腦白質病灶面積,其等之單位為cm²。FIG. 10A is a regression analysis diagram of the actual white matter lesion area (or actual white matter hyperintensity area) versus the white matter lesion area (or white matter hyperintensity area) predicted by the multi-class output U-SegNet. In FIG. 10A, the X-axis is the actual white matter lesion area and the Y-axis is the white matter lesion area predicted by the multi-class output U-SegNet, both in units of cm².
根據圖10A所示回歸分析圖,r值為0.996(即決定係數r²約為0.992),意即Y軸數值(即預測腦白質病灶面積)的變異中約有99.2%可以被X軸數值(即實際腦白質病灶面積)的變異解釋。根據圖10A所示回歸分析圖,p值趨近0,意即Y軸數值(即預測腦白質病灶面積)與X軸數值(即實際腦白質病灶面積)之關係為顯著。根據圖10A所示回歸分析圖,回歸線方程式為y=1.000x+0.063,趨近於y=x。According to the regression analysis shown in FIG. 10A, the r value is 0.996 (i.e., the coefficient of determination r² is about 0.992), meaning that about 99.2% of the variation in the Y-axis value (the predicted white matter lesion area) can be explained by the variation in the X-axis value (the actual white matter lesion area). According to FIG. 10A, the p value approaches 0, meaning that the relationship between the Y-axis value (the predicted white matter lesion area) and the X-axis value (the actual white matter lesion area) is significant. According to FIG. 10A, the regression line equation is y=1.000x+0.063, which is close to y=x.
圖10B是布蘭德-奧特曼圖(Bland-Altman plot)。圖10B中X軸為實際腦白質病灶面積與多類別輸出U-SegNet所預測腦白質病灶面積的平均值,Y軸為實際腦白質病灶面積減去多類別輸出U-SegNet所預測腦白質病灶面積,其等之單位為cm²。FIG. 10B is a Bland-Altman plot. In FIG. 10B, the X-axis is the mean of the actual white matter lesion area and the white matter lesion area predicted by the multi-class output U-SegNet, and the Y-axis is the actual white matter lesion area minus the white matter lesion area predicted by the multi-class output U-SegNet, both in units of cm².
根據圖10B所示,實際腦白質病灶面積與多類別輸出U-SegNet所預測腦白質病灶面積之間的偏差值(Bias)為-0.206。根據圖10B所示,實際腦白質病灶面積減去多類別輸出U-SegNet所預測腦白質病灶面積之分佈的正1.96標準差(+1.96SD)為0.624,負1.96標準差(-1.96SD)為-1.036。實際腦白質病灶面積減去多類別輸出U-SegNet所預測腦白質病灶面積的95%一致性界限(95% limits of agreement)為0.624及-1.036。According to FIG. 10B, the bias between the actual white matter lesion area and the white matter lesion area predicted by the multi-class output U-SegNet is -0.206. According to FIG. 10B, the distribution of the actual white matter lesion area minus the white matter lesion area predicted by the multi-class output U-SegNet has a positive 1.96 standard deviation (+1.96SD) of 0.624 and a negative 1.96 standard deviation (-1.96SD) of -1.036. The 95% limits of agreement for the actual white matter lesion area minus the white matter lesion area predicted by the multi-class output U-SegNet are 0.624 and -1.036.
圖11A至圖11E展示根據本揭露之一些實施例的腦部MRI 2D影像。圖11A至圖11E展示根據本揭露之一些實施例的腦部T2-FLAIR影像。圖11A可為用以輸入至影像處理程序之輸入影像。圖11A可為本揭露圖1、圖3A、圖3B或圖4中之輸入影像101。圖11B至圖11E之每一者包含被圈選為腦白質高信號之區域。圖11B至圖11E之每一者可指示基於圖11A之腦白質高信號識別結果。FIGS. 11A to 11E show brain MRI 2D images according to some embodiments of the present disclosure. FIGS. 11A to 11E show brain T2-FLAIR images according to some embodiments of the present disclosure. FIG. 11A may be an input image to be input to an image processing procedure. FIG. 11A may be the input image 101 of FIG. 1, FIG. 3A, FIG. 3B or FIG. 4 of the present disclosure. Each of FIGS. 11B to 11E includes regions circled as white matter hyperintensities. Each of FIGS. 11B to 11E may indicate a white matter hyperintensity identification result based on FIG. 11A.
圖11B至圖11E之每一者中之綠色圈選部分指示僅被腦科醫師圈選之腦白質高信號區域。圖11B至圖11E之每一者中之紅色圈選部分指示僅被影像處理程序圈選之腦白質高信號區域。圖11B至圖11E之每一者中之黃色圈選部分指示被腦科醫師及影像處理程序兩者圈選之腦白質高信號區域。The green circled portions in each of FIGS. 11B-11E indicate white matter hyperintensity regions circled only by the neurologist. The red circled portions in each of FIGS. 11B-11E indicate white matter hyperintense regions only circled by the image processing program. The yellow circled portions in each of FIGS. 11B-11E indicate areas of white matter hyperintensity circled by both the neurologist and the image processing program.
圖11B至圖11E之每一者中之綠色圈選部分可指示影像處理程序之偽陰型結果。圖11B至圖11E之每一者中之紅色圈選部分可指示影像處理程序之偽陽型結果。圖11B至圖11E之每一者中之黃色圈選部分可指示影像處理程序之真陽型結果。The green circled portion in each of FIGS. 11B-11E may indicate a false negative result of the image processing procedure. The red circled portion in each of FIGS. 11B-11E may indicate a false positive result of the image processing procedure. The yellow circled portion in each of FIGS. 11B-11E may indicate a true positive result of the image processing procedure.
圖11B所示可包含單一類別U-Net影像處理程序之識別結果。圖11B中之箭頭指示偽陽型結果(即紅色圈選部分)。圖11B中亦包含偽陰型結果(即綠色圈選部分)。FIG. 11B may include the identification result of the single-class U-Net image processing procedure. The arrow in FIG. 11B indicates a false positive result (i.e., a red circled portion). FIG. 11B also includes false negative results (i.e., green circled portions).
圖11C所示可包含單一類別U-SegNet影像處理程序之識別結果。圖11C中之兩個箭頭指示偽陽型結果(即紅色圈選部分)。FIG. 11C may include the identification result of the single-class U-SegNet image processing procedure. The two arrows in FIG. 11C indicate false positive results (i.e., red circled portions).
圖11D所示可包含多類別U-Net影像處理程序之識別結果。圖11D中之箭頭指示偽陽型結果(即紅色圈選部分)。相較於圖11B,圖11D不包含偽陰型結果(即綠色圈選部分)。FIG. 11D may include the identification result of the multi-class U-Net image processing procedure. The arrow in FIG. 11D indicates a false positive result (i.e., a red circled portion). Compared with FIG. 11B, FIG. 11D does not include false negative results (i.e., green circled portions).
圖11E所示可包含多類別U-SegNet影像處理程序(即圖3B或圖4所示之影像處理程序)之識別結果。圖11E中不包含紅色圈選部分或綠色圈選部分,即不包含偽陽型結果或偽陰型結果。FIG. 11E may include the identification result of the multi-class U-SegNet image processing procedure (i.e., the image processing procedure shown in FIG. 3B or FIG. 4). FIG. 11E includes neither red nor green circled portions; that is, FIG. 11E includes neither false positive nor false negative results.
相較於圖11B所示單一類別U-Net影像處理程序之識別結果,圖11E所示多類別U-SegNet影像處理程序(即圖3B或圖4所示之影像處理程序)之識別結果較佳。相較於圖11C所示單一類別U-SegNet影像處理程序之識別結果,圖11E之識別結果較佳。相較於圖11D所示多類別U-Net影像處理程序之識別結果,圖11E之識別結果亦較佳。Compared with the identification result of the single-class U-Net image processing procedure shown in FIG. 11B, the identification result of the multi-class U-SegNet image processing procedure shown in FIG. 11E (i.e., the image processing procedure shown in FIG. 3B or FIG. 4) is better. Compared with that of the single-class U-SegNet image processing procedure shown in FIG. 11C, the identification result of FIG. 11E is better. Compared with that of the multi-class U-Net image processing procedure shown in FIG. 11D, the identification result of FIG. 11E is also better.
圖12A至圖12E展示根據本揭露之一些實施例的腦部MRI 2D影像。圖12A至圖12E展示根據本揭露之一些實施例的腦部T2-FLAIR影像。圖12A可為用以輸入至影像處理程序之輸入影像。圖12A可為本揭露圖1、圖3B或圖4中所示的輸入影像101。圖12B至圖12E之每一者包含被圈選為腦白質高信號之區域。圖12B至圖12E之每一者可指示基於圖12A之腦白質高信號識別結果。FIGS. 12A to 12E show brain MRI 2D images according to some embodiments of the present disclosure. FIGS. 12A to 12E show brain T2-FLAIR images according to some embodiments of the present disclosure. FIG. 12A may be an input image to be input to an image processing procedure. FIG. 12A may be the input image 101 shown in FIG. 1, FIG. 3B or FIG. 4 of the present disclosure. Each of FIGS. 12B to 12E includes regions circled as white matter hyperintensities. Each of FIGS. 12B to 12E may indicate a white matter hyperintensity identification result based on FIG. 12A.
圖12B至圖12E之每一者中之綠色圈選部分指示僅被腦科醫師圈選之腦白質高信號區域。圖12B至圖12E之每一者中之紅色圈選部分指示僅被影像處理程序圈選之腦白質高信號區域。圖12B至圖12E之每一者中之黃色圈選部分指示被腦科醫師及影像處理程序兩者圈選之腦白質高信號區域。The green circled portions in each of Figures 12B-12E indicate white matter hyperintensity regions circled only by the neurologist. The red circled portions in each of Figures 12B-12E indicate white matter hyperintense regions only circled by the image processing program. The yellow circled portions in each of Figures 12B-12E indicate white matter hyperintense regions circled by both the neurologist and the image processing program.
圖12B至圖12E之每一者中之綠色圈選部分可指示影像處理程序之偽陰型結果。圖12B至圖12E之每一者中之紅色圈選部分可指示影像處理程序之偽陽型結果。圖12B至圖12E之每一者中之黃色圈選部分可指示影像處理程序之真陽型結果。The green circled portion in each of Figures 12B-12E may indicate a false negative result of the image processing procedure. The red circled portion in each of Figures 12B-12E may indicate a false positive result of the image processing procedure. The yellow circled portion in each of Figures 12B-12E may indicate a true positive result of the image processing procedure.
圖12B所示可包含單一類別U-Net影像處理程序之識別結果。圖12B中之三個箭頭指示偽陽型結果(即紅色圈選部分)。圖12B中亦包含偽陰型結果(即綠色圈選部分)。FIG. 12B may include the identification result of the single-class U-Net image processing procedure. The three arrows in FIG. 12B indicate false positive results (i.e., red circled portions). FIG. 12B also includes false negative results (i.e., green circled portions).
圖12C所示可包含單一類別U-SegNet影像處理程序之識別結果。圖12C中之三個箭頭指示偽陽型結果(即紅色圈選部分)。圖12C中亦包含偽陰型結果(即綠色圈選部分)。FIG. 12C may include the identification result of the single-class U-SegNet image processing procedure. The three arrows in FIG. 12C indicate false positive results (i.e., red circled portions). FIG. 12C also includes false negative results (i.e., green circled portions).
圖12D所示可包含多類別U-Net影像處理程序之識別結果。圖12D中之兩個箭頭指示偽陽型結果(即紅色圈選部分)。圖12D中亦包含偽陰型結果(即綠色圈選部分)。圖12D之紅色圈選部分(即偽陽型結果)少於圖12B,且圖12D之綠色圈選部分(即偽陰型結果)亦少於圖12B。FIG. 12D may include the identification result of the multi-class U-Net image processing procedure. The two arrows in FIG. 12D indicate false positive results (i.e., red circled portions). FIG. 12D also includes false negative results (i.e., green circled portions). FIG. 12D contains fewer red circled portions (i.e., fewer false positive results) than FIG. 12B, and fewer green circled portions (i.e., fewer false negative results) than FIG. 12B.
圖12E所示可包含多類別U-SegNet影像處理程序(即圖3B或圖4所示之影像處理程序)之識別結果。圖12E中不包含紅色圈選部分,即不包含偽陽型結果。圖12E之綠色圈選部分(即偽陰型結果)少於圖12C。FIG. 12E may include the identification result of the multi-class U-SegNet image processing procedure (i.e., the image processing procedure shown in FIG. 3B or FIG. 4). FIG. 12E contains no red circled portions, i.e., no false positive results. FIG. 12E contains fewer green circled portions (i.e., fewer false negative results) than FIG. 12C.
相較於圖12B所示單一類別U-Net影像處理程序之識別結果,圖12E所示多類別U-SegNet影像處理程序(即圖3B或圖4所示之影像處理程序)之識別結果較佳。相較於圖12C所示單一類別U-SegNet影像處理程序之識別結果,圖12E之識別結果較佳。相較於圖12D所示多類別U-Net影像處理程序之識別結果,圖12E之識別結果亦較佳。Compared with the identification result of the single-class U-Net image processing procedure shown in FIG. 12B, the identification result of the multi-class U-SegNet image processing procedure shown in FIG. 12E (i.e., the image processing procedure shown in FIG. 3B or FIG. 4) is better. Compared with that of the single-class U-SegNet image processing procedure shown in FIG. 12C, the identification result of FIG. 12E is better. Compared with that of the multi-class U-Net image processing procedure shown in FIG. 12D, the identification result of FIG. 12E is also better.
基於真陽型結果(True Positive;TP)、真陰型結果(True Negative;TN)、偽陽型結果(False Positive;FP)及偽陰型結果(False Negative;FN),評價機器學習結果之量化指標包含以下數種:
準確率(Accuracy) = (TP+TN)/(TP+FP+FN+TN);
精確率(Precision) = TP/(TP+FP)(即預測為陽性之結果中有多少比例預測正確);
召回率(Recall) = TP/(TP+FN)(即實際為陽性之樣本中有多少比例預測正確);
F1分數 = 2/((1/Precision)+(1/Recall))(即精確率與召回率的調和平均數);
靈敏度(Sensitivity) = TP/(TP+FN)(與召回率相同);及
特異性(Specificity) = TN/(FP+TN)。
Based on true positive (TP), true negative (TN), false positive (FP) and false negative (FN) results, quantitative metrics for evaluating machine learning results include the following:
Accuracy = (TP+TN)/(TP+FP+FN+TN);
Precision = TP/(TP+FP) (the proportion of predicted positives that are correct);
Recall = TP/(TP+FN) (the proportion of actual positives that are correctly predicted);
F1 score = 2/((1/Precision)+(1/Recall)) (the harmonic mean of precision and recall);
Sensitivity = TP/(TP+FN) (identical to recall); and
Specificity = TN/(FP+TN).
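The metrics above can be sketched in a few lines. This is illustrative only; the confusion counts below are made up:

```python
def classification_metrics(tp, tn, fp, fn):
    """Standard pixel-classification metrics from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # identical to sensitivity
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": precision,
        "recall": recall,
        "f1": 2 / ((1 / precision) + (1 / recall)),  # harmonic mean
        "sensitivity": recall,
        "specificity": tn / (fp + tn),
    }

# made-up counts, purely for illustration
m = classification_metrics(tp=90, tn=900, fp=10, fn=10)
```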
關於單一類別輸出U-Net(如圖7A、圖7B、圖11B、圖12B所使用之影像處理程序),F1分數為88.19%;靈敏度為90.61%;特異性為99.96%;準確率為99.93%。For the single-class output U-Net (the image processing procedure used in FIGS. 7A, 7B, 11B and 12B), the F1 score is 88.19%; the sensitivity is 90.61%; the specificity is 99.96%; and the accuracy is 99.93%.
關於單一類別輸出U-SegNet(如圖8A、圖8B、圖11C、圖12C所使用之影像處理程序),F1分數為87.75%;靈敏度為90.40%;特異性為99.95%;準確率為99.93%。For the single-class output U-SegNet (the image processing procedure used in FIGS. 8A, 8B, 11C and 12C), the F1 score is 87.75%; the sensitivity is 90.40%; the specificity is 99.95%; and the accuracy is 99.93%.
關於多類別輸出U-Net(如圖9A、圖9B、圖11D、圖12D所使用之影像處理程序),F1分數為88.73%;靈敏度為92.21%;特異性為99.95%;準確率為99.93%。For the multi-class output U-Net (the image processing procedure used in FIGS. 9A, 9B, 11D and 12D), the F1 score is 88.73%; the sensitivity is 92.21%; the specificity is 99.95%; and the accuracy is 99.93%.
關於多類別輸出U-SegNet(如圖10A、圖10B、圖11E、圖12E所使用之影像處理程序),F1分數為90.01%;靈敏度為94.04%;特異性為99.95%;準確率為99.94%。多類別輸出U-SegNet之F1分數高於單一類別輸出U-Net、單一類別輸出U-SegNet及多類別輸出U-Net之F1分數。For the multi-class output U-SegNet (the image processing procedure used in FIGS. 10A, 10B, 11E and 12E), the F1 score is 90.01%; the sensitivity is 94.04%; the specificity is 99.95%; and the accuracy is 99.94%. The F1 score of the multi-class output U-SegNet is higher than the F1 scores of the single-class output U-Net, the single-class output U-SegNet and the multi-class output U-Net.
圖13展示自圖2之分級程序800輸出之分級結果900的混淆矩陣(confusion matrix),即自XGBoost輸出之分級結果900的混淆矩陣。XGBoost之輸入資料包含腦灰質體積231、腦白質體積233、腦白質高信號體積235、腦脊髓液體積237及影像參數239(如影像切片厚度)。在一些實施例中,腦灰質體積231、腦白質體積233、腦白質高信號體積235及腦脊髓液體積237是根據將一系列輸入影像101輸入至如本揭露圖3B或圖4之影像處理程序300(例如多類別U-SegNet影像處理程序)而得。在一些實施例中,影像參數239為一系列輸入影像101之影像切片厚度。FIG. 13 shows a confusion matrix of the grading result 900 output from the grading procedure 800 of FIG. 2, i.e., a confusion matrix of the grading result 900 output by XGBoost. The input data of XGBoost include the grey matter volume 231, the white matter volume 233, the white matter hyperintensity volume 235, the cerebrospinal fluid volume 237 and the image parameter 239 (e.g., image slice thickness). In some embodiments, the grey matter volume 231, the white matter volume 233, the white matter hyperintensity volume 235 and the cerebrospinal fluid volume 237 are obtained by inputting a series of input images 101 into the image processing procedure 300 of FIG. 3B or FIG. 4 of the present disclosure (e.g., the multi-class U-SegNet image processing procedure). In some embodiments, the image parameter 239 is the image slice thickness of the series of input images 101.
參考圖13,混淆矩陣之X軸可為XGBoost所預測之費澤克斯(Fazekas)等級,混淆矩陣之Y軸可為實際費澤克斯等級。在實際費澤克斯等級為1時,預測費澤克斯等級為1之機率為0.8294,預測費澤克斯等級為2之機率為0.1706,預測費澤克斯等級為3之機率為0。在預測費澤克斯等級為1時,實際費澤克斯等級為1之機率為0.8294,實際費澤克斯等級為2之機率為0.1412,實際費澤克斯等級為3之機率為0。Referring to FIG. 13, the X-axis of the confusion matrix may be the Fazekas grade predicted by XGBoost, and the Y-axis may be the actual Fazekas grade. When the actual Fazekas grade is 1, the probability that the predicted Fazekas grade is 1 is 0.8294, the probability that the predicted grade is 2 is 0.1706, and the probability that the predicted grade is 3 is 0. When the predicted Fazekas grade is 1, the probability that the actual Fazekas grade is 1 is 0.8294, the probability that the actual grade is 2 is 0.1412, and the probability that the actual grade is 3 is 0.
實際費澤克斯等級為2時,預測費澤克斯等級為2之機率為0.7406;實際費澤克斯等級為3時,預測費澤克斯等級為3之機率為0.8254。When the actual Fazekas grade is 2, the probability that the predicted Fazekas grade is 2 is 0.7406; when the actual Fazekas grade is 3, the probability that the predicted Fazekas grade is 3 is 0.8254.
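The per-grade rates read from FIG. 13 are row-normalized confusion-matrix entries. The sketch below is illustrative only: the raw counts are invented (the patent reports only the normalized rates), chosen so that the row-normalized diagonal reproduces the reported values:

```python
import numpy as np

# Invented raw counts (rows: actual Fazekas grade 1-3, columns:
# predicted grade 1-3); chosen only so the row-normalized diagonal
# matches the rates reported above.
counts = np.array([
    [141,  29,  0],   # actual grade 1
    [ 20, 197, 49],   # actual grade 2
    [  0,  11, 52],   # actual grade 3
])

# Row-normalizing turns counts into P(predicted grade | actual grade).
rates = counts / counts.sum(axis=1, keepdims=True)
diagonal = np.diag(rates)  # per-grade probability of a correct prediction
```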
根據藉由如本揭露圖3B或圖4之影像處理程序300(例如多類別U-SegNet影像處理程序)而得之腦灰質體積231、腦白質體積233、腦白質高信號體積235及腦脊髓液體積237進行分級,費澤克斯等級預測正確之機率為高。Grading based on the grey matter volume 231, the white matter volume 233, the white matter hyperintensity volume 235 and the cerebrospinal fluid volume 237 obtained by the image processing procedure 300 of FIG. 3B or FIG. 4 of the present disclosure (e.g., the multi-class U-SegNet image processing procedure) yields a high probability of correctly predicting the Fazekas grade.
圖14繪示根據本揭露之一些實施例的系統1400。系統1400包含用戶端1410、資料庫1420以及伺服器端1430。用戶端1410包含處理器1411及記憶體1412,記憶體1412可儲存使處理器1411執行本揭露所載之程序或操作之指令。資料庫1420包含處理器1421及記憶體1422,記憶體1422可儲存使處理器1421執行本揭露所載之程序或操作之指令。伺服器端1430包含處理器1431及記憶體1432,記憶體1432可儲存使處理器1431執行本揭露所載之程序或操作之指令。FIG. 14 illustrates a system 1400 according to some embodiments of the present disclosure. The system 1400 includes a client 1410, a database 1420 and a server 1430. The client 1410 includes a processor 1411 and a memory 1412; the memory 1412 may store instructions that cause the processor 1411 to perform the procedures or operations described in the present disclosure. The database 1420 includes a processor 1421 and a memory 1422; the memory 1422 may store instructions that cause the processor 1421 to perform the procedures or operations described in the present disclosure. The server 1430 includes a processor 1431 and a memory 1432; the memory 1432 may store instructions that cause the processor 1431 to perform the procedures or operations described in the present disclosure.
在一些實施例中,用戶端1410可自資料庫1420存取資料。例如,用戶端1410可自資料庫1420存取影像,以供用戶(例如專科醫師)使用。用戶端1410可自資料庫1420存取腦部MRI 2D影像或腦部T2-FLAIR影像,以供用戶使用。用戶端1410可對資料庫1420傳送請求,使資料庫1420將用戶端1410所選取之一或多個影像傳送至伺服器端1430進行影像辨識及/或分級。In some embodiments, the client 1410 may access data from the database 1420. For example, the client 1410 may access images from the database 1420 for use by a user (e.g., a specialist physician). The client 1410 may access brain MRI 2D images or brain T2-FLAIR images from the database 1420 for use by the user. The client 1410 may send a request to the database 1420 so that the database 1420 transmits one or more images selected by the client 1410 to the server 1430 for image identification and/or grading.
在伺服器端1430收到資料庫1420所傳送之影像辨識請求及相關聯之一或多個影像後,伺服器端1430將該一或多個影像之每一者作為輸入影像101(如圖1、圖3B或圖4所示)而輸入至影像處理程序300(如圖1、圖3B或圖4所示)進行影像辨識。After the server 1430 receives the image identification request and the associated one or more images transmitted by the database 1420, the server 1430 inputs each of the one or more images, as an input image 101 (as shown in FIG. 1, FIG. 3B or FIG. 4), into the image processing procedure 300 (as shown in FIG. 1, FIG. 3B or FIG. 4) for image identification.
經過影像處理程序300處理後,對應於自資料庫1420收到的該一或多個影像,伺服器端1430產生一或多個輸出影像201、一或多個輸出影像203、一或多個輸出影像205、一或多個輸出影像207、一或多個輸出影像209及一或多個輸出影像211。After processing by the image processing procedure 300, and corresponding to the one or more images received from the database 1420, the server 1430 generates one or more output images 201, one or more output images 203, one or more output images 205, one or more output images 207, one or more output images 209 and one or more output images 211.
在一些實施例中,伺服器端1430可根據資料庫1420所傳送之請求而進一步對資料庫1420所傳送之一或多個影像進行分級。伺服器端1430可根據影像處理程序300輸出之一或多個輸出影像201、一或多個輸出影像203、一或多個輸出影像205以及一或多個輸出影像207計算組織之體積,以產生體積231、體積233、體積235及體積237。伺服器端1430將體積231、體積233、體積235及體積237輸入至分級程序800。在一些實施例中,除體積231、體積233、體積235及體積237之外,伺服器端1430亦將影像參數239輸入至分級程序800。影像參數239可為自資料庫1420接收之一或多個影像的影像切片厚度。分級程序800根據輸入之資料輸出分級結果900。In some embodiments, the server 1430 may further grade the one or more images transmitted by the database 1420 according to the request transmitted by the database 1420. The server 1430 may calculate tissue volumes from the one or more output images 201, 203, 205 and 207 output by the image processing procedure 300 to generate the volumes 231, 233, 235 and 237. The server 1430 inputs the volumes 231, 233, 235 and 237 into the grading procedure 800. In some embodiments, in addition to the volumes 231, 233, 235 and 237, the server 1430 also inputs the image parameter 239 into the grading procedure 800. The image parameter 239 may be the image slice thickness of the one or more images received from the database 1420. The grading procedure 800 outputs the grading result 900 according to the input data.
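A tissue volume can be estimated from a stack of binary segmentation masks by counting positive pixels and scaling by the voxel size. This is a simplified sketch under assumed pixel spacing and slice thickness; the patent does not disclose its exact volume computation:

```python
import numpy as np

def tissue_volume(masks, pixel_area_cm2, slice_thickness_cm):
    """Estimate one tissue class's volume in cm^3.

    masks: iterable of 2-D 0/1 arrays, one per slice (e.g. the WMH
           channel of the output images).
    pixel_area_cm2: assumed in-plane area of one pixel.
    slice_thickness_cm: slice thickness (cf. image parameter 239).
    """
    voxel_cm3 = pixel_area_cm2 * slice_thickness_cm
    n_voxels = sum(int(np.asarray(m).sum()) for m in masks)
    return n_voxels * voxel_cm3

# toy two-slice mask stack with 4 positive pixels in total
slices = [np.array([[0, 1], [1, 1]]), np.array([[1, 0], [0, 0]])]
v = tissue_volume(slices, pixel_area_cm2=0.01, slice_thickness_cm=0.5)
```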
伺服器端1430可根據一或多個輸出影像201、一或多個輸出影像203、一或多個輸出影像205、一或多個輸出影像207、一或多個輸出影像209及一或多個輸出影像211產生病灶邊緣座標及病灶體積。伺服器端1430可根據分級結果900產生分級報告。伺服器端1430可將病灶邊緣座標、病灶體積及分級報告傳送至資料庫1420或用戶端1410。伺服器端1430亦可將病灶邊緣座標、病灶體積及分級報告藉由資料庫1420傳送至用戶端1410。The server 1430 may generate lesion edge coordinates and lesion volumes according to the one or more output images 201, 203, 205, 207, 209 and 211. The server 1430 may generate a grading report according to the grading result 900. The server 1430 may transmit the lesion edge coordinates, the lesion volumes and the grading report to the database 1420 or the client 1410. The server 1430 may also transmit the lesion edge coordinates, the lesion volumes and the grading report to the client 1410 via the database 1420.
資料庫1420可根據病灶邊緣座標而在該一或多個相對應影像上圈選病灶,並將包含圈選病灶之影像傳送至用戶端1410以供用戶使用。用戶端1410亦可根據病灶邊緣座標而在該一或多個相對應影像上圈選病灶,以供用戶使用。病灶體積及分級報告可傳送至用戶端1410以供用戶使用。The database 1420 may circle lesions on the one or more corresponding images according to the lesion edge coordinates, and transmit the images containing the circled lesions to the client 1410 for use by the user. The client 1410 may likewise circle lesions on the one or more corresponding images according to the lesion edge coordinates for use by the user. The lesion volumes and the grading report may be transmitted to the client 1410 for use by the user.
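Lesion edge coordinates can be derived from a binary lesion mask by keeping the mask pixels that touch the background. The sketch below is a simplified stand-in (the patent does not specify its contour-extraction method): a pixel counts as an edge pixel if it is inside the lesion and at least one of its 4-neighbours is not:

```python
import numpy as np

def lesion_edge_coords(mask):
    """Return (row, col) coordinates of boundary pixels of a binary mask."""
    m = np.asarray(mask, dtype=bool)
    padded = np.pad(m, 1, constant_values=False)
    # a pixel is interior when all four 4-neighbours are also lesion
    interior = (
        padded[:-2, 1:-1] & padded[2:, 1:-1] &
        padded[1:-1, :-2] & padded[1:-1, 2:]
    )
    edge = m & ~interior
    return [(int(i), int(j)) for i, j in np.argwhere(edge)]

# toy 2x2 lesion: every lesion pixel touches the background
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
coords = lesion_edge_coords(mask)
```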
雖然已參考本揭露之具體實施例描述及說明本發明,但此等描述及說明並不限制本發明。熟習此項技術者應理解,在不脫離如由隨附申請專利範圍界定的本發明之真實精神及範圍的情況下,可作出各種改變且可取代等效物。說明可不必按比例繪製。歸因於製造製程及公差,本申請中之藝術再現與實際發明中之藝術再現之間可存在區別。可存在並未特定說明的本發明之其他實施例。應將本說明書及圖式視為說明性而非限制性的。可作出修改,以使特定情況、材料、物質之組成、方法或製程適應於本發明之目標、精神及範圍。所有此類修改意欲在此處附加之申請專利範圍之範圍內。雖然已參考按特定次序執行之特定操作描述本文中所揭示的方法,但將理解,在不脫離本發明之教示的情況下,可組合、再細分或重新定序此等操作以形成等效方法。因此,除非本文中另外特定地指示,否則操作之次序及分組並非本發明之限制。此外,在上述實施例及其類似者中詳述之效果僅為實例。因此,本申請可進一步具有其他效果。While the invention has been described and illustrated with reference to specific embodiments of the present disclosure, such description and illustration do not limit the invention. It will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the true spirit and scope of the invention as defined by the appended claims. The illustrations may not necessarily be drawn to scale. Due to manufacturing processes and tolerances, differences may exist between the artistic representation in this application and the artistic representation in the actual invention. There may be other embodiments of the invention not specifically described. The specification and drawings are to be regarded in an illustrative rather than a restrictive sense. Modifications may be made to adapt a particular situation, material, composition of matter, method or process to the object, spirit and scope of the invention. All such modifications are intended to be within the scope of the claims appended hereto. Although the methods disclosed herein have been described with reference to specific operations performed in a specific order, it is to be understood that such operations may be combined, subdivided, or reordered to form equivalent methods without departing from the teachings of the present invention . Accordingly, unless specifically indicated otherwise herein, the order and grouping of operations are not limitations of the invention. 
Furthermore, the effects detailed in the above-described embodiments and the like are merely examples. Therefore, the present application can further have other effects.
另外,圖中所繪示之邏輯流程未必需要所展示之特定次序或順序次序來實現合意結果。另外,可提供其他步驟,或可自所闡述流程消除若干步驟,且可向所闡述系統添加或自所闡述系統移除其他組件。因此,其他實施例皆在所附申請專利範圍之範疇內。Additionally, the logic flows depicted in the figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Additionally, other steps may be provided, or steps may be eliminated from the illustrated flows, and other components may be added to or removed from the illustrated systems. Accordingly, other embodiments are within the scope of the appended claims.
101:輸入影像 102:特徵圖組 103:特徵圖組 104:特徵圖組 105:特徵圖組 106:特徵圖組 107:特徵圖組 108:特徵圖組 109:特徵圖組 110:特徵圖組 111:特徵圖組 112:特徵圖組 113:特徵圖組 114:特徵圖組 115:特徵圖組 116:特徵圖組 117:特徵圖組 118:特徵圖組 119:特徵圖組 120:特徵圖組 121:特徵圖組 122:特徵圖組 123:特徵圖組 124:特徵圖組 201:輸出影像 203:輸出影像 205:輸出影像 207:輸出影像 209:輸出影像 211:輸出影像 300:影像處理程序 231:體積 233:體積 235:體積 237:體積 239:影像參數 301:操作 302:操作 303:操作 304:操作 305:操作 306:操作 307:操作 308:操作 309:操作 310:操作 311:操作 312:操作 313:操作 314:操作 315:操作 401:操作 402:操作 403:操作 501:操作 502:操作 503:操作 601:操作 602:操作 603:操作 701:操作 702:操作 703:操作 800:分級程序 900:分級結果 101: Input image 102: Feature map group 103: Feature Map Group 104: Feature Map Group 105: Feature Map Group 106: Feature Map Group 107: Feature Map Group 108: Feature map group 109: Feature Map Group 110: Feature map group 111: Feature map group 112: Feature map group 113: Feature map group 114: Feature map group 115: Feature map group 116: Feature map group 117: Feature map group 118: Feature map group 119: Feature map group 120: Feature map group 121: Feature map group 122: Feature map group 123: Feature map group 124: Feature map group 201: Output image 203: output image 205: Output image 207: Output image 209: Output image 211: output image 300: Image processing program 231: Volume 233: Volume 235: volume 237: Volume 239: Image parameters 301: Operation 302: Operation 303: Operation 304: Operation 305: Operation 306: Operation 307: Operation 308: Operation 309:Operation 310: Operation 311: Operation 312: Operation 313: Operation 314: Operation 315: Operation 401: Operation 402: Operation 403: Operation 501: Operation 502: Operation 503: Operation 601: Operation 602: Operation 603: Operation 701: Operation 702: Operation 703: Operation 800: Grading Procedures 900: Grading Results
圖1繪示根據本揭露之一些實施例的影像辨識流程圖。FIG. 1 illustrates a flowchart of image recognition according to some embodiments of the present disclosure.
圖2繪示根據本揭露之一些實施例的分級流程圖。FIG. 2 illustrates a hierarchical flow diagram according to some embodiments of the present disclosure.
圖3A繪示根據本揭露之一些實施例的影像處理程序。FIG. 3A illustrates an image processing procedure according to some embodiments of the present disclosure.
圖3B繪示根據本揭露之一些實施例的影像處理程序。FIG. 3B illustrates an image processing procedure according to some embodiments of the present disclosure.
圖4繪示根據本揭露之一些實施例的影像處理程序。FIG. 4 illustrates an image processing procedure according to some embodiments of the present disclosure.
圖5A至圖5E展示根據本揭露之一些實施例的腦部影像。5A-5E show brain images according to some embodiments of the present disclosure.
圖6A至圖6E展示根據本揭露之一些實施例的腦部影像。6A-6E show brain images according to some embodiments of the present disclosure.
圖7A及圖7B展示根據本揭露之一些實施例的統計結果。7A and 7B show statistical results according to some embodiments of the present disclosure.
圖8A及圖8B展示根據本揭露之一些實施例的統計結果。8A and 8B show statistical results according to some embodiments of the present disclosure.
圖9A及圖9B展示根據本揭露之一些實施例的統計結果。9A and 9B show statistical results according to some embodiments of the present disclosure.
圖10A及圖10B展示根據本揭露之一些實施例的統計結果。10A and 10B show statistical results according to some embodiments of the present disclosure.
圖11A至圖11E展示根據本揭露之一些實施例的腦部影像。11A-11E show brain images according to some embodiments of the present disclosure.
圖12A至圖12E展示根據本揭露之一些實施例的腦部影像。12A-12E show brain images according to some embodiments of the present disclosure.
圖13展示根據本揭露之一些實施例的統計結果。13 shows statistical results according to some embodiments of the present disclosure.
圖14繪示根據本揭露之一些實施例的系統。14 illustrates a system according to some embodiments of the present disclosure.
為更好地理解本揭露之前述態樣以及其額外態樣及實施例,應結合以上圖式參考下文實施方式。在各個圖式中,相似參考符號指示相似元件。For a better understanding of the foregoing aspects of the present disclosure, as well as additional aspects and embodiments thereof, reference should be made to the following description in conjunction with the above drawings. In the various figures, like reference characters indicate similar elements.
1410:客戶端裝置 1410: Client Device
1411:處理器 1411: Processor
1412:記憶體 1412: Memory
1420:資料庫裝置 1420: Database Device
1421:處理器 1421: Processor
1422:記憶體 1422: Memory
1430:伺服器端裝置 1430: Server side device
1431:處理器 1431: Processor
1432:記憶體 1432: Memory
Claims (14)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW109133700A TWI768483B (en) | 2020-09-28 | 2020-09-28 | Method and apparatus for identifying white matter hyperintensities |
Publications (2)
Publication Number | Publication Date |
---|---|
TW202213378A TW202213378A (en) | 2022-04-01 |
TWI768483B true TWI768483B (en) | 2022-06-21 |
Family
ID=82197118
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW109133700A TWI768483B (en) | 2020-09-28 | 2020-09-28 | Method and apparatus for identifying white matter hyperintensities |
Country Status (1)
Country | Link |
---|---|
TW (1) | TWI768483B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109300531A (en) * | 2018-08-24 | 2019-02-01 | 深圳大学 | A kind of cerebral disease method of early diagnosis and device |
US20190246904A1 (en) * | 2016-10-20 | 2019-08-15 | Jlk Inspection | Stroke diagnosis and prognosis prediction method and system |
CN110797123A (en) * | 2019-10-28 | 2020-02-14 | 大连海事大学 | Graph convolution neural network evolution method of dynamic brain structure |
TW202027028A (en) * | 2018-08-15 | 2020-07-16 | 美商超精細研究股份有限公司 | Deep learning techniques for suppressing artefacts in magnetic resonance images |
Also Published As
Publication number | Publication date |
---|---|
TW202213378A (en) | 2022-04-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Farid et al. | A novel approach of CT images feature analysis and prediction to screen for corona virus disease (COVID-19) | |
CN110236543B (en) | Alzheimer disease multi-classification diagnosis system based on deep learning | |
CN112101451B (en) | Breast cancer tissue pathological type classification method based on generation of antagonism network screening image block | |
Ortiz-Ramón et al. | Identification of the presence of ischaemic stroke lesions by means of texture analysis on brain magnetic resonance images | |
Zhang et al. | Automated semantic segmentation of red blood cells for sickle cell disease | |
US10621307B2 (en) | Image-based patient profiles | |
CN111415324B (en) | Classification and identification method for brain disease focus image space distribution characteristics based on magnetic resonance imaging | |
Alksas et al. | A novel computer-aided diagnostic system for accurate detection and grading of liver tumors | |
US10706534B2 (en) | Method and apparatus for classifying a data point in imaging data | |
JP7187244B2 (en) | Medical image processing device, medical image processing system and medical image processing program | |
US9811904B2 (en) | Method and system for determining a phenotype of a neoplasm in a human or animal body | |
WO2023241031A1 (en) | Deep learning-based three-dimensional intelligent diagnosis method and system for osteoarthritis | |
US11373309B2 (en) | Image analysis in pathology | |
CN111640095B (en) | Quantification method of cerebral micro hemorrhage and computer readable storage medium | |
WO2020097100A1 (en) | Systems and methods for semi-automatic tumor segmentation | |
WO2022247573A1 (en) | Model training method and apparatus, image processing method and apparatus, device, and storage medium | |
CN110738702B (en) | Three-dimensional ultrasonic image processing method, device, equipment and storage medium | |
Saiviroonporn et al. | Cardiothoracic ratio measurement using artificial intelligence: observer and method validation studies | |
CN113764101B (en) | Novel auxiliary chemotherapy multi-mode ultrasonic diagnosis system for breast cancer based on CNN | |
Agarwala et al. | A-UNet: Attention 3D UNet architecture for multiclass segmentation of Brain Tumor | |
Das et al. | A study on MANOVA as an effective feature reduction technique in classification of childhood medulloblastoma and its subtypes | |
CN109214451A (en) | A kind of classification method and equipment of brain exception | |
TWI768483B (en) | Method and apparatus for identifying white matter hyperintensities | |
Mirchandani et al. | Comparing the Architecture and Performance of AlexNet Faster R-CNN and YOLOv4 in the Multiclass Classification of Alzheimer Brain MRI Scans | |
TWI785390B (en) | Method and apparatus for identifying brain tissues and determining brain age |