TW202307863A - Brain extraction and coil shimming correction model establishment method for brain image capable of significantly reducing manpower and time costs with high accuracy - Google Patents


Info

Publication number
TW202307863A
Authority
TW
Taiwan
Prior art keywords
brain
image
model
training data
extraction
Prior art date
Application number
TW110129132A
Other languages
Chinese (zh)
Other versions
TWI771141B (en)
Inventor
翁駿程
莊凱翔
吳佩寰
Original Assignee
長庚大學
Priority date
Filing date
Publication date
Application filed by 長庚大學 filed Critical 長庚大學
Priority to TW110129132A priority Critical patent/TWI771141B/en
Application granted granted Critical
Publication of TWI771141B publication Critical patent/TWI771141B/en
Publication of TW202307863A publication Critical patent/TW202307863A/en


Abstract

A brain extraction and coil shimming correction model establishment method for brain images is provided. A computer device builds a generator model and a discriminator model. For each training data, the discriminator model generates a first prediction result and a second prediction result according to a three-dimensional brain image before image processing and a three-dimensional brain image after image processing of the training data, and a first brain extraction and correction image generated by the generator model. The discriminator model is adjusted based on the first prediction result and the second prediction result. The adjusted discriminator model generates a third prediction result according to the three-dimensional brain image before image processing and the first brain extraction and correction image. Finally, the adjusted generator model is obtained according to the third prediction result.

Description

Brain Extraction and Coil Shimming Correction Model Establishment Method for Brain Images

The present invention relates to a method for establishing a model, and particularly to a method for establishing a brain extraction and coil shimming correction model for brain images.

Magnetic resonance imaging (MRI) has been widely used to obtain structural and functional information about the brain. By constructing brain atlases and templates, these information-rich multimodal data, acquired at different times and from different subjects, can be fused into a common reference space for voxel-wise and regional analyses of the effects of aging, disease, and treatment. To achieve accurate image fusion, the images must be corrected for head motion, geometric distortion, and coil B1-field inhomogeneity. Because poor inhomogeneity correction and poor brain extraction (skull stripping) lead to misalignment or bias, trained personnel are required to correct the images and extract the brain region manually.

Although various methods for coil inhomogeneity correction already exist, they still require manual tuning for different image contrasts (for example, spin-echo versus gradient-echo), coil designs, and head positions. Likewise, brain extraction requires manual editing to obtain an accurate region of interest in the brain image.

Such manual adjustment and editing, however, consume considerable time and human resources.

For example, suppose an unprocessed data set contains 400 three-dimensional images, each of size 64×64×64. Taking the task of selecting the brain region of interest as an example, if manually delineating one two-dimensional slice takes about one minute, one image of 64 slices takes about 64 minutes, and processing the entire data set takes about 426.67 hours.
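The cost estimate above follows from a few lines of arithmetic (a minimal sketch; the one-minute-per-slice figure is the assumption stated in the text):

```python
# Manual annotation cost estimate for a data set of 3D brain images.
num_volumes = 400          # 3D images in the data set
slices_per_volume = 64     # a 64x64x64 volume has 64 slices along one axis
minutes_per_slice = 1      # assumed manual delineation time per 2D slice

minutes_per_volume = slices_per_volume * minutes_per_slice   # 64 minutes
total_hours = num_volumes * minutes_per_volume / 60
print(round(total_hours, 2))  # -> 426.67
```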

An object of the present invention is therefore to provide a method for establishing a brain extraction and coil shimming correction model for brain images that can automatically extract the region of interest and perform inhomogeneity correction.

Accordingly, the method of the present invention for establishing a brain extraction and coil shimming correction model for brain images is executed by a computer device that stores a plurality of training data. Each training data includes a pre-image-processing three-dimensional brain image and an inhomogeneity-corrected post-image-processing three-dimensional brain image, the post-image-processing three-dimensional brain image including a region of interest. The method comprises a step (A), a step (B), a step (C), a step (D), a step (E), a step (F), a step (G), a step (H), and a step (I).

In step (A), the computer device establishes a generator model for producing a brain-extracted and coil-shimming-corrected image. The generator model includes a generator input port, at least one generator encoder, a convolutional layer, at least one generator decoder corresponding to the at least one encoder, and a generator output port.

In step (B), the computer device establishes a discriminator model for predicting the authenticity of two input images. The discriminator model includes two discriminator input ports, at least one discriminator decoder, and a discriminator output port.

In step (C), for each training data, the computer device inputs the pre-image-processing three-dimensional brain image of the training data into the generator model to generate a first brain extraction and correction image.

In step (D), for each training data, the computer device inputs the pre-image-processing three-dimensional brain image and the post-image-processing three-dimensional brain image of the training data into the discriminator model to generate a first prediction result indicating whether this image pair is real.

In step (E), the computer device adjusts the discriminator model according to the first prediction results corresponding to the training data.

In step (F), for each training data, the computer device inputs the pre-image-processing three-dimensional brain image of the training data and the corresponding first brain extraction and correction image into the discriminator model to generate a second prediction result indicating whether this image pair is real.

In step (G), the computer device adjusts the discriminator model according to the second prediction results corresponding to the training data.

In step (H), for each training data, the computer device inputs the pre-image-processing three-dimensional brain image of the training data and the corresponding first brain extraction and correction image into the adjusted discriminator model to generate a third prediction result indicating whether this image pair is real.

In step (I), the computer device adjusts the generator model according to the third prediction results corresponding to the training data, the post-image-processing three-dimensional brain images of the training data, and the corresponding first brain extraction and correction images, to obtain an adjusted generator model.

The effect of the present invention is that the adjusted generator model can take any pre-image-processing three-dimensional brain image and output an inhomogeneity-corrected image containing the brain region of interest, greatly reducing the manpower and time cost of trained personnel while maintaining accuracy.

Before the present invention is described in detail, it should be noted that in the following description, similar elements are denoted by the same reference numerals.

Referring to Fig. 1, a computer device 1 for implementing an embodiment of the method of the present invention for establishing a brain extraction and coil shimming correction model for brain images is illustrated. The computer device 1 includes a storage unit 11 and a processing unit 12 electrically connected to the storage unit 11. In this embodiment, the computer device 1 is implemented as, for example, a personal computer, a server, or a cloud host, but is not limited thereto.

The storage unit 11 stores a plurality of training data and a plurality of validation data. Each training or validation data includes a pre-image-processing three-dimensional brain image and an inhomogeneity-corrected post-image-processing three-dimensional brain image, the post-image-processing three-dimensional brain image including a region of interest.

It is worth noting that the pre-image-processing and post-image-processing three-dimensional brain images of the training and validation data are magnetic resonance imaging (MRI) images, and the training or validation data may all be rat brain images, all mouse brain images, or a mixture of both, but are not limited thereto.

Referring to Figs. 1 and 2, the steps of this embodiment of the method of the present invention for establishing a brain extraction and coil shimming correction model for brain images are described below.

In step 201, the processing unit 12 establishes a generator model for producing a brain-extracted and coil-shimming-corrected image. The generator model includes a generator input port, at least one generator encoder, a convolutional layer, at least one generator decoder corresponding to the at least one encoder, and a generator output port.

In step 202, the processing unit 12 establishes a discriminator model for predicting the authenticity of two input images. The discriminator model includes two discriminator input ports and at least one discriminator decoder.

It is worth noting that in this embodiment, Keras is used to build the generator model and the discriminator model as a generative adversarial network (GAN) of the pix2pix architecture. The generator model is a u-net structure including 6 generator encoders and 6 generator decoders, with a three-dimensional convolution kernel size of 4×4×4 and generator input and output port sizes of 64×64×64. The discriminator model is a patchGAN structure including 6 discriminator decoders, with a convolution kernel size of 4×4×4; its input port size is 64×64×64 and its output port size is 1×1×1, but the invention is not limited thereto. The prediction result of the discriminator model lies between zero and one and represents the probability that both input images are 'real'. A prediction of one indicates a judgment of 'real', i.e., both input images come from the training data or the validation data; conversely, a prediction of zero indicates a judgment of 'fake', i.e., one of the input images does not come from the training data or the validation data.
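The choice of 6 encoder stages matches the 64×64×64 input size: a sketch of the downsampling path shows that six halvings reduce the volume to a single voxel (the stride-2 downsampling per stage is an assumption typical of pix2pix encoders, not stated explicitly in the text):

```python
# Spatial size of a 64x64x64 input after each of the 6 encoder stages of
# the u-net generator described above, assuming each 4x4x4 convolution
# uses stride 2 with 'same' padding (a common pix2pix convention).
size = 64
encoder_sizes = []
for stage in range(6):
    size //= 2                 # stride-2 convolution halves each axis
    encoder_sizes.append(size)

print(encoder_sizes)  # -> [32, 16, 8, 4, 2, 1]
```

The final 1×1×1 bottleneck is consistent with the discriminator output port size of 1×1×1 reported in the embodiment.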

It should be noted in particular that existing adversarial networks are model architectures based on two-dimensional images. Although a three-dimensional image can be decomposed into several two-dimensional images that are processed separately, this treats every two-dimensional slice as an independent image, so two adjacent slices of the original three-dimensional image may be visually discontinuous. In this embodiment, the convolution kernel is three-dimensional, which differs from decomposing the three-dimensional image into two-dimensional slices and therefore alleviates the discontinuity problem between adjacent slices.

In step 203, for each training data, the processing unit 12 inputs the pre-image-processing three-dimensional brain image of the training data into the generator model to generate a first brain extraction and correction image.

In step 204, for each training data, the processing unit 12 inputs the pre-image-processing three-dimensional brain image and the post-image-processing three-dimensional brain image of the training data into the discriminator model to generate a first prediction result indicating whether this image pair is real.

In step 205, the processing unit 12 adjusts the discriminator model according to the first prediction results corresponding to the training data.

It should be noted in particular that in this embodiment, since the images input to the discriminator model in step 204 all come from the training data, step 205 aims to strengthen the discriminator model's ability to recognize 'real'. The first prediction results are therefore expected to be one, so the processing unit 12 uses a first loss function to compute the distance between the first prediction results and one, and adjusts the discriminator model according to that distance. The first loss function is, for example, binary cross-entropy, but is not limited thereto.
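The binary cross-entropy named as the first loss function can be sketched as follows; the sample predictions are made up for illustration, and the loss is small exactly when the discriminator's outputs on real pairs approach the expected value of one:

```python
import numpy as np

# Binary cross-entropy against a scalar target label, as used to score the
# discriminator's predictions on genuine (pre-processing, post-processing)
# image pairs, where the target is the 'real' label 1.
def bce(preds, target):
    preds = np.clip(preds, 1e-7, 1 - 1e-7)   # avoid log(0)
    return -np.mean(target * np.log(preds) + (1 - target) * np.log(1 - preds))

preds = np.array([0.9, 0.8, 0.95])   # example discriminator outputs on real pairs
loss_real = bce(preds, 1.0)          # distance from 'one': small here
loss_if_fooled = bce(preds, 0.0)     # same outputs scored against 'zero': large
assert loss_real < loss_if_fooled
print(round(float(loss_real), 4))    # -> 0.1266
```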

In step 206, for each training data, the processing unit 12 inputs the pre-image-processing three-dimensional brain image of the training data and the corresponding first brain extraction and correction image into the discriminator model to generate a second prediction result indicating whether this image pair is real.

In step 207, the processing unit 12 adjusts the discriminator model according to the second prediction results corresponding to the training data.

It should be noted in particular that in this embodiment, since the image pairs input to the discriminator model in step 206 do not all come from the training data or the validation data, step 207 aims to strengthen the discriminator model's ability to recognize 'fake'. The second prediction results are therefore expected to be zero, so the processing unit 12 uses the first loss function to compute the distance between the second prediction results and zero, and adjusts the discriminator model according to that distance.

In step 208, for each training data, the processing unit 12 inputs the pre-image-processing three-dimensional brain image of the training data and the corresponding first brain extraction and correction image into the adjusted discriminator model to generate a third prediction result indicating whether this image pair is real.

It is worth noting that in this embodiment, in steps 204, 206, and 208, the discriminator model first merges the two input images and then produces the first prediction result, the second prediction result, and the third prediction result via the at least one discriminator decoder, but is not limited thereto.

In step 209, the processing unit 12 adjusts the generator model according to the third prediction results corresponding to the training data, the post-image-processing three-dimensional brain images of the training data, and the corresponding first brain extraction and correction images, to obtain an adjusted generator model.

It should be noted in particular that in this embodiment, step 209 aims to make the first brain extraction and correction images produced by the generator model approach the images of the training data so that the discriminator model predicts incorrectly. The third prediction results are therefore expected to be one, so the processing unit 12 uses the first loss function to compute the distance between the third prediction results and one, and uses a second loss function to compute the distance between the post-image-processing three-dimensional brain images of the training data and the corresponding first brain extraction and correction images, and adjusts the generator model according to these two distances. The second loss function is, for example, the mean absolute error (MAE), but is not limited thereto.
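The two generator-side distances described above can be sketched together as follows. The equal weighting of the two terms and the toy volumes are assumptions for illustration only (pix2pix implementations conventionally scale the MAE term by a factor such as 100, which the text does not specify):

```python
import numpy as np

# Generator objective sketched from step 209: binary cross-entropy of the
# third prediction results against 'one', plus MAE between the
# post-processing target volume and the generated volume.
def bce(preds, target):
    preds = np.clip(preds, 1e-7, 1 - 1e-7)
    return -np.mean(target * np.log(preds) + (1 - target) * np.log(1 - preds))

rng = np.random.default_rng(0)
target_vol = rng.random((4, 4, 4))      # post-processing brain volume (toy)
generated_vol = target_vol + 0.05       # first extraction/correction image (toy)
third_preds = np.array([0.6, 0.7])      # adjusted discriminator outputs (toy)

adv_loss = bce(third_preds, 1.0)                        # distance from 'one'
mae_loss = np.mean(np.abs(target_vol - generated_vol))  # exactly 0.05 here
total = adv_loss + mae_loss                             # combined objective
assert np.isclose(mae_loss, 0.05)
print(round(float(total), 4))
```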

It should also be noted that when the generator model is being adjusted, the weights of the discriminator model are fixed; conversely, when the discriminator model is being adjusted, the weights of the generator model are fixed.
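This alternating freeze can be sketched with toy scalar "models"; the real models are the 3D u-net and patchGAN networks, and the fixed gradient steps below are placeholders rather than the actual losses:

```python
# Alternating-update scheme: each phase trains one model while the other
# model's weights are frozen and therefore unchanged by update calls.
class ToyModel:
    def __init__(self, w):
        self.w = w
        self.trainable = True

    def step(self, grad, lr=0.1):
        if self.trainable:        # frozen models ignore the update
            self.w -= lr * grad

gen, disc = ToyModel(1.0), ToyModel(1.0)

for _ in range(3):
    # Phase 1: adjust the discriminator with the generator frozen.
    gen.trainable, disc.trainable = False, True
    disc.step(grad=0.5)
    gen.step(grad=0.5)            # no effect: generator is frozen
    # Phase 2: adjust the generator with the discriminator frozen.
    gen.trainable, disc.trainable = True, False
    gen.step(grad=0.5)
    disc.step(grad=0.5)           # no effect: discriminator is frozen

print(gen.w, disc.w)              # each moved by exactly 3 * 0.05
```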

In step 210, the processing unit 12 determines whether K+1 adjusted generator models have been obtained, where K ≥ 1. When it is determined that K+1 adjusted generator models have been obtained, the flow proceeds to step 211; when it is determined that K+1 adjusted generator models have not yet been obtained, the flow repeats steps 203–209.

In step 211, for each validation data and each adjusted generator model, the processing unit 12 inputs the pre-image-processing three-dimensional brain image of the validation data into the adjusted generator model to generate a second brain extraction and correction image.

In step 212, for each validation data and each adjusted generator model, the processing unit 12 computes a similarity index from the post-image-processing three-dimensional brain image of the validation data and the corresponding second brain extraction and correction image.

It is worth noting that the similarity index is, for example, the cosine angle distance (CAD), the Euclidean distance (L2 norm), the mean square error, the peak signal-to-noise ratio, or the mean structural similarity (MSSIM). Since the Euclidean distance, mean square error, and peak signal-to-noise ratio are simple transformations of one another, only the Euclidean distance is exemplified below.

If the similarity index is the cosine angle distance, the index CAD is expressed by the following formula:

$$\mathrm{CAD}(A,B)=\frac{\sum_{i=1}^{N}a_i b_i}{\sqrt{\sum_{i=1}^{N}a_i^{2}}\,\sqrt{\sum_{i=1}^{N}b_i^{2}}}$$

where A is the post-image-processing three-dimensional brain image of the validation data, B is the corresponding second brain extraction and correction image, a₁, a₂, …, a_N are the N voxels of the post-image-processing three-dimensional brain image, b₁, b₂, …, b_N are the N voxels of the corresponding second brain extraction and correction image, and the similarity index lies between -1 and 1.

If the similarity index is the mean structural similarity, the index MSSIM is expressed by the following formula:

$$\mathrm{MSSIM}(\tilde{A},\tilde{B})=\frac{1}{M}\sum_{j=1}^{M}\mathrm{SSIM}(A_j,B_j),\qquad \mathrm{SSIM}(A,B)=l(A,B)^{\alpha}\,c(A,B)^{\beta}\,s(A,B)^{\gamma}$$

where Ã is the post-image-processing three-dimensional brain image of the validation data, B̃ is the corresponding second brain extraction and correction image, A_j and B_j are local windows of Ã and B̃ respectively, SSIM combines luminance l, contrast c, and structure s computed from the window statistics, w_i is a circularly symmetric Gaussian weighting function used in those window statistics, α, β, γ ≥ 1 are adjustable exponent weights, the stabilizing constants in l, c, and s are scalars on the order of 10⁻³, M is the total number of windows in the image, and the similarity index lies between -1 and 1.

It is worth noting that in this embodiment, α, β, and γ are 1, and the stabilizing constants are 10⁻⁴, 9×10⁻⁴, and 4.5×10⁻⁴ respectively, but the invention is not limited thereto.
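A single-window SSIM with the embodiment's constants can be sketched as follows. The standard luminance/contrast/structure definitions and the uniform (rather than Gaussian w_i) window statistics are simplifying assumptions, and the toy windows are made up:

```python
import numpy as np

# Single-window SSIM = l * c * s with alpha = beta = gamma = 1 and the
# embodiment's stabilizing constants C1 = 1e-4, C2 = 9e-4, C3 = 4.5e-4.
C1, C2, C3 = 1e-4, 9e-4, 4.5e-4

def ssim_window(a, b):
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    l = (2 * mu_a * mu_b + C1) / (mu_a**2 + mu_b**2 + C1)    # luminance
    c = (2 * np.sqrt(va * vb) + C2) / (va + vb + C2)         # contrast
    s = (cov + C3) / (np.sqrt(va * vb) + C3)                 # structure
    return float(l * c * s)

rng = np.random.default_rng(2)
win = rng.random((8, 8, 8))
assert np.isclose(ssim_window(win, win), 1.0)   # identical windows -> 1
print(round(ssim_window(win, win + 0.1), 3))    # brightness shift -> below 1
```

Averaging this quantity over all M windows of the two volumes yields the MSSIM index of the formula above.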

If the similarity index is the Euclidean distance, the index L2 is expressed by the following formula:

$$L2(A,B)=\sqrt{\sum_{i=1}^{N}\left(a_i-b_i\right)^{2}}$$

where A is the post-image-processing three-dimensional brain image of the validation data, B is the corresponding second brain extraction and correction image, a₁, a₂, …, a_N are the N voxels of the post-image-processing three-dimensional brain image, b₁, b₂, …, b_N are the N voxels of the corresponding second brain extraction and correction image, and the minimum value of the similarity index is 0.

In step 213, the processing unit 12 selects a best adjusted generator model from the K+1 adjusted generator models according to the similarity indices corresponding to the K+1 adjusted generator models.

It is worth noting that if the similarity index is the cosine angle distance or the mean structural similarity, the best adjusted generator model is the one whose average similarity index is the highest; if the similarity index is the Euclidean distance, the best adjusted generator model is the one whose average similarity index is the lowest.
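The selection rule reduces to an argmax or argmin over the per-model average scores; the scores below are made-up examples:

```python
# Step-213 selection sketch: highest mean index wins for CAD/MSSIM,
# lowest mean index wins for the Euclidean (L2) distance.
mean_cad_per_model = [0.91, 0.95, 0.88]   # one mean per adjusted model (toy)
mean_l2_per_model = [3.2, 2.1, 4.0]       # one mean per adjusted model (toy)

best_by_cad = max(range(3), key=lambda k: mean_cad_per_model[k])
best_by_l2 = min(range(3), key=lambda k: mean_l2_per_model[k])
print(best_by_cad, best_by_l2)  # -> 1 1
```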

It should also be noted that in other embodiments, only steps 201 to 209 may be performed, and the adjusted generator model may be used directly as the brain extraction and coil shimming correction model for brain images.

In summary, in the method of the present invention for establishing a brain extraction and coil shimming correction model for brain images, the processing unit 12 establishes the generator model and the discriminator model. For each training data, the discriminator model generates the first prediction result and the second prediction result according to the pre-image-processing three-dimensional brain image, the post-image-processing three-dimensional brain image, and the first brain extraction and correction image produced by the generator model, and the discriminator model is adjusted according to the first and second prediction results. The adjusted discriminator model then generates the third prediction result according to the pre-image-processing three-dimensional brain image and the first brain extraction and correction image, and the generator model is adjusted according to the third prediction result to obtain the adjusted generator model. Repeating steps 203 to 209 yields a plurality of adjusted generator models, from which the best adjusted generator model is selected according to the similarity indices corresponding to the validation data. The best adjusted generator model can take any pre-image-processing three-dimensional brain image and output an inhomogeneity-corrected image containing the brain region of interest, greatly reducing the manpower and time cost of trained personnel while maintaining accuracy, so the object of the present invention is indeed achieved.

The foregoing is merely illustrative of embodiments of the present invention and shall not limit the scope of its implementation; all simple equivalent changes and modifications made according to the claims and the content of the patent specification of the present invention remain within the scope covered by the patent of the present invention.

1: computer device
11: storage unit
12: processing unit
201–213: steps

Other features and effects of the present invention will become apparent from the embodiments described with reference to the drawings, in which:
Fig. 1 is a block diagram illustrating a computer device of an embodiment for implementing the method of the present invention for establishing a brain extraction and coil shimming correction model for brain images; and
Fig. 2 is a flowchart illustrating this embodiment of the method of the present invention for establishing a brain extraction and coil shimming correction model for brain images.

201–213: steps

Claims (9)

1. A method for establishing a brain extraction and coil shimming correction model for brain images, executed by a computer device, the computer device storing a plurality of training data, each training datum including a pre-processing three-dimensional brain image and a non-uniformity-corrected post-processing three-dimensional brain image, the post-processing three-dimensional brain image including a region of interest, the method comprising the following steps:
(A) building a generator model for producing a brain-extracted and coil-shimming-corrected image, the generator model including a generator input port, at least one generator encoder, a convolutional layer, at least one generator decoder respectively corresponding to the at least one encoder, and a generator output port;
(B) building a discriminator model for predicting whether two input images are real, the discriminator model including two discriminator input ports, at least one discriminator decoder, and a discriminator output port;
(C) for each training datum, inputting the pre-processing three-dimensional brain image of the training datum into the generator model to produce a first brain-extracted and corrected image;
(D) for each training datum, inputting the pre-processing three-dimensional brain image and the post-processing three-dimensional brain image of the training datum into the discriminator model to produce a first prediction result indicating whether the pre-processing three-dimensional brain image and the post-processing three-dimensional brain image of the training datum are real;
(E) adjusting the discriminator model according to the first prediction results corresponding to the training data;
(F) for each training datum, inputting the pre-processing three-dimensional brain image of the training datum and the first brain-extracted and corrected image corresponding to the training datum into the discriminator model to produce a second prediction result indicating whether the pre-processing three-dimensional brain image of the training datum and the corresponding first brain-extracted and corrected image are real;
(G) adjusting the discriminator model according to the second prediction results corresponding to the training data;
(H) for each training datum, inputting the pre-processing three-dimensional brain image of the training datum and the corresponding first brain-extracted and corrected image into the adjusted discriminator model to produce a third prediction result indicating whether the pre-processing three-dimensional brain image of the training datum and the corresponding first brain-extracted and corrected image are real; and
(I) adjusting the generator model according to the third prediction results corresponding to the training data, the post-processing three-dimensional brain images of the training data, and the first brain-extracted and corrected images corresponding to the training data, so as to obtain an adjusted generator model.

2. The method for establishing a brain extraction and coil shimming correction model for brain images according to claim 1, wherein, in step (D), for each training datum, the discriminator model merges the pre-processing three-dimensional brain image and the post-processing three-dimensional brain image of the training datum via the discriminator input ports, and then produces the first prediction result via the at least one discriminator decoder.

3. The method for establishing a brain extraction and coil shimming correction model for brain images according to claim 1, wherein the computer device further stores a plurality of validation data, each validation datum including a pre-processing three-dimensional brain image and a non-uniformity-corrected post-processing three-dimensional brain image, the post-processing three-dimensional brain image including a region of interest, the method further comprising, after step (I), the following steps:
(J) repeating steps (C) to (I) K times to obtain K+1 adjusted generator models, where K ≥ 1;
(K) for each validation datum and each adjusted generator model, inputting the pre-processing three-dimensional brain image of the validation datum into the adjusted generator model to produce a second brain-extracted and corrected image;
(L) for each validation datum and each adjusted generator model, computing a similarity index according to the post-processing three-dimensional brain image of the validation datum and the corresponding second brain-extracted and corrected image; and
(M) selecting an optimal adjusted generator model from the K+1 adjusted generator models according to the similarity indexes corresponding to the K+1 adjusted generator models.
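Steps (C) through (I) describe one round of conditional-GAN-style training: the discriminator is updated first on real (pre-processing, post-processing) pairs and then on fake (pre-processing, generated) pairs, after which the generator is updated against the freshly adjusted discriminator. The sketch below is a deliberately toy illustration of that schedule only; the `generator` and `discriminator` stubs, their parameters, and the update rules are illustrative assumptions, not the claimed 3-D CNN architectures, and updates are applied per pair for brevity whereas the claims batch each adjustment over all training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: each pair is (pre-processing volume, corrected target).
training_pairs = [(rng.random((4, 4, 4)), rng.random((4, 4, 4)))
                  for _ in range(3)]

def generator(pre_volume, gain):
    """Stub generator: a single learned voxel-wise gain (step (A) stand-in)."""
    return gain * pre_volume

def discriminator(image_a, image_b, threshold):
    """Stub discriminator: merges its two inputs channel-wise (as claim 2
    describes for the two input ports) and emits a real/fake score."""
    pair = np.stack([image_a, image_b], axis=0)  # two-port merge
    return float(pair.mean() > threshold)        # 1.0 = judged real

gain, threshold = 1.0, 0.75
for pre, post in training_pairs:
    fake = generator(pre, gain)                   # step (C)
    p1 = discriminator(pre, post, threshold)      # step (D): real pair
    threshold -= 0.01 * (1.0 - p1)                # step (E): reward "real"
    p2 = discriminator(pre, fake, threshold)      # step (F): fake pair
    threshold += 0.01 * p2                        # step (G): punish "fake"
    p3 = discriminator(pre, fake, threshold)      # step (H): re-score fake
    # Step (I): nudge the generator both to fool the adjusted discriminator
    # and to match the corrected target (placeholder for a GAN + L1 update).
    gain += 0.01 * (1.0 - p3) - 0.01 * float((fake - post).mean())
```

In a real implementation the stubs would be the encoder–decoder generator and the discriminator of steps (A) and (B), and each "nudge" would be a gradient step on the corresponding loss.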
4. The method for establishing a brain extraction and coil shimming correction model for brain images according to claim 3, wherein, in step (L), the similarity index CAD is expressed by the following formula:

Figure 03_image003
Figure 03_image005

wherein A is the post-processing three-dimensional brain image of the validation datum, B is the second brain-extracted and corrected image corresponding to the validation datum, a_1, a_2, …, a_N are the N voxels of the post-processing three-dimensional brain image of the validation datum, and b_1, b_2, …, b_N are the N voxels of the second brain-extracted and corrected image corresponding to the validation datum.
5. The method for establishing a brain extraction and coil shimming correction model for brain images according to claim 3, wherein, in step (L), the similarity index MSSIM is expressed by the following formula:

MSSIM(X, Y) = (1/M) Σ_{j=1}^{M} SSIM(x_j, y_j),  SSIM(A, B) = l(A, B)^α · c(A, B)^β · s(A, B)^γ

wherein X is the post-processing three-dimensional brain image of the validation datum, Y is the second brain-extracted and corrected image corresponding to the validation datum, A is a local window of X, B is a local window of Y, x_j and y_j are the window contents located within X and Y respectively, SSIM comprises a luminance term l, a contrast term c, and a structure term s, w_i is a circularly symmetric Gaussian weighting function with Σ_i w_i = 1, α, β, and γ are adjustable weights, C_1 to C_3 are scalars, and M is the total number of windows in the image.
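The MSSIM formula itself is published only as embedded images, so the sketch below assumes the standard mean-SSIM form that the claim's wording describes (luminance, contrast, and structure terms averaged over local windows). It simplifies the claimed circularly symmetric Gaussian weighting w_i to uniform, non-overlapping cubic windows, and the constants `c1`/`c2` are illustrative choices, not values from the patent.

```python
import numpy as np

def ssim(a, b, c1=1e-4, c2=9e-4, alpha=1.0, beta=1.0, gamma=1.0):
    """SSIM of two equally sized windows: luminance^alpha *
    contrast^beta * structure^gamma, with C3 = C2 / 2."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    c3 = c2 / 2.0
    lum = (2 * mu_a * mu_b + c1) / (mu_a**2 + mu_b**2 + c1)
    con = (2 * np.sqrt(var_a * var_b) + c2) / (var_a + var_b + c2)
    st = (cov + c3) / (np.sqrt(var_a * var_b) + c3)
    return (lum ** alpha) * (con ** beta) * (st ** gamma)

def mssim(x, y, win=4):
    """Mean SSIM over non-overlapping cubic windows of two volumes
    (uniform weighting in place of the claimed Gaussian w_i)."""
    scores = []
    for i in range(0, x.shape[0] - win + 1, win):
        for j in range(0, x.shape[1] - win + 1, win):
            for k in range(0, x.shape[2] - win + 1, win):
                a = x[i:i + win, j:j + win, k:k + win]
                b = y[i:i + win, j:j + win, k:k + win]
                scores.append(ssim(a, b))
    return float(np.mean(scores))
```

Identical volumes score 1.0, and unrelated volumes score near 0, which is consistent with claim 6 selecting the model with the highest average index.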
6. The method for establishing a brain extraction and coil shimming correction model for brain images according to claim 4 or 5, wherein, in step (M), the optimal adjusted generator model is the adjusted generator model whose average similarity index is relatively the highest.

7. The method for establishing a brain extraction and coil shimming correction model for brain images according to claim 3, wherein, in step (L), the similarity index L2 is expressed by the following formula:

L2(A, B) = √( Σ_{i=1}^{N} (a_i − b_i)² )

wherein A is the post-processing three-dimensional brain image of the validation datum, B is the second brain-extracted and corrected image corresponding to the validation datum, a_1, a_2, …, a_N are the N voxels of the post-processing three-dimensional brain image of the validation datum, and b_1, b_2, …, b_N are the N voxels of the second brain-extracted and corrected image corresponding to the validation datum.

8. The method for establishing a brain extraction and coil shimming correction model for brain images according to claim 7, wherein, in step (M), the optimal adjusted generator model is the adjusted generator model whose average similarity index is relatively the lowest.

9. The method for establishing a brain extraction and coil shimming correction model for brain images according to claim 1, wherein, in step (A), the convolution kernels of the generator model are three-dimensional, and, in step (B), the convolution kernels of the discriminator model are three-dimensional.
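Reading claim 7's image-only formula as the ordinary Euclidean distance over voxels (an assumption consistent with claim 8 preferring the lowest average), steps (L) and (M) for the L2 index reduce to: compute one distance per validation volume, average per candidate model, and keep the model with the lowest mean. The helper names below are illustrative.

```python
import numpy as np

def l2_index(a, b):
    """Euclidean (L2) distance between two volumes, treated as a
    dissimilarity index: lower means the generated volume is closer
    to the corrected reference."""
    return float(np.sqrt(((a.ravel() - b.ravel()) ** 2).sum()))

def select_best_model(references, outputs_per_model):
    """Step (M) for the L2 index: return the index of the candidate
    model whose mean L2 over the validation set is lowest."""
    means = [np.mean([l2_index(r, o) for r, o in zip(references, outs)])
             for outs in outputs_per_model]
    return int(np.argmin(means))
```

For the highest-is-best indexes of claims 4 and 5, the same selection would use `np.argmax` instead.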
TW110129132A 2021-08-06 2021-08-06 Brain Extraction and Coil Shim Correction Model Establishment in Brain Imaging TWI771141B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW110129132A TWI771141B (en) 2021-08-06 2021-08-06 Brain Extraction and Coil Shim Correction Model Establishment in Brain Imaging


Publications (2)

Publication Number Publication Date
TWI771141B TWI771141B (en) 2022-07-11
TW202307863A true TW202307863A (en) 2023-02-16

Family

ID=83439462


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8942945B2 (en) * 2011-04-19 2015-01-27 General Electric Company System and method for prospective correction of high order eddy-current-induced distortion in diffusion-weighted echo planar imaging
AU2019268404A1 (en) * 2018-05-15 2020-12-03 Monash University Method and system of motion correction for magnetic resonance imaging
EP3903117A2 (en) * 2018-12-28 2021-11-03 Hyperfine, Inc. Correcting for hysteresis in magnetic resonance imaging
US20210156945A1 (en) * 2019-11-27 2021-05-27 Siemens Healthcare Gmbh Motion correction and motion reduction during dedicated magnetic resonance imaging


Similar Documents

Publication Publication Date Title
CN111539947B (en) Image detection method, related model training method, related device and equipment
CN111047629B (en) Multi-modal image registration method and device, electronic equipment and storage medium
US11633146B2 (en) Automated co-registration of prostate MRI data
CN112313668A (en) System and method for magnetic resonance imaging normalization using deep learning
CN110889852A (en) Liver segmentation method based on residual error-attention deep neural network
Zheng et al. No-reference quality assessment for screen content images based on hybrid region features fusion
CN109949349B (en) Multi-mode three-dimensional image registration and fusion display method
Zhang et al. Fine-grained quality assessment for compressed images
US11978146B2 (en) Apparatus and method for reconstructing three-dimensional image
KR20190139781A (en) CNN-based high resolution image generating apparatus for minimizing data acquisition time and method therefor
CN104700440B (en) Magnetic resonant part K spatial image reconstruction method
CN108550146A (en) A kind of image quality evaluating method based on ROI
CN114219719A (en) CNN medical CT image denoising method based on dual attention and multi-scale features
CN110827232A (en) Cross-modal MRI (magnetic resonance imaging) synthesis method based on morphological feature GAN (gain)
US10964074B2 (en) System for harmonizing medical image presentation
Song et al. Fs-ncsr: Increasing diversity of the super-resolution space via frequency separation and noise-conditioned normalizing flow
CN112785540B (en) Diffusion weighted image generation system and method
CN107492085B (en) Stereo image quality evaluation method based on dual-tree complex wavelet transform
WO2023216720A1 (en) Image reconstruction model training method and apparatus, device, medium, and program product
TWI771141B (en) Brain Extraction and Coil Shim Correction Model Establishment in Brain Imaging
Golestaneh et al. Reduced-reference quality assessment based on the entropy of DNT coefficients of locally weighted gradients
Qi et al. Blind image quality assessment for MRI with a deep three-dimensional content-adaptive hyper-network
CN116228520A (en) Image compressed sensing reconstruction method and system based on transform generation countermeasure network
CN113838161B (en) Sparse projection reconstruction method based on graph learning
Neubert et al. Automated intervertebral disc segmentation using probabilistic shape estimation and active shape models