TW202307863A - Brain extraction and coil shimming correction model establishment method for brain image capable of significantly reducing manpower and time costs with high accuracy - Google Patents
- Publication number
- TW202307863A TW110129132A
- Authority
- TW
- Taiwan
- Prior art keywords
- brain
- image
- model
- training data
- extraction
- Prior art date
Abstract
Description
The present invention relates to a method for establishing a model, and more particularly to a method for establishing a brain-extraction and coil-shimming correction model for brain images.
Magnetic resonance imaging (MRI) has been widely used to acquire structural and functional information about the brain. By constructing brain atlases and templates, these information-rich multimodal data, acquired from different subjects at different times, can be fused into a common reference space for voxel-wise and regional analyses of the effects of aging, disease, and treatment. Accurate image fusion requires that the images be corrected for head motion, geometric distortion, and coil B1-field inhomogeneity. Because poor inhomogeneity correction and poor brain extraction (skull stripping) lead to misalignment or bias, trained personnel must perform manual correction and manually extract the brain region.
Although various methods for coil inhomogeneity correction already exist, they still require manual adjustment for different image contrasts (e.g., spin-echo versus gradient-echo), coil designs, and head positions. Likewise, brain extraction requires manual editing to obtain an accurate region of interest in the brain image.
Such manual adjustment and editing, however, consume considerable time and human resources.
For example, consider a batch of 400 unprocessed three-dimensional images (each of size 64×64×64) and the task of selecting the brain region of interest. If manually delineating one two-dimensional slice takes about one minute, the 64 slices of one image take about 64 minutes, so processing the whole data set takes about 426.67 hours.
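The arithmetic behind the 426.67-hour figure can be checked directly:

```python
# Sanity check of the manual-labeling cost estimate in the text:
# 400 three-dimensional images, 64 slices each, ~1 minute per slice.
images = 400
slices_per_image = 64          # a 64×64×64 volume has 64 slices along one axis
minutes_per_slice = 1

total_minutes = images * slices_per_image * minutes_per_slice
total_hours = total_minutes / 60

print(total_hours)  # 426.666..., i.e. the ~426.67 hours quoted above
```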
The object of the present invention is therefore to provide a method for establishing a brain-extraction and coil-shimming correction model for brain images that can automatically extract the region of interest and perform inhomogeneity correction.
Accordingly, the method for establishing the brain-extraction and coil-shimming correction model for brain images of the present invention is performed by a computer device. The computer device stores a plurality of training data, each of which includes a pre-processing three-dimensional brain image and an inhomogeneity-corrected post-processing three-dimensional brain image containing a region of interest. The method comprises a step (A), a step (B), a step (C), a step (D), a step (E), a step (F), a step (G), a step (H), and a step (I).
In step (A), the computer device builds a generator model for producing a brain-extracted and coil-shimming-corrected image. The generator model includes a generator input port, at least one generator encoder, a convolutional layer, at least one generator decoder corresponding respectively to the at least one encoder, and a generator output port.
In step (B), the computer device builds a discriminator model for predicting whether two input images are real. The discriminator model includes two discriminator input ports, at least one discriminator decoder, and a discriminator output port.
In step (C), for each training datum, the computer device inputs the pre-processing three-dimensional brain image of the training datum into the generator model to produce a first brain-extracted and corrected image.
In step (D), for each training datum, the computer device inputs the pre-processing and post-processing three-dimensional brain images of the training datum into the discriminator model to produce a first prediction result indicating whether this pair of images is real.
In step (E), the computer device adjusts the discriminator model according to the first prediction results corresponding to the training data.
In step (F), for each training datum, the computer device inputs the pre-processing three-dimensional brain image of the training datum and the corresponding first brain-extracted and corrected image into the discriminator model to produce a second prediction result indicating whether this pair of images is real.
In step (G), the computer device adjusts the discriminator model according to the second prediction results corresponding to the training data.
In step (H), for each training datum, the computer device inputs the pre-processing three-dimensional brain image of the training datum and the corresponding first brain-extracted and corrected image into the adjusted discriminator model to produce a third prediction result indicating whether this pair of images is real.
In step (I), the computer device adjusts the generator model according to the third prediction results corresponding to the training data, the post-processing three-dimensional brain images of the training data, and the first brain-extracted and corrected images corresponding to the training data, to obtain an adjusted generator model.
The effect of the present invention is that the adjusted generator model can, from any pre-processing three-dimensional brain image, output an inhomogeneity-corrected image containing the brain region of interest, greatly reducing the manpower and time costs of the professionals involved while maintaining accuracy.
Before the present invention is described in detail, it should be noted that in the following description similar elements are denoted by the same reference numerals.
Referring to FIG. 1, a computer device 1 for implementing an embodiment of the method for establishing a brain-extraction and coil-shimming correction model for brain images of the present invention is illustrated. The computer device 1 includes a storage unit 11 and a processing unit 12 electrically connected to the storage unit 11. In this embodiment, the computer device 1 may be implemented as, for example, a personal computer, a server, or a cloud host, but is not limited thereto.
The storage unit 11 stores a plurality of training data and a plurality of validation data. Each training or validation datum includes a pre-processing three-dimensional brain image and an inhomogeneity-corrected post-processing three-dimensional brain image containing a region of interest.
It is worth noting that the pre-processing and post-processing three-dimensional brain images of the training or validation data are magnetic resonance imaging (MRI) images, and that the training or validation data may all be rat brain images, all mouse brain images, or a mixture of the two, but are not limited thereto.
Referring to FIGS. 1 and 2, the steps of this embodiment of the method for establishing a brain-extraction and coil-shimming correction model for brain images of the present invention are described below.
In step 201, the processing unit 12 builds a generator model for producing a brain-extracted and coil-shimming-corrected image. The generator model includes a generator input port, at least one generator encoder, a convolutional layer, at least one generator decoder corresponding respectively to the at least one encoder, and a generator output port.
In step 202, the processing unit 12 builds a discriminator model for predicting whether two input images are real. The discriminator model includes two discriminator input ports and at least one discriminator decoder.
It is worth noting that in this embodiment Keras is used to build the generator model and the discriminator model as a generative adversarial network (GAN) with the pix2pix architecture. The generator model has a u-net structure comprising 6 generator encoders and 6 generator decoders, with a three-dimensional convolution kernel of size 4×4×4 and generator input and output ports of size 64×64×64. The discriminator model has a patchGAN structure comprising 6 discriminator decoders, with a three-dimensional convolution kernel of size 4×4×4, discriminator input ports of size 64×64×64, and a discriminator output port of size 1×1×1; however, the invention is not limited thereto. The prediction result of the discriminator model lies between zero and one and represents the probability that both input images are 'real'. A prediction result of one means a judgment of 'real', i.e., both input images come from the training data or the validation data; conversely, a prediction result of zero means a judgment of 'fake', i.e., one of the input images does not come from the training data or the validation data.
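The choice of exactly 6 encoders for a 64×64×64 input is consistent with stride-2 downsampling at each encoder, which is typical for pix2pix-style u-nets; the sketch below walks through the resulting feature-map sizes under that assumption (the stride is not stated in the text):

```python
# Hypothetical shape walk: assuming each of the 6 generator encoders
# downsamples by stride 2 (common in pix2pix u-nets), a 64×64×64 input
# shrinks to 1×1×1 at the bottleneck, and the 6 corresponding decoders
# upsample it back to 64×64×64.
size = 64
encoder_sizes = [size]
for _ in range(6):          # 6 generator encoders
    size //= 2              # stride-2 downsampling halves each spatial axis
    encoder_sizes.append(size)

print(encoder_sizes)  # [64, 32, 16, 8, 4, 2, 1]
```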
It should be particularly noted that existing adversarial networks are model architectures based on two-dimensional images. Although a three-dimensional image can be split into several two-dimensional images and processed separately, this treats each two-dimensional slice as an independent image, so two adjacent slices of the original three-dimensional volume may appear visually discontinuous. In this embodiment the convolution kernel is three-dimensional, unlike splitting the volume into two-dimensional slices, which alleviates the discontinuity between adjacent slices.
In step 203, for each training datum, the processing unit 12 inputs the pre-processing three-dimensional brain image of the training datum into the generator model to produce a first brain-extracted and corrected image.
In step 204, for each training datum, the processing unit 12 inputs the pre-processing and post-processing three-dimensional brain images of the training datum into the discriminator model to produce a first prediction result indicating whether this pair of images is real.
In step 205, the processing unit 12 adjusts the discriminator model according to the first prediction results corresponding to the training data.
It should be particularly noted that, in this embodiment, the images input to the discriminator model in step 204 all come from the training data, and step 205 aims to strengthen the discriminator model's ability to recognize 'real'; the first prediction results are therefore expected to be one. Accordingly, the processing unit 12 uses a first loss function to compute the gap between the first prediction results and one, and adjusts the discriminator model according to that gap. The first loss function is, for example, binary cross-entropy, but is not limited thereto.
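The gap computed in step 205 can be sketched with the standard definition of binary cross-entropy against the constant target label one; this is a minimal numpy illustration, and the function name is illustrative rather than taken from the patent:

```python
import numpy as np

def binary_crossentropy(predictions, target):
    """Mean binary cross-entropy between discriminator predictions in (0, 1)
    and a constant target label (1 for 'real', 0 for 'fake')."""
    p = np.clip(np.asarray(predictions, dtype=float), 1e-7, 1 - 1e-7)
    return float(np.mean(-(target * np.log(p) + (1 - target) * np.log(1 - p))))

# Training on real pairs: the closer the first prediction results are to one,
# the smaller the loss used to adjust the discriminator.
print(binary_crossentropy([0.9, 0.8], 1) < binary_crossentropy([0.4, 0.3], 1))  # True
```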
In step 206, for each training datum, the processing unit 12 inputs the pre-processing three-dimensional brain image of the training datum and the corresponding first brain-extracted and corrected image into the discriminator model to produce a second prediction result indicating whether this pair of images is real.
In step 207, the processing unit 12 adjusts the discriminator model according to the second prediction results corresponding to the training data.
It should be particularly noted that, in this embodiment, the images input to the discriminator model in step 206 do not all come from the training data or the validation data, and step 207 aims to strengthen the discriminator model's ability to recognize 'fake'; the second prediction results are therefore expected to be zero. Accordingly, the processing unit 12 uses the first loss function to compute the gap between the second prediction results and zero, and adjusts the discriminator model according to that gap.
In step 208, for each training datum, the processing unit 12 inputs the pre-processing three-dimensional brain image of the training datum and the corresponding first brain-extracted and corrected image into the adjusted discriminator model to produce a third prediction result indicating whether this pair of images is real.
It is worth noting that, in this embodiment, in steps 204, 206, and 208 the discriminator model first concatenates the two input images and then produces the first, second, and third prediction results through the at least one discriminator decoder, but the invention is not limited thereto.
In step 209, the processing unit 12 adjusts the generator model according to the third prediction results corresponding to the training data, the post-processing three-dimensional brain images of the training data, and the first brain-extracted and corrected images corresponding to the training data, to obtain an adjusted generator model.
It should be particularly noted that, in this embodiment, step 209 aims to make the first brain-extracted and corrected images produced by the generator model approach the images of the training data so that the discriminator model predicts incorrectly; the third prediction results are therefore expected to be one. Accordingly, the processing unit 12 uses the first loss function to compute the gap between the third prediction results and one, and uses a second loss function to compute the gap between the post-processing three-dimensional brain images of the training data and the corresponding first brain-extracted and corrected images; the generator model is then adjusted according to these two gaps. The second loss function is, for example, the mean absolute error (MAE), but is not limited thereto.
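The two gaps that drive the generator update in step 209 can be sketched as a single combined objective. This is a minimal numpy sketch under stated assumptions: the weighting factor `lam` between the two terms is a hypothetical hyperparameter not specified in the patent, and the function name is illustrative:

```python
import numpy as np

def generator_loss(third_predictions, processed_image, first_generated_image,
                   lam=1.0):
    """Sketch of the step-209 generator objective: binary cross-entropy between
    the third prediction results and the target label one, plus a mean-absolute-
    error term between the post-processing image and the first brain-extracted
    and corrected image. `lam` (the MAE weight) is an assumed hyperparameter."""
    p = np.clip(np.asarray(third_predictions, dtype=float), 1e-7, 1 - 1e-7)
    bce = float(np.mean(-np.log(p)))  # gap between predictions and one
    mae = float(np.mean(np.abs(np.asarray(processed_image, dtype=float)
                               - np.asarray(first_generated_image, dtype=float))))
    return bce + lam * mae
```

When the generated image matches the post-processing image exactly, the MAE term vanishes and only the adversarial term remains.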
It should further be noted that the weights of the discriminator model are frozen while the generator model is being adjusted, and conversely the weights of the generator model are frozen while the discriminator model is being adjusted.
In step 210, the processing unit 12 determines whether K+1 adjusted generator models have been obtained, where K ≥ 1. When it is determined that K+1 adjusted generator models have been obtained, the flow proceeds to step 211; otherwise, steps 203–209 are repeated.
In step 211, for each validation datum and each adjusted generator model, the processing unit 12 inputs the pre-processing three-dimensional brain image of the validation datum into the adjusted generator model to produce a second brain-extracted and corrected image.
In step 212, for each validation datum and each adjusted generator model, the processing unit 12 computes a similarity index from the post-processing three-dimensional brain image of the validation datum and the corresponding second brain-extracted and corrected image.
It is worth noting that the similarity index is, for example, the cosine angle distance (CAD), the Euclidean distance (L2 norm), the mean square error, the peak signal-to-noise ratio, or the mean structural similarity (MSSIM). Since the Euclidean distance, the mean square error, and the peak signal-to-noise ratio are monotonically related to one another, only the Euclidean distance is exemplified below.
If the similarity index is the cosine angle distance, the similarity index CAD is expressed as

$$\mathrm{CAD}(A,B)=\frac{A\cdot B}{\lVert A\rVert\,\lVert B\rVert}=\frac{\sum_{i=1}^{N}a_i b_i}{\sqrt{\sum_{i=1}^{N}a_i^{2}}\,\sqrt{\sum_{i=1}^{N}b_i^{2}}},$$

where $A$ is the post-processing three-dimensional brain image of the validation datum, $B$ is the corresponding second brain-extracted and corrected image, $a_1, a_2, \ldots, a_N$ are the $N$ voxels of $A$, and $b_1, b_2, \ldots, b_N$ are the $N$ voxels of $B$. This similarity index lies between -1 and 1.
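A minimal numpy sketch of the cosine angle distance above (the helper name is illustrative, not from the patent):

```python
import numpy as np

def cosine_angle_distance(a, b):
    """Cosine angle distance between two flattened 3-D images, as in the
    formula above; the result lies in [-1, 1], with 1 for images pointing
    in the same direction in voxel space."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Note that the CAD is scale-invariant: multiplying one image by a positive constant leaves the index unchanged.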
If the similarity index is the mean structural similarity, the similarity index MSSIM is expressed as

$$\mathrm{MSSIM}(A,B)=\frac{1}{M}\sum_{j=1}^{M}\mathrm{SSIM}(a_j,b_j),\qquad
\mathrm{SSIM}(a_j,b_j)=l(a_j,b_j)^{\alpha}\,c(a_j,b_j)^{\beta}\,s(a_j,b_j)^{\gamma},$$

where $A$ is the post-processing three-dimensional brain image of the validation datum, $B$ is the corresponding second brain-extracted and corrected image, $a_j$ is a local window of $A$, $b_j$ is a local window of $B$, and the SSIM comprises a luminance term $l$, a contrast term $c$, and a structure term $s$:

$$l(a_j,b_j)=\frac{2\mu_{a}\mu_{b}+C_1}{\mu_{a}^{2}+\mu_{b}^{2}+C_1},\quad
c(a_j,b_j)=\frac{2\sigma_{a}\sigma_{b}+C_2}{\sigma_{a}^{2}+\sigma_{b}^{2}+C_2},\quad
s(a_j,b_j)=\frac{\sigma_{ab}+C_3}{\sigma_{a}\sigma_{b}+C_3}.$$

The local means, standard deviations, and cross-covariance are computed with a circularly symmetric Gaussian weighting function $w_i$; $\alpha$, $\beta$, $\gamma$ are adjustable weights; $C_1$, $C_2$, $C_3$ are small stabilizing scalars on the order of $10^{-3}$; and $M$ is the total number of windows in the image. This similarity index lies between -1 and 1.
It is worth noting that, in this embodiment, $\alpha=\beta=\gamma=1$, and $C_1$, $C_2$, and $C_3$ are $10^{-4}$, $9\times10^{-4}$, and $4.5\times10^{-4}$, respectively, but the invention is not limited thereto.
If the similarity index is the Euclidean distance, the similarity index L2 is expressed as

$$\mathrm{L2}(A,B)=\lVert A-B\rVert_{2}=\sqrt{\sum_{i=1}^{N}(a_i-b_i)^{2}},$$

where $A$ is the post-processing three-dimensional brain image of the validation datum, $B$ is the corresponding second brain-extracted and corrected image, $a_1, a_2, \ldots, a_N$ are the $N$ voxels of $A$, and $b_1, b_2, \ldots, b_N$ are the $N$ voxels of $B$. The minimum value of this similarity index is 0.
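The Euclidean distance above admits an equally short numpy sketch (again, the helper name is illustrative):

```python
import numpy as np

def euclidean_distance(a, b):
    """Voxel-wise Euclidean distance (L2 norm of the difference) between two
    images; the minimum value 0 is attained when the images are identical."""
    diff = np.asarray(a, dtype=float).ravel() - np.asarray(b, dtype=float).ravel()
    return float(np.sqrt(np.sum(diff ** 2)))
```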
In step 213, the processing unit 12 selects an optimal adjusted generator model from the K+1 adjusted generator models according to the similarity indices corresponding to the K+1 adjusted generator models.
It is worth noting that if the similarity index is the cosine angle distance or the mean structural similarity, the optimal adjusted generator model is the one whose average similarity index is the highest; if the similarity index is the Euclidean distance, the optimal adjusted generator model is the one whose average similarity index is the lowest.
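The selection rule in step 213 amounts to an arg-max or arg-min over the per-model average scores; the helper below is an illustrative sketch, not code from the patent:

```python
def select_best_model(mean_scores, higher_is_better=True):
    """Pick the index of the optimal adjusted generator model from the average
    similarity index of each of the K+1 candidates: the maximum for CAD or
    MSSIM, the minimum for the Euclidean distance."""
    best = max if higher_is_better else min
    return best(range(len(mean_scores)), key=lambda i: mean_scores[i])

# e.g. average CAD per candidate model (higher is better):
print(select_best_model([0.91, 0.97, 0.88]))                         # 1
# e.g. average Euclidean distance per candidate model (lower is better):
print(select_best_model([12.4, 9.1, 15.0], higher_is_better=False))  # 1
```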
It should further be noted that in other embodiments only steps 201–209 may be performed, with the adjusted generator model used directly as the brain-extraction and coil-shimming correction model for brain images.
In summary, in the method for establishing a brain-extraction and coil-shimming correction model for brain images of the present invention, the processing unit 12 builds the generator model and the discriminator model. For each training datum, the discriminator model produces the first and second prediction results from the pre-processing three-dimensional brain image, the post-processing three-dimensional brain image, and the first brain-extracted and corrected image produced by the generator model, and the discriminator model is adjusted according to the first and second prediction results. The adjusted discriminator model then produces the third prediction result from the pre-processing three-dimensional brain image and the first brain-extracted and corrected image, and the generator model is adjusted according to the third prediction results to obtain the adjusted generator model. Steps 203–209 are repeated to obtain a plurality of adjusted generator models, and the optimal adjusted generator model is finally selected according to the similarity indices corresponding to the validation data. The optimal adjusted generator model can, from any pre-processing three-dimensional brain image, output an inhomogeneity-corrected image containing the brain region of interest, greatly reducing the manpower and time costs of the professionals involved while maintaining accuracy, so the object of the present invention is indeed achieved.
The foregoing is merely illustrative of embodiments of the present invention and shall not limit the scope of implementation of the present invention; all simple equivalent changes and modifications made according to the claims and the specification of the present invention remain within the scope covered by the patent of the present invention.
1: computer device; 11: storage unit; 12: processing unit; 201–213: steps
Other features and effects of the present invention will be clearly presented in the embodiments described with reference to the drawings, in which: FIG. 1 is a block diagram illustrating a computer device of an embodiment of the method for establishing a brain-extraction and coil-shimming correction model for brain images of the present invention; and FIG. 2 is a flowchart illustrating that embodiment of the method.
201–213: steps
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW110129132A TWI771141B (en) | 2021-08-06 | 2021-08-06 | Brain Extraction and Coil Shim Correction Model Establishment in Brain Imaging |
Publications (2)
Publication Number | Publication Date |
---|---|
TWI771141B TWI771141B (en) | 2022-07-11 |
TW202307863A true TW202307863A (en) | 2023-02-16 |
Family
ID=83439462