WO2013091078A1 - Method for phase unwrapping - Google Patents

Method for phase unwrapping

Info

Publication number
WO2013091078A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, unwrapped, phase, cross, stack
Prior art date
Application number
PCT/CA2012/001181
Other languages
French (fr)
Inventor
Junmin LIU
Maria Drangova
Original Assignee
Liu Junmin
Maria Drangova
Priority date
Filing date
Publication date
Application filed by Liu Junmin and Maria Drangova
Publication of WO2013091078A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01R: MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R33/00: Arrangements or instruments for measuring magnetic variables
    • G01R33/20: Arrangements or instruments for measuring magnetic variables involving magnetic resonance
    • G01R33/44: Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
    • G01R33/48: NMR imaging systems
    • G01R33/54: Signal processing systems, e.g. using pulse sequences; generation or control of pulse sequences; operator console
    • G01R33/56: Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
    • G01R33/565: Correction of image distortions, e.g. due to magnetic field inhomogeneities
    • G01R33/56545: Correction of image distortions caused by finite or discrete sampling, e.g. Gibbs ringing, truncation artefacts, phase aliasing artefacts
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5207: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of raw data to produce diagnostic data, e.g. for generating an image

Definitions

  • the present invention relates to imaging and in particular to image phase unwrapping.
  • Imaging techniques such as ultrasound and magnetic resonance (MR) imaging use phase information within received signals in frequency space to reconstruct complex images.
  • Such reconstructed complex images have been acquired for many purposes including, but not limited to, rigid-body motion determination [1-3], chemical shift mapping [4], blood flow measurement in MR angiography [5-7], MR venography [8], field-map calculation [9-10], phase-contrast imaging [11], susceptibility-weighted imaging [12], temperature mapping [13], perfusion measurement [14], and tissue segmentation [15].
  • Phase wrapping occurs when phase values fall outside the range (−π, π), thereby introducing 2π discontinuities in the measured phase data, rendering the phase data unusable until the discontinuities are removed by phase unwrapping [16].
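The wrapping effect just described can be reproduced numerically in a few lines. The sketch below (plain NumPy, not part of the patent) builds a smooth phase ramp, wraps it as a measurement would, and counts the artificial 2π discontinuities that appear:

```python
import numpy as np

# A linear "true" phase ramp that leaves the (-pi, pi) range.
true_phase = np.linspace(0.0, 10.0, 50)

# A measurement only retains the wrapped phase, mapped back into (-pi, pi].
wrapped = np.angle(np.exp(1j * true_phase))

# Wrapping introduces artificial ~2*pi discontinuities wherever the true
# phase crosses an odd multiple of pi (here at pi and 3*pi).
jumps = np.abs(np.diff(wrapped)) > np.pi
print(int(jumps.sum()))  # 2
```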
  • Phase unwrapping is often performed by computationally complex, off-line systems under user guidance.
  • Itoh discloses a one-dimensional (1D) phase unwrapping algorithm [17], where the true phase values of a complex sequence are retrieved by unwrapping neighboring differences and integrating the unwrapped phase differences.
  • The Itoh algorithm is well known and is typically referred to as the conventional 1D phase unwrapping algorithm.
  • the Itoh algorithm has been extended to two-dimensional (2D) applications by Ghiglia et al. [18].
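The Itoh idea that the later steps build on amounts to wrapping the neighboring differences back into (−π, π] and cumulatively summing them. A minimal sketch follows; `itoh_unwrap_1d` is an illustrative name, and NumPy's built-in `np.unwrap` implements the same computation:

```python
import numpy as np

def itoh_unwrap_1d(wrapped):
    """Conventional 1D Itoh phase unwrapping: wrap the neighboring
    differences into (-pi, pi], then integrate them back up."""
    d = np.diff(wrapped)
    d_wrapped = np.angle(np.exp(1j * d))  # differences mapped into (-pi, pi]
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d_wrapped)))

# A smooth ramp is wrapped and then recovered exactly, because its true
# neighboring differences are all smaller than pi in magnitude.
true_phase = np.linspace(0.0, 10.0, 50)
wrapped = np.angle(np.exp(1j * true_phase))
recovered = itoh_unwrap_1d(wrapped)
print(bool(np.allclose(recovered, true_phase)))  # True
```

This succeeds only while the true phase gradient stays below π per sample; noisy or undersampled data break that assumption, which is what motivates the method of the patent.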
  • Improvements in imaging techniques are generally desired. It is therefore an object, at least, to provide a novel phase unwrapping method for two-dimensional (2D) and three-dimensional (3D) images obtained from MR or other types of phase imaging.
  • a method for unwrapping a phase image comprising unwrapping the phase image along X and Y directions, identifying a horizontal processing strip in the unwrapped X image and a vertical processing strip in the unwrapped Y image based on defined criterion, cross-referring the horizontal processing strip with the unwrapped Y image and the vertical processing strip with the unwrapped X image, dividing each column in the cross-referred unwrapped Y image into a plurality of line-segments and dividing each row in the cross-referred unwrapped X image into a plurality of line-segments, and cross-referring the column line-segments with the cross-referred unwrapped X image and cross-referring the row line-segments with the cross-referred unwrapped Y image, thereby to yield unwrapped images X.cr.Y and Y.cr.X.
  • Unwrapping the phase image along the X and Y directions is performed using a one-dimensional (1D) Itoh algorithm.
  • Identifying the horizontal processing strip comprises calculating a quality ratio for each row in the unwrapped X image, grouping neighbor rows if the quality ratio of each of the rows is above a quality ratio threshold, thereby defining a strip and selecting the strip comprising the greatest number of rows as the horizontal processing strip.
  • Identifying the vertical processing strip comprises calculating a quality ratio for each column in the unwrapped Y image, grouping neighbor columns if the quality ratio of each of the columns is above a quality ratio threshold, thereby defining a strip and selecting the strip comprising the greatest number of columns as the vertical processing strip.
  • the method further comprises reconnecting the unwrapped X and Y images.
  • Reconnecting the unwrapped X image comprises dividing a row in the unwrapped X image into n line-segments, calculating a phase difference between line-segment i and line-segment i + 1, comparing the phase difference to a threshold value and, if the phase difference is greater than the threshold value, shifting line-segments i + 1 to n by a defined value, and repeating the steps of dividing, calculating and comparing until all rows in the unwrapped X image have been reconnected.
  • Reconnecting the unwrapped Y image comprises dividing a column in the unwrapped Y image into n line-segments, calculating a phase difference between line-segment i and line-segment i + 1, comparing the phase difference to a threshold value and, if the phase difference is greater than the threshold value, shifting line-segments i + 1 to n by a defined value, and repeating the steps of dividing, calculating and comparing until all columns in the unwrapped Y image have been reconnected.
  • A method for unwrapping a three-dimensional (3D) phase image comprising a plurality of phase images, the method comprising unwrapping each phase image according to the aforementioned method, and stacking the unwrapped images X.cr.Y and stacking the unwrapped phase images Y.cr.X, thereby to yield an X.cr.Y stack and a Y.cr.X stack, respectively.
  • A non-transitory, computer-readable medium embodying a computer program having computer program code for execution by a computer to unwrap a phase image, the program code comprising program code for unwrapping the phase image along X and Y directions, program code for identifying a horizontal processing strip in the unwrapped X image and a vertical processing strip in the unwrapped Y image based on defined criterion, program code for cross-referring the horizontal processing strip with the unwrapped Y image and the vertical processing strip with the unwrapped X image, program code for dividing each column in the cross-referred unwrapped Y image into a plurality of line-segments and for dividing each row in the cross-referred unwrapped X image into a plurality of line-segments, and program code for cross-referring the column line-segments with the cross-referred unwrapped X image and cross-referring the row line-segments with the cross-referred unwrapped Y image, thereby to yield unwrapped images X.cr.Y and Y.cr.X.
  • A non-transitory, computer-readable medium embodying a computer program having computer program code for execution by a computer to unwrap a three-dimensional phase image comprising a plurality of phase images, the program code comprising program code for unwrapping each phase image according to a method of unwrapping the phase image along X and Y directions, identifying a horizontal processing strip in the unwrapped X image and a vertical processing strip in the unwrapped Y image based on defined criterion, cross-referring the horizontal processing strip with the unwrapped Y image and the vertical processing strip with the unwrapped X image, dividing each column in the cross-referred unwrapped Y image into a plurality of line-segments and dividing each row in the cross-referred unwrapped X image into a plurality of line-segments, and cross-referring the column line-segments with the cross-referred unwrapped X image and cross-referring the row line-segments with the cross-referred unwrapped Y image, and program code for stacking the resultant unwrapped images, thereby to yield an X.cr.Y stack and a Y.cr.X stack, respectively.
  • An apparatus for unwrapping a phase image comprising memory for storing the phase image, and processing structure communicating with the memory and being configured to unwrap the phase image along X and Y directions, identify a horizontal processing strip in the unwrapped X image and a vertical processing strip in the unwrapped Y image based on defined criterion, cross-refer the horizontal processing strip with the unwrapped Y image and the vertical processing strip with the unwrapped X image, divide each column in the cross-referred unwrapped Y image into a plurality of line-segments and divide each row in the cross-referred unwrapped X image into a plurality of line-segments, and cross-refer the column line-segments with the cross-referred unwrapped X image and cross-refer the row line-segments with the cross-referred unwrapped Y image, thereby to yield unwrapped images X.cr.Y and Y.cr.X.
  • an apparatus for unwrapping a three-dimensional phase image comprising a plurality of phase images
  • The apparatus comprising memory for storing the three-dimensional phase image, and processing structure communicating with the memory and being configured to, for each phase image, unwrap the phase image along X and Y directions, identify a horizontal processing strip in the unwrapped X image and a vertical processing strip in the unwrapped Y image based on defined criterion, cross-refer the horizontal processing strip with the unwrapped Y image and the vertical processing strip with the unwrapped X image, divide each column in the cross-referred unwrapped Y image into a plurality of line-segments and divide each row in the cross-referred unwrapped X image into a plurality of line-segments, and cross-refer the column line-segments with the cross-referred unwrapped X image and cross-refer the row line-segments with the cross-referred unwrapped Y image, thereby to yield unwrapped images X.cr.Y and Y.cr.X for each phase image.
  • Figure 1 is a flowchart showing steps of a method for phase unwrapping a complex two-dimensional (2D) image;
  • Figure 2 is a flowchart showing steps carried out during the loading of the complex 2D phase image and the setting of the initialization values step of Figure 1;
  • Figures 3A and 3B are flowcharts showing steps carried out during the unwrapping complex 2D image along the X and Y directions step of Figure 1;
  • Figures 4A to 4C are flowcharts showing steps carried out during the cross-referring strips step of Figure 1;
  • Figures 5A and 5B are flowcharts showing steps carried out during the cross-referring line segments step of Figure 1;
  • Figure 6 is a flowchart showing steps of a method for phase unwrapping a complex three-dimensional (3D) image
  • Figure 7A is a flowchart showing steps carried out during the stacking step of Figure 6;
  • Figure 7B is a flowchart showing steps carried out during the cross- referring 2D images in the X-Z and Y-Z planes step of Figure 6;
  • Figure 7C is a flowchart showing steps carried out during the cross- referring 2D images in the X-Y plane step of Figure 6;
  • Figures 8A to 8C show the 15th slice of a sagittal volume acquired with TE 6 ms, pre-processed according to the method of Figure 1;
  • Figures 9A to 9F show unwrapped 2D phase images following each step of the method of Figure 1 ;
  • Figure 10 shows unwrapped 2D phase images obtained with different threshold levels according to the method of Figure 1 ;
  • Figures 12A and 12B show a comparison of unwrapped sagittal phase volumes processed using the hybrid 3D PUROR, PRELUDE-2D and PRELUDE-HY algorithms;
  • Figures 13A and 13B show a comparison of unwrapped coronal phase volumes processed using the hybrid 3D PUROR, PRELUDE-2D and PRELUDE-HY algorithms;
  • Figures 14A and 14B show a comparison of unwrapped axial phase volumes processed using the hybrid 3D PUROR, PRELUDE-2D and PRELUDE-HY algorithms; and
  • Figures 15A, 15B and 15C show a comparison of unwrapped sagittal phase images using the 2D PUROR and 2D PRELUDE algorithms.
  • Turning now to Figure 1, a method for phase unwrapping a two-dimensional (2D) phase image of a 2D complex image is shown and is generally identified by reference numeral 100.
  • The complex 2D image, which comprises a 2D magnitude image and a 2D phase image, is loaded into a processor or other suitable computing/processing structure and initialization values are set (step 200).
  • the 2D phase image is then unwrapped along the X and Y directions to yield X and Y unwrapped images (step 300).
  • a processing strip satisfying defined criterion is identified in each of the X and Y unwrapped images, and each processing strip is cross- referred to the other of the X and Y unwrapped images (step 400).
  • Line-segments are identified in each of the cross-referred unwrapped X and Y images, and the line- segments are cross-referred to the other of the X and Y cross-referred unwrapped X and Y images (step 500).
  • Once the complex 2D image has been loaded (step 205), a counter is set to a value of zero (step 210).
  • The background noise level σ is then estimated (step 215).
  • The complex 2D image is filtered using an averaging filter and the background noise level σ is estimated from the 2D magnitude image associated with the filtered complex 2D image by using maximum likelihood estimation [33].
  • the filtering is performed to avoid having to use a high thresholding level [21, 34].
  • A mask TH_mask is then generated (step 220) that is used to filter complex images, wherein only pixels with intensities above TH_mask are processed.
  • TH_mask is set to the background noise level σ estimated from the 2D magnitude image.
  • A support mask TH_support is also generated (step 225) that is used to filter complex images, wherein only pixels with intensities above TH_support are processed.
  • TH_support is set to a value based on the background noise level σ estimated from the 2D magnitude image (step 215).
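The two-mask scheme of steps 215 to 225 can be sketched as follows. This is an illustrative stand-in only: the percentile-based noise estimate and the `support_factor` parameter are assumptions of the sketch, not the maximum-likelihood estimator described in the patent:

```python
import numpy as np

def make_masks(magnitude, support_factor=2.0):
    """Estimate the background noise level from the magnitude image and
    derive a loose mask (TH_mask) and a stricter support mask (TH_support).
    The percentile estimate and support_factor are illustrative assumptions."""
    sigma = np.percentile(magnitude, 10)   # crude background noise level
    th_mask = sigma                        # TH_mask: loose intensity mask
    th_support = support_factor * sigma    # TH_support: stricter mask
    return magnitude > th_mask, magnitude > th_support

rng = np.random.default_rng(0)
mag = rng.rayleigh(1.0, size=(64, 64))   # Rayleigh-distributed noise floor
mag[16:48, 16:48] += 10.0                # bright "object" region
mask, support = make_masks(mag)
# The support mask is at least as strict as the loose mask, and the
# object region survives both.
print(bool(support[32, 32] and mask.sum() >= support.sum()))  # True
```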
  • Turning to step 300, the steps carried out during unwrapping of the complex images along the X and Y directions are shown.
  • The complex 2D phase image is unwrapped in the X and Y directions, thereby creating images φX.itoh and φY.itoh, respectively (step 305).
  • Images φX.itoh and φY.itoh are then filtered using mask TH_mask, wherein pixels having an intensity below TH_mask are excluded from further processing (step 308), and the images φX.itoh and φY.itoh are processed for the purpose of reconnection, as will be described.
  • A line-segment is defined as a continuous set of at least three (3) pixels for which the phase gradient is less than a gradient threshold φth,LS, which in this embodiment is set to a value of 2π (step 310).
  • the counter i is set to a value of one (1).
  • The phase difference Δφ_LS between the end point LS_end of the i-th line-segment and the start point LS_start of the (i+1)-th line-segment is then calculated as:

    Δφ_LS = φ_{i+1,start} − φ_{i,end}    [eq. 1]

  where φ_{i,end} is the mean value of the last three (3) pixels of the i-th line-segment and φ_{i+1,start} is the mean value of the first three (3) pixels of the (i+1)-th line-segment (step 315).
  • the method skips to step 330 to process the next line-segment.
  • The phase difference Δφ_LS is then compared to a phase difference threshold, which in this embodiment is set to a value of π/2 (step 320). If the phase difference Δφ_LS is less than the phase difference threshold, the method continues to step 330. If the phase difference is greater than the phase difference threshold, a phase discontinuity is deemed to exist due to the presence of poles between line-segments. As such, all remaining phase values, that is, phase values associated with the (i+1)-th line-segment to the n-th line-segment, are shifted according to the following equation:
  • φ → φ − 2π × round(Δφ_LS / 2π)    [eq. 3]

  where round(·) rounds the value of Δφ_LS / 2π to the nearest integer (step 325). The method then continues to step 330.
  • The counter i is compared to the number of line-segments n in the row to check if all line-segments have been processed (step 330). If there are remaining line-segments to be processed, counter i is incremented by one (1) (step 335) and the method returns to step 315 to process the next line-segment.
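The reconnection loop of steps 310 to 335 can be sketched for a single row as follows. The segment splitting and the handling of very short segments are simplified relative to the patent (which requires at least three pixels per segment):

```python
import numpy as np

def reconnect_row(row, grad_th=2 * np.pi, diff_th=np.pi / 2):
    """Split a row into line-segments wherever the phase gradient reaches
    grad_th, then remove 2*pi jumps between consecutive segments by shifting
    all remaining values (eq. 3). Segment handling is simplified here."""
    row = row.astype(float).copy()
    breaks = np.where(np.abs(np.diff(row)) >= grad_th)[0] + 1
    bounds = [0, *breaks, len(row)]
    for i in range(len(bounds) - 2):
        a, b, c = bounds[i], bounds[i + 1], bounds[i + 2]
        end_mean = row[max(a, b - 3):b].mean()    # mean of last <=3 pixels of segment i
        start_mean = row[b:min(c, b + 3)].mean()  # mean of first <=3 pixels of segment i+1
        dphi = start_mean - end_mean
        if np.abs(dphi) > diff_th:                # shift segments i+1 .. n (eq. 3)
            row[b:] -= 2 * np.pi * np.round(dphi / (2 * np.pi))
    return row

# A row whose second half carries an artificial 2*pi jump is flattened.
row = np.concatenate([np.zeros(10), np.full(10, 2 * np.pi)])
print(bool(np.allclose(reconnect_row(row), 0.0)))  # True
```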
  • At step 340, a check is performed to determine if all rows in the image φX.itoh have been reconnected. If there are remaining rows to be reconnected, the next row is selected for processing (step 345) and the method returns to step 310.
  • Once all rows have been reconnected, the reconnected image φX.itoh.reconnect is output (step 350).
  • The method then returns to step 310, wherein image φY.itoh is processed in a similar manner. It will however be appreciated that when the above-described reconnecting method is applied to image φY.itoh, the columns of the image are reconnected, as opposed to the rows. The resulting reconnected image φY.itoh.reconnect is similarly output (step 350). Once the reconnected images have been output, the method continues to step 355.
  • Images φX.itoh.reconnect and φY.itoh.reconnect are loaded into the processor (step 355) and counter i is set to a value of one (1) (step 360).
  • the mean phase value of row i in the reconnected image is then calculated by averaging the phase value of each pixel in the row (step 365).
  • The mean phase value is added to a dataset containing the mean phase value for each row in the reconnected image φX.itoh.reconnect.
  • In this embodiment, only pixels having an intensity greater than support mask TH_support (described above) are included in the calculation of the mean phase value for each row. If none of the pixels in a row have an intensity greater than TH_support, pixels in the row having an intensity greater than TH_mask are used to calculate the mean phase value of that row. If none of the pixels in the row have an intensity greater than TH_mask, the mean phase value of a neighboring row (the neighboring row nearest the center of the image) is used.
  • The counter i is compared to the number of rows m in the reconnected image φX.itoh.reconnect to check if all rows have been processed (step 370). If there are remaining rows to be processed, counter i is incremented by one (1) (step 375) and the method returns to step 365 to calculate the mean phase value of the next row.
  • The dataset of mean phase values is unwrapped using the one-dimensional (1D) Itoh algorithm (step 380).
  • The set of unwrapped mean phase values is shifted by a multiple of 2π to ensure the mean phase value of the center row is in the range of (−π, π).
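Steps 355 to 390 can be sketched as follows. The intensity-mask fallbacks described above are omitted, so plain row means stand in for the masked means; `align_rows_by_mean` is an illustrative name:

```python
import numpy as np

def align_rows_by_mean(img):
    """Compute the mean phase of each row, unwrap that 1D dataset of means
    (Itoh, via np.unwrap), re-centre it so the middle row lies in (-pi, pi),
    and shift every row by the resulting 2*pi multiple."""
    means = img.mean(axis=1)
    unwrapped_means = np.unwrap(means)                 # 1D Itoh unwrapping
    centre = unwrapped_means[len(means) // 2]
    unwrapped_means -= 2 * np.pi * np.round(centre / (2 * np.pi))
    shift = unwrapped_means - means                    # a 2*pi multiple per row
    return img + shift[:, None]

# Rows whose means jump by 2*pi are pulled back into a consistent range.
img = np.vstack([np.zeros((3, 5)), np.full((3, 5), 2 * np.pi)])
aligned = align_rows_by_mean(img)
print(bool(np.allclose(aligned, 0.0)))  # True
```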
  • The resultant image φX.itoh.reconnect.mean is output (step 390), and image φY.itoh.reconnect is processed in a similar manner. It will however be appreciated that when the above-described method is applied to the reconnected image φY.itoh.reconnect, the columns are processed, as opposed to the rows. The resultant image φY.itoh.reconnect.mean is similarly output (step 395).
  • The quality ratios for all rows in the image φX.itoh.reconnect.mean are compared to a threshold quality ratio Qthreshold, which in this embodiment is set to a value of 0.90. If adjacent rows have quality ratios greater than the threshold quality ratio Qthreshold, they are grouped together, thereby creating a group of rows, hereinafter referred to as a horizontal strip.
  • a horizontal strip likely to be free of wrapping error is identified as the horizontal processing strip according to defined criterion.
  • the horizontal processing strip is the horizontal strip that contains the greatest number of rows.
  • The horizontal processing strip is identified by comparing the size of each horizontal strip identified in the image φX.itoh.reconnect.mean, and the largest horizontal strip (the strip comprising the greatest number of rows) is identified as the horizontal processing strip (step 420).
  • The quality ratios for all columns in the image φY.itoh.reconnect.mean are compared to the threshold quality ratio Qthreshold. If adjacent columns have quality ratios greater than the threshold quality ratio Qthreshold, they are grouped together, thereby creating a group of columns, hereinafter referred to as a vertical strip.
  • a vertical strip likely to be free of wrapping error is identified as the vertical processing strip according to defined criterion.
  • the vertical processing strip is the vertical strip that contains the greatest number of columns.
  • The vertical processing strip is identified by comparing the size of each vertical strip identified in the image φY.itoh.reconnect.mean, and the largest strip (the strip comprising the greatest number of columns) is identified as the vertical processing strip (step 420).
  • the size of the horizontal processing strip and the size of the vertical processing strip are then compared (step 425). If the horizontal processing strip is bigger than the vertical processing strip, that is, it comprises a greater number of rows than the vertical processing strip comprises columns, the method continues to step 430. If the vertical processing strip is bigger than the horizontal processing strip, that is, it comprises a greater number of columns than the horizontal processing strip comprises rows, the method continues to step 450.
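The strip-selection logic of steps 415 to 425 reduces to finding the longest run of adjacent rows (or columns) whose quality ratio clears the threshold. A sketch follows; how the per-row quality ratio itself is computed is defined elsewhere in the patent and is taken as given here:

```python
import numpy as np

def largest_strip(quality, q_threshold=0.90):
    """Group adjacent rows (or columns) whose quality ratio exceeds the
    threshold and return the largest such group as the processing strip.
    `quality` holds one ratio per row/column."""
    good = quality > q_threshold
    best, start = (0, 0), None
    for i, g in enumerate(list(good) + [False]):   # sentinel closes the last run
        if g and start is None:
            start = i
        elif not g and start is not None:
            if i - start > best[1] - best[0]:
                best = (start, i)
            start = None
    return range(*best)    # indices forming the processing strip

q = np.array([0.95, 0.5, 0.99, 0.97, 0.92, 0.3, 0.96])
print(list(largest_strip(q)))  # [2, 3, 4]: the longest run above 0.90
```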
  • At step 430, the mean phase difference Δcr.strip between images φX.itoh.reconnect.mean and φY.itoh.reconnect.mean is calculated for all pixels in the horizontal processing strip (step 430).
  • The phase values of each column in the image φY.itoh.reconnect.mean are then shifted by the corresponding multiple of 2π, yielding image φY.cr.X.strip (step 435).
  • The mean phase difference Δcr.strip is then calculated for all pixels in the vertical processing strip (step 440).
  • The phase values of each row i in the image φX.itoh.reconnect.mean are then shifted in the same manner, yielding image φX.cr.Y.strip (step 445). The method then continues to step 500.
  • At step 450, the mean phase difference Δcr.strip between images φX.itoh.reconnect.mean and φY.itoh.reconnect.mean is calculated for all pixels in the vertical processing strip (step 450).
  • The phase values of each row i in the image φX.itoh.reconnect.mean are then shifted by the corresponding multiple of 2π, yielding image φX.cr.Y.strip (step 455).
  • The mean phase difference is then calculated for all pixels in the horizontal processing strip (step 460).
  • The phase values of each column i in the image φY.itoh.reconnect.mean are then shifted in the same manner, yielding image φY.cr.X.strip (step 465). The method then continues to step 500.
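Either branch performs the same core operation: measure the phase offset between the two unwrapped images inside a processing strip, and remove it from the other image as a multiple of 2π. A hedged sketch follows; whether the offset is resolved per column or once per image is an assumption of this sketch:

```python
import numpy as np

def cross_refer_strip(target, reference, rows):
    """Within the processing-strip `rows` of the reference image, measure the
    mean phase difference to the target image and remove it from the target,
    column by column, as the nearest multiple of 2*pi."""
    out = target.copy()
    diff = (reference[rows] - target[rows]).mean(axis=0)  # per-column offset
    out += 2 * np.pi * np.round(diff / (2 * np.pi))       # nearest 2*pi multiple
    return out

ref = np.zeros((6, 4))
tgt = np.zeros((6, 4))
tgt[:, 2:] -= 2 * np.pi                        # two columns offset by -2*pi
fixed = cross_refer_strip(tgt, ref, slice(1, 4))
print(bool(np.allclose(fixed, ref)))  # True: the column offsets are removed
```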
  • Turning to Figures 5A and 5B, the steps carried out during cross-referring of line segments are shown.
  • Images φX.cr.Y.strip and φY.cr.X.strip are loaded into the processor (step 505).
  • An individual column in image φY.cr.X.strip is then divided into n line-segments, wherein a line-segment is defined as a continuous set of at least three (3) pixels for which the phase gradient is less than gradient threshold φth,LS, similar to that described above with reference to step 300 (step 510).
  • A check is performed to determine if all columns have been processed (step 525). If there are remaining columns to be processed, the method continues to the next column (step 530) and returns to step 510. Once all columns have been processed, the resultant image is output (step 535).
  • An individual row in image φX.cr.Y.strip is also divided into n line-segments, wherein a line-segment is defined as a continuous set of at least three (3) pixels for which the phase gradient is less than the gradient threshold φth,LS, similar to that described above with reference to step 300 (step 540).
  • Once all rows have been processed (step 560), the resultant image φX.cr.Y.tmp is output (step 565).
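The per-segment cross-referral of steps 510 to 520 can be sketched for one column as follows. The per-segment shift rule used here (nearest 2π multiple of the mean difference to the other image) is an assumption of the sketch; the patent defines the exact criterion in steps 515 to 520:

```python
import numpy as np

def cross_refer_segments(col, ref_col, grad_th=2 * np.pi):
    """Split one column into line-segments at large phase gradients, then
    shift each segment by the 2*pi multiple that best matches the
    corresponding pixels of the other (reference) image. The shift rule is
    an illustrative assumption."""
    col = col.astype(float).copy()
    breaks = np.where(np.abs(np.diff(col)) >= grad_th)[0] + 1
    for a, b in zip([0, *breaks], [*breaks, len(col)]):
        offset = (col[a:b] - ref_col[a:b]).mean()
        col[a:b] -= 2 * np.pi * np.round(offset / (2 * np.pi))
    return col

ref = np.zeros(12)
col = np.concatenate([np.zeros(6), np.full(6, 2 * np.pi)])
print(bool(np.allclose(cross_refer_segments(col, ref), 0.0)))  # True
```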
  • The gradient threshold φth,LS is redefined as φth,LS/2, and counter i is incremented (step 570).
  • the counter i is then compared to a threshold value to determine how many iterations of the cross-referring line segment step 500 are to be performed (step 575).
  • The threshold value is set to a value of six (6), indicating that six (6) iterations are desired. If counter i is below the threshold, the method returns to step 505 to perform another iteration of the cross-referring line segment step 500, using the new values established at step 570.
  • At step 580, it is determined if all required 2D phase images have been processed. If multiple 2D phase images are to be processed, the method 100 repeats until all required 2D phase images have been unwrapped, wherein each unwrapped 2D phase image is stored into a set of unwrapped 2D phase images, hereinafter referred to as a stack.
  • If a single 2D phase image has been processed, images φX.cr.Y and φY.cr.X will each comprise a single unwrapped 2D phase image. If multiple 2D phase images have been processed, images φX.cr.Y and φY.cr.X will each comprise a stack of unwrapped 2D phase images.
  • Method 100 can be extended to unwrap a 3D phase image volume, wherein the 3D phase image volume comprises a stack of 2D phase images, each of which is hereinafter referred to as a slice.
  • each slice is processed according to method 100 described above.
  • The resultant output (unwrapped) slices are stored in stacks φX.cr.Y.stack and φY.cr.X.stack, as described above.
  • Phase inconsistencies may exist across the plurality of unwrapped slices stored in stacks φX.cr.Y.stack and φY.cr.X.stack along the Z-direction.
  • an extension of method 100 is performed.
  • Turning now to Figure 6, a method for phase unwrapping a three-dimensional (3D) phase image volume is shown and is generally identified by reference numeral 600.
  • Each slice in the 3D phase image volume is unwrapped using method 100 (described above), and a stack of unwrapped slices is created (step 700).
  • Inter-slice phase discontinuities along the Z-direction are removed by cross-referring the slices in the X-Z and Y-Z planes (step 800).
  • the 2D cross-referring line-segment procedure (step 500 described above) is performed on the slices in the X-Y plane (step 900), and the method ends.
  • Turning to Figure 7A, the steps carried out during the stacking of unwrapped slices (step 700) are shown.
  • Stacks φX.cr.Y.stack and φY.cr.X.stack are loaded into the processor (step 705).
  • a baseline index is then selected either automatically or by a user (step 710).
  • the baseline index identifies a particular row or column in each slice for processing (hereinafter referred to as the "baseline").
  • a baseline index of row 128 identifies row 128 in each slice for processing.
  • The mean phase value of the baseline is calculated for each slice (step 715), thereby creating a dataset of mean phase values.
  • The dataset of mean phase values is unwrapped using the 1D Itoh algorithm (step 720).
  • The unwrapped mean phase values are then shifted to ensure the central slice is in the range of (−π, π) (step 725).
  • The difference Δbas.slice between the mean phase value and the unwrapped mean phase value of the baseline is calculated for each slice (step 730).
  • The phase values of each slice are then shifted by the corresponding difference Δbas.slice to remove inter-slice phase offsets.
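Steps 710 to 730 can be sketched as follows; the baseline row is assumed, and the masked means of the patent are replaced by plain means:

```python
import numpy as np

def align_slices(stack, baseline_row=0):
    """For each slice, take the mean phase of a chosen baseline row, unwrap
    that 1D dataset of means along Z (Itoh, via np.unwrap), re-centre the
    central slice into (-pi, pi), and shift each slice by the difference
    between its unwrapped and original baseline mean (a 2*pi multiple)."""
    means = np.array([s[baseline_row].mean() for s in stack])
    unwrapped = np.unwrap(means)
    centre = unwrapped[len(stack) // 2]
    unwrapped -= 2 * np.pi * np.round(centre / (2 * np.pi))
    delta = unwrapped - means                      # Delta_bas.slice per slice
    return stack + delta[:, None, None]

# A middle slice offset by 2*pi is pulled back in line with its neighbours.
stack = np.stack([np.zeros((4, 4)), np.full((4, 4), 2 * np.pi), np.zeros((4, 4))])
print(bool(np.allclose(align_slices(stack), 0.0)))  # True
```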
  • Turning to step 800, the steps carried out during the cross-referring of 2D images in the X-Z and Y-Z planes are shown. Stacks φX.cr.Y.stack and φY.cr.X.stack are loaded into the processor.
  • The unwrap quality ratio Qs is calculated for each X-Y slice in stacks φX.cr.Y.stack and φY.cr.X.stack (step 810).
  • The unwrap quality ratio Qs is calculated from the ratios Qs,s+1 and Qs,s−1, where Qs,s+1 and Qs,s−1 are the total number of pixels having a phase difference less than 2π between the s-th and (s+1)-th, and the s-th and (s−1)-th slices, respectively, divided by the total number of pixels of the s-th slice with intensities greater than support mask TH_support.
  • A check is then performed to determine if the unwrap quality ratio Qs is greater than an unwrap quality ratio threshold Qs,th for all slices (step 815).
  • The unwrap quality ratio threshold is set to a value of 0.90. If the unwrap quality ratio Qs is greater than the unwrap quality ratio threshold Qs,th for all slices, no further processing is required and the method ends.
  • If the unwrap quality ratio Qs is less than the unwrap quality ratio threshold Qs,th for at least one slice, the method continues to step 820.
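The per-slice quality check of steps 810 to 815 can be sketched as follows. Combining the two neighbour ratios by taking their minimum is an assumption of this sketch; the patent's exact combination rule is not reproduced in the text above:

```python
import numpy as np

def slice_quality(stack, support, tol=2 * np.pi):
    """For each slice s, compute the fraction of supported pixels whose phase
    differs by less than 2*pi from each neighbouring slice, then combine the
    neighbour ratios (minimum here, as an illustrative assumption)."""
    n = len(stack)
    q = np.ones(n)
    for s in range(n):
        ratios = []
        for t in (s - 1, s + 1):
            if 0 <= t < n:
                ok = np.abs(stack[s] - stack[t]) < tol
                ratios.append(ok[support].sum() / support.sum())
        q[s] = min(ratios)
    return q

support = np.ones((4, 4), dtype=bool)
stack = np.zeros((4, 4, 4))
stack[3] += 2.5 * np.pi   # last slice is offset by more than 2*pi
print(slice_quality(stack, support))  # the offending boundary drags down q
```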
  • Each slice in stacks φX.cr.Y.stack and φY.cr.X.stack is then reformatted in the X-Z and Y-Z planes, respectively, resulting in reformatted output stacks (step 820).
  • Each X-Z slice in the reformatted φX.cr.Y.stack is cross-referred to a respective X-Z slice in the reformatted φY.cr.X.stack, similar to method step 400 described above, and the resultant stacks are output.
  • Similarly, each Y-Z slice in the reformatted φY.cr.X.stack is cross-referred to a respective Y-Z slice in the reformatted φX.cr.Y.stack, similar to method step 400 described above, and the resultant stacks are output (steps 830, 835).
  • Each X-Z slice is further processed to cross-refer line segments with a respective X-Z slice, similar to method step 500 described above (steps 840, 845).
  • Similarly, each Y-Z slice is further processed to cross-refer line segments with a respective Y-Z slice, yielding output stacks φZ.cr.(X.cr.Y.stack) and φZ.cr.(Y.cr.X.stack).
  • Turning to step 900, the steps carried out during the cross-referring of 2D images in the X-Y plane are shown. Stacks φZ.cr.(X.cr.Y.stack) and φZ.cr.(Y.cr.X.stack) are loaded into the processor, and the unwrap quality ratio Qs is calculated for each X-Y slice in stacks φZ.cr.(X.cr.Y.stack) and φZ.cr.(Y.cr.X.stack) (step 910).
  • A check is performed to determine if the unwrap quality ratio Qs is greater than an unwrap quality ratio threshold Qs,th for all slices (step 915). If the unwrap quality ratio Qs is greater than the unwrap quality ratio threshold Qs,th for all slices, no further processing is required and the method ends.
  • Otherwise, each X-Y slice in stack φZ.cr.(X.cr.Y.stack) is processed to cross-refer line-segments with a respective X-Y slice in stack φZ.cr.(Y.cr.X.stack), according to method step 500 described above (step 920), and the resultant unwrapped stacks are output.
  • The counter i is used to count the number of iterations of method 600 that have been performed.
  • If counter i is below a threshold value, which in this embodiment is set to a value of six (6), method 600 returns to step 700, using the most recently output stacks. Once the desired number of iterations have been performed, the method ends.
  • TR 45 ms, BW 42.0 kHz, FOV/thickness 256 mm × 256 mm / 1 mm, spacing 0 mm, NEX 1, matrix 256×256×30.
  • Data was acquired using a single channel (birdcage) radio frequency coil.
  • 2D PUROR phase unwrapping:
  • Figure 8A displays the original magnitude (left) and phase (right) images.
  • A mask generated by the more commonly used method is shown, where the same threshold was applied to the original magnitude image (Figure 8B, right). As can be seen, a large number of noise pixels remain and thus the threshold would have to be increased to achieve a similar result to the mask shown in Figure 8B (left).
  • the complex 2D image is a phase image of a human brain generated by a magnetic resonance imaging (MRI) scan.
  • The 2D phase image was unwrapped using the 1D Itoh algorithm along the R/L (X) and S/I (Y) directions (step 300).
  • the resulting images ΦX.itoh and ΦY.itoh are shown in Figure 9A. As can be seen, a number of streaks are present.
  • This typical failure indicator of the conventional 2D unwrapping algorithm is the reason most prior art algorithms have focused on prevention strategies to solve the unwrapping problem.
  • Images ΦX.itoh and ΦY.itoh were processed according to method steps 305 to 350 (described above), and the resultant reconnected images ΦX.itoh.reconnect and ΦY.itoh.reconnect are shown in Figure 9B.
  • a difference image between images ΦX.itoh.reconnect and ΦY.itoh.reconnect is shown for illustrative purposes in the range of (-π, π).
  • parts of the highly dense streaks shown in Figure 9A are removed, particularly in the brain region.
  • Images ΦX.itoh.reconnect and ΦY.itoh.reconnect were processed according to method steps 355 to 395 (described above), and the resultant output images are ΦX.itoh.reconnect.mean and ΦY.itoh.reconnect.mean.
  • FIG. 9C: A difference image between images ΦX.itoh.reconnect.mean and ΦY.itoh.reconnect.mean is shown for illustrative purposes. Images ΦX.itoh.reconnect.mean and ΦY.itoh.reconnect.mean were further processed according to method steps 405 to 420 to identify the corresponding horizontal and vertical processing strips, as shown in Figure 9C.
  • the horizontal processing strip is bigger than the vertical processing strip.
  • the method continued to step 430 (described above) wherein the horizontal processing strip was cross-referred to image ΦY.itoh.reconnect.mean, output as image ΦY.cr.X.strip, which was then used to cross-reference the vertical processing strip to image ΦX.itoh.reconnect.mean, output as image ΦX.cr.Y.strip, according to method steps 430 to 445 (described above).
  • Images ΦX.cr.Y.strip and ΦY.cr.X.strip are shown in Figure 9D.
  • a difference image between images ΦX.cr.Y.strip and ΦY.cr.X.strip is shown for illustrative purposes in the range of (-π, π). As can be seen, most of the streaks within the brain region have been removed. Phase offsets are still observable in other regions, such as the tongue, neck and skull regions.
  • method steps 500 to 565 were repeated according to a threshold value, which in this example was six (6) iterations. As such, method steps 500 to 565 were repeated six (6) times.
  • the phase inconsistencies between the unwrapped slices from two axial slices (1 and 2) and two coronal slices (3 and 4) are shown in Figure 11B, which were obtained from the thirty (30) sagittal slices shown in Figure 11A.
  • the sagittal slices were stacked using the 64th row as the baseline.
  • Figure 11C: the inter-slice phase offsets in the brain region were removed, while the phase offsets in the tongue and neck regions remained.
  • the results of the PUROR and PRELUDE algorithms can be compared qualitatively in Figures 12A-12B, 13A-13B, and 14A-14B (sagittal, coronal, and axial stacks respectively).
  • the hybrid 3D PUROR images shown are from the ΦSlice.cr.(X.cr.Y) volume. It is noted that the phase images in Figures 12A to 14B are displayed in the range of [-4π, 4π].
  • Table 1 also shows the results of the computation time for unwrapping the different phase volumes.
  • the PUROR algorithm is able to unwrap all phase images in approximately the same processing time, while PRELUDE took significantly different computation times for unwrapping the phase images at different SNR levels as well as for phase images acquired in different orientations.
  • the dependency of the PRELUDE computation time on the complexity of the phase volume and SNR has been reported by other publications [21, 22, 28].
  • Table 1: Comparison of the computation times and unwrap-quality ratio of phase volumes when using the PUROR and PRELUDE algorithms.
  • the unwrap-quality ratio of the unwrapped phase volumes is presented as mean ⁇ standard deviation from the 30 slices per phase volume.
  • threshold values are described, those skilled in the art will appreciate that the threshold values may be configured depending on the application of the PUROR algorithm.
  • the check of step 800 may be omitted, such that step 800 proceeds to steps 820 and 825 unconditionally.
  • steps 910 and 915 of step 900 may be omitted.
  • While the PUROR algorithm described above requires four (4) user input parameters (σ, mask THmask, support mask THsupport and baseline index), those skilled in the art will appreciate that these parameters may be estimated.
  • For the noise level σ, a conventional method of calculating the standard deviation of a noise-only region may be used.
  • Otsu's method [35] or a histogram-based method [36] may be used.
  • any known advanced segmentation methods may be used.
  • a smaller region location in the volume of interest may be used.
  • the entire region location in the volume of interest may be used.
  • While the line-segments described above comprise a continuous set of at least three (3) pixels for which the phase gradient is less than a gradient threshold φth,LS, those skilled in the art will appreciate that the location of the poles [18] may be used to divide a row or column into line-segments.
  • Liang ZP. A model-based method for phase unwrapping. IEEE Transactions on Medical Imaging 1996;15(6):893-897.


Abstract

A method for unwrapping a phase image, comprises unwrapping the phase image along X and Y directions; identifying a horizontal processing strip in the unwrapped X image and a vertical processing strip in the unwrapped Y image based on defined criterion; cross-referring the horizontal processing strip with the unwrapped Y image and the vertical processing strip with the unwrapped X image; dividing each column in the cross-referred unwrapped Y image into a plurality of line-segments and dividing each row in the cross-referred unwrapped X image into a plurality of line-segments; and cross-referring the column line-segments with the cross-referred unwrapped X image and cross-referring the row line-segments with the cross-referred unwrapped Y image, thereby to yield unwrapped images X.cr.Y and Y.cr.X.

Description

METHOD FOR PHASE UNWRAPPING
Cross-Reference to Related Application
[0001] This application claims the benefit of U.S. Provisional Application No.
61/580,008 to Liu et al. filed on December 23, 2011, entitled "METHOD FOR PHASE UNWRAPPING", the content of which is incorporated herein by reference in its entirety.
Field of the Invention
[0002] The present invention relates to imaging and in particular to image phase unwrapping.
Background of the Invention
[0003] Imaging techniques such as ultrasound and magnetic resonance (MR) imaging use phase information within received signals in frequency space to reconstruct complex images. Such reconstructed complex images have been acquired for many purposes including, but not limited to, rigid-body motion determination [1-3], chemical shift mapping [4], blood flow measurement in MR angiography [5-7], MR venography [8], field-map calculation [9-10], phase-contrast imaging [11], susceptibility-weighted imaging [12], temperature mapping [13], perfusion measurement [14], and tissue segmentation [15].
[0004] One of the common problems encountered during reconstruction of complex images is phase wrapping. Phase wrapping occurs when phase values fall outside the range (-π, π), thereby introducing 2π discontinuities in the measured phase data, rendering the phase data unusable until the discontinuities are removed by phase unwrapping [16]. Phase unwrapping is often performed by computationally complex, off-line systems under user guidance.
[0005] Itoh discloses a one-dimensional (1D) phase unwrapping algorithm [17] where the true phase values of a complex sequence are retrieved by unwrapping neighboring differences and integrating the unwrapped phase differences. The Itoh algorithm is well known and is typically referred to as the conventional 1D phase unwrapping algorithm. The Itoh algorithm has been extended to two-dimensional (2D) applications by Ghiglia et al. [18].
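A minimal sketch of the conventional 1D Itoh algorithm described above, written as illustrative NumPy (this snippet is not part of the patent disclosure; NumPy's `np.unwrap` implements the same idea):

```python
import numpy as np

def itoh_unwrap_1d(phase):
    """Conventional 1D Itoh unwrapping: wrap each neighboring difference
    back into the principal interval, then integrate (cumulatively sum)
    the wrapped differences starting from the first sample."""
    phase = np.asarray(phase, dtype=float)
    d = np.diff(phase)
    d_wrapped = (d + np.pi) % (2 * np.pi) - np.pi  # principal-value differences
    return np.concatenate(([phase[0]], phase[0] + np.cumsum(d_wrapped)))
```

When the true phase differences are all smaller than π in magnitude, this recovers the original signal exactly, which is the necessary condition discussed in paragraph [0006].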
[0006] Both 1D and 2D conventional phase unwrapping algorithms require a necessary condition to be satisfied; that is, the values of the true phase differences must be in the range of (-π, π). This requirement is often not satisfied due to the presence of noise and temporal or spatial under-sampling of the phase. In two or more dimensions, pixels that do not satisfy the condition are referred to as poles [19]. Ghiglia et al. [18] identify that possible path-dependent effects may arise from the poles. Integrating across a pole causes errors to propagate along the remainder of the unwrapping path, appearing as horizontal or vertical streaks in reconstructed complex images for paths along rows and columns, respectively.
[0007] To avoid the introduction of streaks, a number of prevention strategies have been proposed, aimed specifically at avoiding the crossing of poles along the integration path, such as, for example, the region-growing [20-25], branch-cut, and cutline [19, 26, 27] methods. Alternative prevention strategies, which avoid path integration altogether, have been adopted by other phase unwrapping algorithms, such as, for example, the region-merging [28], model-based [29-32], and minimum L1-norm [16] unwrapping algorithms.
[0008] Improvements in imaging techniques are generally desired. It is therefore an object at least to provide a novel phase unwrapping method for two-dimensional (2D) and three-dimensional (3D) images obtained from MR or other types of phase imaging.
Summary of the Invention
[0009] Accordingly, in one aspect there is provided a method for unwrapping a phase image, comprising unwrapping the phase image along X and Y directions, identifying a horizontal processing strip in the unwrapped X image and a vertical processing strip in the unwrapped Y image based on defined criterion, cross-referring the horizontal processing strip with the unwrapped Y image and the vertical processing strip with the unwrapped X image, dividing each column in the cross-referred unwrapped Y image into a plurality of line-segments and dividing each row in the cross-referred unwrapped X image into a plurality of line-segments, and cross-referring the column line-segments with the cross-referred unwrapped X image and cross-referring the row line-segments with the cross-referred unwrapped Y image, thereby to yield unwrapped images X.cr.Y and Y.cr.X.
[0010] In one embodiment, unwrapping the phase image along the X and Y directions is performed using a one-dimensional (1D) Itoh algorithm. Identifying the horizontal processing strip comprises calculating a quality ratio for each row in the unwrapped X image, grouping neighbor rows if the quality ratio of each of the rows is above a quality ratio threshold, thereby defining a strip, and selecting the strip comprising the greatest number of rows as the horizontal processing strip. Identifying the vertical processing strip comprises calculating a quality ratio for each column in the unwrapped Y image, grouping neighbor columns if the quality ratio of each of the columns is above a quality ratio threshold, thereby defining a strip, and selecting the strip comprising the greatest number of columns as the vertical processing strip.
[0011] In one embodiment, the method further comprises reconnecting the unwrapped X and Y images. In one form, reconnecting the unwrapped X image comprises dividing a row in the unwrapped X image into n line-segments, calculating a phase difference between line-segment i and line-segment i + 1, comparing the phase difference to a threshold value and, if the phase difference is greater than the threshold value, shifting line-segment i + 1 to line-segment n by a defined value, and repeating the steps of dividing, calculating and comparing until all rows in the unwrapped X image have been reconnected. In one form, reconnecting the unwrapped Y image comprises dividing a column in the unwrapped Y image into n line-segments, calculating a phase difference between line-segment i and line-segment i + 1, comparing the phase difference to a threshold value and, if the phase difference is greater than the threshold value, shifting line-segment i + 1 to line-segment n by a defined value, and repeating the steps of dividing, calculating and comparing until all columns in the unwrapped Y image have been reconnected.
[0012] According to another aspect there is provided a method for unwrapping a three-dimensional (3D) phase image comprising a plurality of phase images, the method comprising unwrapping each phase image according to the aforementioned method, and stacking the unwrapped images X.cr.Y and stacking the unwrapped images Y.cr.X, thereby to yield an X.cr.Y stack and a Y.cr.X stack, respectively.
[0013] According to another aspect there is provided a non-transitory, computer-readable medium embodying a computer program having computer program code for execution by a computer to unwrap a phase image, the program code comprising program code for unwrapping the phase image along X and Y directions, program code for identifying a horizontal processing strip in the unwrapped X image and a vertical processing strip in the unwrapped Y image based on defined criterion, program code for cross-referring the horizontal processing strip with the unwrapped Y image and the vertical processing strip with the unwrapped X image, program code for dividing each column in the cross-referred unwrapped Y image into a plurality of line-segments and for dividing each row in the cross-referred unwrapped X image into a plurality of line-segments, and program code for cross-referring the column line-segments with the cross-referred unwrapped X image and cross-referring the row line-segments with the cross-referred unwrapped Y image, thereby to yield unwrapped images X.cr.Y and Y.cr.X.
[0014] According to yet another aspect there is provided a non-transitory, computer-readable medium embodying a computer program having computer program code for execution by a computer to unwrap a three-dimensional phase image comprising a plurality of phase images, the program code comprising program code for unwrapping each phase image according to a method of unwrapping the phase image along X and Y directions, identifying a horizontal processing strip in the unwrapped X image and a vertical processing strip in the unwrapped Y image based on defined criterion, cross-referring the horizontal processing strip with the unwrapped Y image and the vertical processing strip with the unwrapped X image, dividing each column in the cross-referred unwrapped Y image into a plurality of line-segments and dividing each row in the cross-referred unwrapped X image into a plurality of line-segments, and cross-referring the column line-segments with the cross-referred unwrapped X image and cross-referring the row line-segments with the cross-referred unwrapped Y image, thereby to yield unwrapped images X.cr.Y and Y.cr.X, and program code for stacking the unwrapped images X.cr.Y and for stacking the unwrapped images Y.cr.X, thereby to yield an X.cr.Y stack and a Y.cr.X stack, respectively.
[0015] According to yet another aspect there is provided an apparatus for unwrapping a phase image comprising memory for storing the phase image, and processing structure communicating with the memory and being configured to unwrap the phase image along X and Y directions, identify a horizontal processing strip in the unwrapped X image and a vertical processing strip in the unwrapped Y image based on defined criterion, cross-refer the horizontal processing strip with the unwrapped Y image and the vertical processing strip with the unwrapped X image, divide each column in the cross-referred unwrapped Y image into a plurality of line-segments and divide each row in the cross-referred unwrapped X image into a plurality of line-segments, and cross-refer the column line-segments with the cross-referred unwrapped X image and cross-refer the row line-segments with the cross-referred unwrapped Y image, thereby to yield unwrapped images X.cr.Y and Y.cr.X.
[0016] According to still yet another aspect there is provided an apparatus for unwrapping a three-dimensional phase image comprising a plurality of phase images, the apparatus comprising memory for storing the three-dimensional phase image, and processing structure communicating with the memory and being configured to, for each phase image, unwrap the phase image along X and Y directions, identify a horizontal processing strip in the unwrapped X image and a vertical processing strip in the unwrapped Y image based on defined criterion, cross-refer the horizontal processing strip with the unwrapped Y image and the vertical processing strip with the unwrapped X image, divide each column in the cross-referred unwrapped Y image into a plurality of line-segments and divide each row in the cross-referred unwrapped X image into a plurality of line-segments, and cross-refer the column line-segments with the cross-referred unwrapped X image and cross-refer the row line-segments with the cross-referred unwrapped Y image, thereby creating unwrapped images X.cr.Y and Y.cr.X, and stack the unwrapped images X.cr.Y and stack the unwrapped images Y.cr.X, thereby to yield an X.cr.Y stack and a Y.cr.X stack, respectively.
Brief Description of the Drawings
[0017] Embodiments will now be described more fully with reference to the accompanying drawings in which:
[0018] Figure 1 is a flowchart showing steps of a method for phase unwrapping a complex two-dimensional (2D) image;
[0019] Figure 2 is a flowchart showing steps carried out during the loading of the complex 2D phase image and the setting of the initialization values step of Figure 1;
[0020] Figures 3A and 3B are flowcharts showing steps carried out during the unwrapping of the complex 2D image along the X and Y directions step of Figure 1;
[0021] Figures 4A to 4C are flowcharts showing steps carried out during the cross-referring strips step of Figure 1;
[0022] Figures 5A and 5B are flowcharts showing steps carried out during the cross-referring line segments step of Figure 1;
[0023] Figure 6 is a flowchart showing steps of a method for phase unwrapping a complex three-dimensional (3D) image;
[0024] Figure 7A is a flowchart showing steps carried out during the stacking step of Figure 6;
[0025] Figure 7B is a flowchart showing steps carried out during the cross-referring 2D images in the X-Z and Y-Z planes step of Figure 6;
[0026] Figure 7C is a flowchart showing steps carried out during the cross-referring 2D images in the X-Y plane step of Figure 6;
[0027] Figures 8A to 8C show the 15th slice of a sagittal volume acquired with TE = 6 ms pre-processed according to the method of Figure 1;
[0028] Figures 9A to 9F show unwrapped 2D phase images following each step of the method of Figure 1;
[0029] Figure 10 shows unwrapped 2D phase images obtained with different threshold levels according to the method of Figure 1;
[0030] Figures 11A to 11D show a 3D view of a sagittal phase volume acquired with TE = 6 ms processed according to the method of Figure 6;
[0031] Figures 12A and 12B show a comparison of unwrapped sagittal phase volumes processed using hybrid 3D PUROR, PRELUDE-2D, PRELUDE-HY algorithms;
[0032] Figures 13A and 13B show a comparison of unwrapped coronal phase volumes processed using hybrid 3D PUROR, PRELUDE-2D, PRELUDE-HY algorithms;
[0033] Figures 14A and 14B show a comparison of unwrapped axial phase volumes processed using hybrid 3D PUROR, PRELUDE-2D, PRELUDE-HY algorithms; and
[0034] Figures 15A, 15B and 15C show a comparison of unwrapped sagittal phase images using the 2D PUROR and 2D PRELUDE algorithms.
Detailed Description of the Embodiments
[0035] Turning now to Figure 1, a method for phase unwrapping a two-dimensional (2D) phase image of a 2D complex image is shown and is generally identified by reference numeral 100. Initially, the complex 2D image, which comprises a 2D magnitude image and a 2D phase image, is loaded into a processor or other suitable computing/processing structure and initialization values are set (step 200). The 2D phase image is then unwrapped along the X and Y directions to yield X and Y unwrapped images (step 300). A processing strip satisfying defined criterion is identified in each of the X and Y unwrapped images, and each processing strip is cross-referred to the other of the X and Y unwrapped images (step 400). Line-segments are identified in each of the cross-referred unwrapped X and Y images, and the line-segments are cross-referred to the other of the cross-referred unwrapped X and Y images (step 500).
[0036] Turning to Figure 2, steps carried out during loading of the complex 2D image and setting of the initialization values are shown. After the complex 2D image has been loaded into the processor (step 205), a counter is set to a value of zero (step 210). The background noise level σ is then estimated (step 215). In this embodiment, the complex 2D image is filtered using an averaging filter and the background noise level σ is estimated from the 2D magnitude image associated with the filtered complex 2D image by using maximum likelihood estimation [33]. As will be appreciated, the filtering is performed to avoid having to use a high thresholding level [21, 34].
[0037] A mask THmask is then generated (step 220) that is used to filter complex images, wherein only pixels with intensities above mask THmask are processed. In this embodiment, mask THmask is set to the background noise level σ estimated from the 2D magnitude image.
[0038] A support mask THsupport is also generated (step 225) that is used to filter complex images, wherein only pixels with intensities above support mask THsupport are processed. In this embodiment, support mask THsupport is set to a value of:
THsupport = 4σ [eq. 1]
where σ is the background noise level estimated from the 2D magnitude image (step 215).
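Steps 215 to 225 can be sketched as follows. This is an illustrative Python reading of the described embodiment: the corner-patch standard deviation used for σ is a simplifying assumption standing in for the maximum likelihood estimator of [33], and `build_masks` is a hypothetical helper name.

```python
import numpy as np

def build_masks(magnitude, corner=8):
    """Estimate the noise level sigma from a (presumed noise-only) corner
    patch of the magnitude image and derive the two intensity masks used to
    gate further processing: TH_mask = sigma and TH_support = 4*sigma."""
    sigma = magnitude[:corner, :corner].std()  # crude noise estimate (assumption)
    th_mask = sigma                            # mask TH_mask = sigma       [step 220]
    th_support = 4.0 * sigma                   # TH_support = 4*sigma (eq. 1, step 225)
    return magnitude > th_mask, magnitude > th_support
```

Because TH_support is four times TH_mask, every pixel passing the support mask also passes the mask, so the support mask is always a subset of the mask.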
[0039] Turning now to Figure 3A, steps carried out during unwrapping of the complex images along X and Y directions (step 300) are shown. Initially, the complex 2D phase image is unwrapped in the X and Y directions, thereby creating images ΦX.itoh and ΦY.itoh, respectively (step 305). Images ΦX.itoh and ΦY.itoh are then filtered using mask THmask, wherein pixels having an intensity below mask THmask are excluded from further processing (step 308) and the images ΦX.itoh and ΦY.itoh are processed for the purpose of reconnection as will be described.
[0040] An individual row in image ΦX.itoh is then divided into n line-segments, wherein a line-segment is defined as a continuous set of at least three (3) pixels for which the phase gradient is less than a gradient threshold φth,LS, which in this embodiment is set to a value of 2π (step 310). The counter i is set to a value of one (1).
[0041] Once a line-segment is identified, the start point (LSstart) and end point (LSend) are stored. The phase difference ΔφLS,i between the end point LSi,end of the ith line-segment and the start point LSi+1,start of the (i+1)th line-segment is then calculated as:
ΔφLS,i = φi+1,start - φi,end [eq. 2]
where φi,end is the mean value of the last three (3) pixels of the ith line-segment and φi+1,start is the mean value of the first three (3) pixels of the (i+1)th line-segment (step 315). In this embodiment, if the distance between the last pixel LSi,end of the ith line-segment and the first pixel LSi+1,start of the (i+1)th line-segment is greater than five (5) pixels, the method skips to step 330 to process the next line-segment.
[0042] The phase difference ΔφLS,i is then compared to a phase difference threshold, which in this embodiment is set to a value of π/2 (step 320). If the phase difference ΔφLS,i is less than the phase difference threshold, the method continues to step 330.
[0043] If the phase difference ΔφLS,i is greater than the phase difference threshold, a phase discontinuity is deemed to exist due to the presence of poles in between line-segments. As such, all remaining phase values, that is, phase values associated with the (i+1)th line-segment to the nth line-segment, are shifted according to the following equation:
2π × round(ΔφLS,i / 2π) [eq. 3]
where round(...) rounds the value of ΔφLS,i / 2π to the nearest integer (step 325). The method then continues to step 330.
[0044] The counter i is compared to the number of line-segments n in the row to check if all line-segments have been processed (step 330). If there are remaining line-segments to be processed, counter i is incremented by one (1) (step 335) and the method returns to step 315 to process the next line-segment.
[0045] Once all line-segments have been processed, a check is performed to determine if all rows in the image ΦX.itoh have been reconnected (step 340). If there are remaining rows to be reconnected, the next row is selected for processing (step 345) and the method returns to step 310.
[0046] Once all rows have been reconnected, a reconnected image ΦX.itoh.reconnect is output (step 350). The method then returns to step 310, wherein image ΦY.itoh is processed in a similar manner. It will however be appreciated that when the above-described reconnecting method is applied to image ΦY.itoh, the columns of the image ΦY.itoh are reconnected, as opposed to the rows. The resulting reconnected image ΦY.itoh.reconnect is similarly output (step 350). Once the reconnected images have been output, the method continues to step 355.
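The reconnection of a single row (steps 310 to 350) might be sketched as below. This is an illustrative interpretation, not the patent's implementation: the 2π gradient threshold, three-pixel minimum segment length, π/2 jump threshold and five-pixel gap rule follow the described embodiment, and the sign of the eq. 3 shift (subtracting the rounded multiple from the remaining values) is an assumption.

```python
import numpy as np

def reconnect_row(row, grad_th=2 * np.pi, jump_th=np.pi / 2, min_len=3, max_gap=5):
    """Split a row into line-segments at large phase gradients (step 310),
    then remove 2*pi-multiple offsets between consecutive segments by
    shifting all remaining values (steps 315-325)."""
    row = np.asarray(row, dtype=float).copy()
    breaks = np.where(np.abs(np.diff(row)) >= grad_th)[0] + 1
    bounds = [0, *breaks, len(row)]
    # keep only segments of at least min_len pixels
    segs = [(a, b) for a, b in zip(bounds[:-1], bounds[1:]) if b - a >= min_len]
    for (a0, e0), (a1, e1) in zip(segs[:-1], segs[1:]):
        if a1 - e0 > max_gap:      # gap wider than five pixels: skip this pair
            continue
        # mean of last three pixels of segment i vs first three of segment i+1
        dphi = row[a1:a1 + 3].mean() - row[e0 - 3:e0].mean()
        if abs(dphi) > jump_th:    # shift the remainder of the row (eq. 3)
            row[a1:] -= 2 * np.pi * np.round(dphi / (2 * np.pi))
    return row
```

Applying the same routine to each column instead of each row gives the column-wise reconnection used for ΦY.itoh.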
[0047] Turning now to Figure 3B, further steps carried out during unwrapping of the complex images along X and Y directions are shown. The reconnected images ΦX.itoh.reconnect and ΦY.itoh.reconnect are loaded into the processor (step 355) and counter i is set to a value of one (1) (step 360).
[0048] The mean phase value of row i in the reconnected image ΦX.itoh.reconnect is then calculated by averaging the phase value of each pixel in the row (step 365). The mean phase value is added to a dataset containing the mean phase value for each row in the reconnected image ΦX.itoh.reconnect. In this embodiment, only pixels having an intensity greater than support mask THsupport (described above) are included in the calculation of the mean phase value for each row. If none of the pixels in a row have an intensity greater than support mask THsupport, pixels in the row having an intensity greater than mask THmask are used to calculate the mean phase value of that row. If none of the pixels in the row have an intensity greater than mask THmask, the mean phase value of a neighboring row (the neighboring row nearest the center of the image) is used.
[0049] The counter i is compared to the number of rows m in the reconnected image ΦX.itoh.reconnect to check if all rows have been processed (step 370). If there are remaining rows to be processed, counter i is incremented by one (1) (step 375) and the method returns to step 365 to calculate the mean phase value of the next row.
[0050] Once the mean phase value of each row has been calculated, the dataset of mean phase values is unwrapped using the one-dimensional (1D) Itoh algorithm (step 380). In this embodiment, the set of unwrapped mean phase values is shifted by a multiple of 2π to ensure the mean phase value of the center row is in the range of (-π, π).
[0051] The difference Δmean,i between the mean phase value and the unwrapped mean phase value is then calculated for each row i (step 385).
[0052] The phase values of each row i are shifted according to the following equation:
2π × round(Δmean,i / 2π) [eq. 4]
where round(...) rounds the value of Δmean,i / 2π to the nearest integer (step 390) and the resultant image ΦX.itoh.reconnect.mean is output (step 395).
[0053] The method then returns to step 360, wherein reconnected image ΦY.itoh.reconnect is processed in a similar manner. It will however be appreciated that when the above-described method is applied to the reconnected image ΦY.itoh.reconnect, the columns are processed, as opposed to the rows. The resultant image ΦY.itoh.reconnect.mean is similarly output (step 395).
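Steps 360 to 395 (row-mean computation, 1D unwrapping of the mean profile, and the per-row 2π shift of eq. 4) can be sketched as follows. The support-mask pixel selection and neighbor-row fallback of paragraph [0048] are omitted for brevity, so this is a simplified illustration rather than the disclosed method.

```python
import numpy as np

def mean_correct_rows(img):
    """Unwrap the profile of row means with the 1D Itoh algorithm, then shift
    each row by the 2*pi multiple nearest its deviation from the unwrapped
    profile (eq. 4)."""
    means = img.mean(axis=1)                  # step 365 (mask-based selection omitted)
    unwrapped = np.unwrap(means)              # step 380 (1D Itoh on the mean profile)
    # re-center so the middle row's unwrapped mean lies in (-pi, pi)  [step 380]
    mid = unwrapped[len(unwrapped) // 2]
    unwrapped -= 2 * np.pi * np.round(mid / (2 * np.pi))
    delta = means - unwrapped                 # step 385
    return img - 2 * np.pi * np.round(delta / (2 * np.pi))[:, None]  # eq. 4
```

Transposing the input and output applies the same correction column-wise, as described for ΦY.itoh.reconnect.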
[0054] Steps carried out during cross-referring processing strips (step 400) are shown in Figure 4A. Images ΦX.itoh.reconnect.mean and ΦY.itoh.reconnect.mean are loaded into the processor (step 405). An unwrap quality ratio Qi,y is calculated for each row i in the image ΦX.itoh.reconnect.mean (step 410). In this embodiment, the unwrap quality ratio Qi,y for row i is calculated as:
Qi,y = Qi,i+1 × Qi,i-1 [eq. 5]
where Qi,i+1 and Qi,i-1 are the total number of pixels having a phase difference less than 2π between the ith and (i+1)th and the ith and (i-1)th rows, respectively, divided by the total number of pixels in the ith row. In this embodiment, only pixels having an intensity greater than support mask THsupport (described above) are included in the calculation of the unwrap quality ratio. Similarly, an unwrap quality ratio Qi,x is calculated for each column i in the image ΦY.itoh.reconnect.mean.
[0055] The quality ratios for all rows in the image ΦX.itoh.reconnect.mean are compared to a threshold quality ratio Qthreshold, which in this embodiment is set to a value of 0.90. If adjacent rows have quality ratios greater than the threshold quality ratio Qthreshold, they are grouped together, thereby creating a group of rows, hereinafter referred to as a horizontal strip. A horizontal strip likely to be free of wrapping error is identified as the horizontal processing strip according to defined criterion. In this embodiment, the horizontal processing strip is the horizontal strip that contains the greatest number of rows. The horizontal processing strip is identified by comparing the size of each horizontal strip identified in the image ΦX.itoh.reconnect.mean, and the largest horizontal strip (the strip comprising the greatest number of rows) is identified as the horizontal processing strip (step 420).
[0056] Similarly, the quality ratios for all columns in the image ΦY.itoh.reconnect.mean are compared to the threshold quality ratio Qthreshold. If adjacent columns have quality ratios greater than the threshold quality ratio Qthreshold, they are grouped together, thereby creating a group of columns, hereinafter referred to as a vertical strip. A vertical strip likely to be free of wrapping error is identified as the vertical processing strip according to defined criterion. In this embodiment, the vertical processing strip is the vertical strip that contains the greatest number of columns. The vertical processing strip is identified by comparing the size of each vertical strip identified in the image ΦY.itoh.reconnect.mean, and the largest strip (the strip comprising the greatest number of columns) is identified as the vertical processing strip (step 420).
[0057] The size of the horizontal processing strip and the size of the vertical processing strip are then compared (step 425). If the horizontal processing strip is bigger than the vertical processing strip, that is, it comprises a greater number of rows than the vertical processing strip comprises columns, the method continues to step 430. If the vertical processing strip is bigger than the horizontal processing strip, that is, it comprises a greater number of columns than the horizontal processing strip comprises rows, the method continues to step 450.
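The quality-ratio computation of eq. 5 and the selection of the largest contiguous strip (steps 410 to 425) might look like the following sketch. The support-mask gating is omitted, and the treatment of the first and last rows (which have only one neighbor) is an assumption; `largest_good_strip` is a hypothetical helper name.

```python
import numpy as np

def largest_good_strip(img, q_threshold=0.90):
    """Return (start, stop) of the largest contiguous run of rows whose
    unwrap quality ratio Q_i,y = Q_{i,i+1} * Q_{i,i-1} (eq. 5) exceeds
    q_threshold. Boundary rows use only their single available neighbor."""
    # fraction of pixels per row pair whose phase difference is below 2*pi
    ok_up = (np.abs(np.diff(img, axis=0)) < 2 * np.pi).mean(axis=1)
    q = np.ones(img.shape[0])
    q[:-1] *= ok_up    # Q_{i,i+1}
    q[1:] *= ok_up     # Q_{i,i-1}
    good = q > q_threshold
    best, run_start, best_span = (0, 0), None, 0
    for i, g in enumerate(np.append(good, False)):  # sentinel closes final run
        if g and run_start is None:
            run_start = i
        elif not g and run_start is not None:
            if i - run_start > best_span:
                best, best_span = (run_start, i), i - run_start
            run_start = None
    return best
```

Running the same routine on the transposed image yields the vertical processing strip, and comparing the two spans implements the size check of step 425.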
[0058] Turning to Figure 4B, further steps carried out during cross-referring processing strips are shown. As mentioned previously, if the horizontal processing strip is bigger than the vertical processing strip, the method continues to step 430. During step 430, the mean phase difference Δc between images Φ_X.Itoh.reconnect.mean and Φ_Y.Itoh.reconnect.mean is calculated for all pixels in the horizontal processing strip (step 430). The phase values of each column in the image Φ_Y.Itoh.reconnect.mean are then shifted according to the following equation:

2π × round(Δc / 2π) [eq. 6]

where round(...) rounds the value of Δc / 2π to the nearest integer, and the resultant image Φ_Y.cr.X.strip is output (step 435).
[0059] The mean phase difference Δr between images Φ_X.Itoh.reconnect.mean and Φ_Y.cr.X.strip is calculated for all pixels in the vertical processing strip (step 440). The phase values of each row i in the image Φ_X.Itoh.reconnect.mean are then shifted according to the following equation:

2π × round(Δr / 2π) [eq. 7]

where round(...) rounds the value of Δr / 2π to the nearest integer, and the resultant image Φ_X.cr.Y.strip is output (step 445). The method then continues to step 500.
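The column-wise cross-referring of steps 430-435 can be sketched as below (the row-wise pass of steps 440-445 is the same operation applied to the transpose). The array layout and the optional reliability mask are our assumptions, not taken from the patent:

```python
import numpy as np

def cross_refer_columns(ref, img, row_start, row_stop, mask=None):
    """Shift each column of `img` by an integer multiple of 2*pi so that,
    within rows [row_start, row_stop) (the processing strip), it agrees
    with `ref` on average, as in eq. 6. `mask` optionally restricts the
    mean to reliable pixels."""
    out = img.copy()
    for c in range(img.shape[1]):
        diff = ref[row_start:row_stop, c] - img[row_start:row_stop, c]
        if mask is not None:
            m = mask[row_start:row_stop, c]
            if not m.any():
                continue
            diff = diff[m]
        delta = diff.mean()
        # shift by 2*pi * round(delta / 2*pi), the integer-cycle correction
        out[:, c] += 2 * np.pi * np.round(delta / (2 * np.pi))
    return out
```

Because the shift is quantized to whole cycles, sub-2π disagreements between the two images are deliberately left untouched.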
[0060] Turning to Figure 4C, further steps carried out during cross-referring processing strips are shown. As mentioned previously, if the vertical processing strip is bigger than the horizontal processing strip, the method continues to step 450. During step 450, the mean phase difference Δr between images Φ_Y.Itoh.reconnect.mean and Φ_X.Itoh.reconnect.mean is calculated for all pixels in the vertical processing strip (step 450). The phase values of each row i in the image Φ_X.Itoh.reconnect.mean are then shifted according to the following equation:

2π × round(Δr / 2π) [eq. 8]

where round(...) rounds the value of Δr / 2π to the nearest integer, and the resultant image Φ_X.cr.Y.strip is output (step 455).

[0061] The mean phase difference Δc between images Φ_Y.Itoh.reconnect.mean and Φ_X.cr.Y.strip is calculated for all pixels in the horizontal processing strip (step 460). The phase values of each column c in the image Φ_Y.Itoh.reconnect.mean are then shifted according to the following equation:

2π × round(Δc / 2π) [eq. 9]

where round(...) rounds the value of Δc / 2π to the nearest integer, and the resultant image Φ_Y.cr.X.strip is output (step 465). The method then continues to step 500.
[0062] Turning now to Figures 5A and 5B, steps carried out during cross-referring line segments are shown. Initially, images Φ_X.cr.Y.strip and Φ_Y.cr.X.strip are loaded into the processor (step 505). An individual column in image Φ_Y.cr.X.strip is then divided into n line-segments, wherein a line-segment is defined as a continuous set of at least three (3) pixels for which the phase gradient is less than a gradient threshold φ_th,LS, similar to that described above with reference to step 300 (step 510).
[0063] The mean phase difference Δ_LS between images Φ_Y.cr.X.strip and Φ_X.cr.Y.strip is calculated for each line-segment (step 515). The phase values of each line-segment are then shifted according to the following equation:

2π × round(Δ_LS / 2π) [eq. 10]

where round(...) rounds the value of Δ_LS / 2π to the nearest integer (step 520).
[0064] A check is performed to determine if all columns have been processed (step 525). If there are remaining columns to be processed, the method continues to the next column (step 530) and returns to step 510. Once all columns have been processed, the resultant image Φ_Y.cr.X.tmp is output (step 535).
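Steps 510-535 can be sketched per column as below. The segment definition follows the at-least-three-pixel rule stated in the text; the helper names and the exact gradient test are our assumptions:

```python
import numpy as np

def line_segments(line, grad_thresh, min_len=3):
    """Split a 1-D phase profile into segments of at least `min_len`
    consecutive pixels whose adjacent phase gradient stays below
    `grad_thresh` (step 510). Yields (start, stop), stop exclusive."""
    breaks = np.flatnonzero(np.abs(np.diff(line)) >= grad_thresh)
    edges = np.concatenate(([0], breaks + 1, [len(line)]))
    for s, e in zip(edges[:-1], edges[1:]):
        if e - s >= min_len:
            yield (int(s), int(e))

def cross_refer_segments(ref_col, col, grad_thresh):
    """Shift every line-segment of `col` by an integer multiple of 2*pi
    toward the corresponding pixels of `ref_col` (steps 515-520)."""
    out = col.copy()
    for s, e in line_segments(col, grad_thresh):
        delta = (ref_col[s:e] - col[s:e]).mean()
        out[s:e] += 2 * np.pi * np.round(delta / (2 * np.pi))
    return out
```

The row-wise pass of steps 540-565 would apply the same `cross_refer_segments` to each row instead of each column.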
[0065] An individual row in image Φ_X.cr.Y.strip is also divided into n line-segments, wherein a line-segment is defined as a continuous set of at least three (3) pixels for which the phase gradient is less than a gradient threshold φ_th,LS, similar to that described above with reference to step 300 (step 540).
[0066] The mean phase difference Δ_LS between images Φ_X.cr.Y.strip and Φ_Y.cr.X.tmp is calculated for each line-segment (step 545). The phase values of each line-segment are then shifted according to the following equation:

2π × round(Δ_LS / 2π) [eq. 11]

where round(...) rounds the value of Δ_LS / 2π to the nearest integer (step 550).

[0067] A check is performed to determine if all rows have been processed (step 555). If there are remaining rows to be processed, the method continues to the next row (step 560) and returns to step 540. Once all rows have been processed, the resultant image Φ_X.cr.Y.tmp is output (step 565).
[0068] The images Φ_Y.cr.X.tmp and Φ_X.cr.Y.tmp are then designated as images Φ_Y.cr.X.strip and Φ_X.cr.Y.strip, respectively, the gradient threshold φ_th,LS is redefined as φ_th,LS/2, and the counter i is incremented (step 570). The counter i is then compared to a threshold value to determine how many iterations of the cross-referring line-segment step 500 are to be performed (step 575). In this embodiment, the threshold value is set to a value of six (6), indicating that six (6) iterations are desired. If the counter is below the threshold, the method returns to step 505 to perform another iteration of the cross-referring line-segment step 500, using the new values established at step 570.
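The iteration logic of steps 570-575 amounts to the loop below. `pass_fn` stands in for one full cross-referring line-segment pass (steps 505-565); the argument order and the alternation of the two images are our assumptions:

```python
def iterate_cross_referring(phi_ycx, phi_xcy, pass_fn, grad_thresh, n_iter=6):
    """Driver for the cross-referring line-segment step 500: run one pass
    over each image, halve the gradient threshold (step 570), and repeat
    n_iter times (step 575; six in the described embodiment)."""
    for _ in range(n_iter):
        # columns of Phi_Y.cr.X referred to Phi_X.cr.Y, then rows of
        # Phi_X.cr.Y referred to the updated Phi_Y.cr.X
        phi_ycx = pass_fn(phi_xcy, phi_ycx, grad_thresh)
        phi_xcy = pass_fn(phi_ycx, phi_xcy, grad_thresh)
        grad_thresh /= 2.0
    return phi_xcy, phi_ycx
```

Halving the threshold each iteration progressively relaxes which pixels qualify as a trusted line-segment, so later iterations can correct regions the earlier, stricter passes skipped.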
[0069] Once the cross-referring line-segment step 500 has been repeated for the desired number of iterations, the method continues to step 580, where it is determined if all required 2D phase images have been processed (step 580). If multiple 2D phase images are to be processed, the method 100 repeats until all required 2D phase images have been unwrapped, wherein each unwrapped 2D phase image is stored into a set of unwrapped 2D phase images, hereinafter referred to as a stack. Once all required 2D phase images have been unwrapped, the method 100 ends, with the resultant output of the method being identified as images Φ_X.cr.Y and Φ_Y.cr.X. As will be appreciated, if only one 2D phase image has been processed, images Φ_X.cr.Y and Φ_Y.cr.X will each comprise a single unwrapped 2D phase image. If multiple 2D phase images have been processed, images Φ_X.cr.Y and Φ_Y.cr.X will each comprise a stack of unwrapped 2D phase images.
[0070] Method 100 can be extended to unwrap a 3D phase image volume, wherein the 3D phase image volume comprises a stack of 2D phase images, each of which is hereinafter referred to as a slice. In this embodiment, each slice is processed according to method 100 described above. The resultant output (unwrapped) slices are stored in stacks Φ_X.cr.Y and Φ_Y.cr.X, as described above. As will be appreciated, phase inconsistencies may exist across the plurality of unwrapped slices stored in stacks Φ_X.cr.Y and Φ_Y.cr.X along the Z-direction. To remove the phase inconsistencies between the unwrapped slices in the stacks Φ_X.cr.Y and Φ_Y.cr.X, an extension of method 100 is performed.

[0071] Turning to Figure 6, a method for phase unwrapping a three-dimensional
(3D) phase image volume is shown and is generally identified by reference numeral 600. Each slice in the 3D phase image volume is unwrapped using method 100 (described above), and a stack of unwrapped slices is created (step 700). Inter-slice phase discontinuities along the Z-direction are removed by cross-referring the slices in the X-Z and Y-Z planes (step 800). The 2D cross-referring line-segment procedure (step 500 described above) is performed on the slices in the X-Y plane (step 900), and the method ends.
[0072] Turning to Figure 7A, steps carried out during the stacking of unwrapped slices (step 700) are shown. As can be seen, stacks Φ_X.cr.Y and Φ_Y.cr.X are loaded into the processor (step 705). A baseline index is then selected, either automatically or by a user (step 710). The baseline index identifies a particular row or column in each slice for processing (hereinafter referred to as the "baseline"). For example, a baseline index of row 128 identifies row 128 in each slice for processing.
[0073] The mean phase value of the baseline is calculated for each slice in stacks Φ_X.cr.Y and Φ_Y.cr.X (step 715), thereby creating a dataset of mean phase values. Once the mean phase value of each slice has been calculated, the dataset of mean phase values is unwrapped using the 1D Itoh algorithm (step 720).
[0074] The unwrapped mean phase values are then shifted to ensure the central slice is in the range of (−π, π) (step 725). The difference Δ_bas.slice between the mean phase value and the unwrapped mean phase value of the baseline is calculated for each slice (step 730). The phase values of each slice are then shifted according to the following equation:

2π × round(Δ_bas.slice / 2π) [eq. 12]

where round(...) rounds the value of Δ_bas.slice / 2π to the nearest integer (step 735). The resultant stacks Φ_X.cr.Y.stack and Φ_Y.cr.X.stack are stored and output (step 735), and the method then continues to step 800.
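Steps 705-735 can be sketched as below. `itoh_1d` is the classic 1-D Itoh integrator (equivalent to `numpy.unwrap`); the (slice, row, column) axis ordering of the stack is our assumption:

```python
import numpy as np

def itoh_1d(phase):
    """1-D Itoh unwrapping: wrap each adjacent difference into one cycle,
    then integrate the wrapped differences."""
    d = np.diff(phase)
    d = d - 2 * np.pi * np.round(d / (2 * np.pi))
    return np.concatenate(([phase[0]], phase[0] + np.cumsum(d)))

def stack_slices(stack, baseline_row):
    """Remove inter-slice 2*pi offsets along Z using the mean phase of a
    baseline row per slice (steps 705-735). `stack` is (n_slices, M, N)."""
    means = stack[:, baseline_row, :].mean(axis=1)          # step 715
    unwrapped = itoh_1d(means)                              # step 720
    centre = unwrapped[len(unwrapped) // 2]
    unwrapped -= 2 * np.pi * np.round(centre / (2 * np.pi)) # step 725
    delta = unwrapped - means                               # step 730
    shift = 2 * np.pi * np.round(delta / (2 * np.pi))       # eq. 12, step 735
    return stack + shift[:, None, None]
```

Because each slice is shifted by a single whole-cycle constant, within-slice structure is preserved; only the inter-slice offsets are corrected.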
[0075] Turning to Figure 7B, steps carried out during the cross-referring of 2D images in the X-Z and Y-Z planes (step 800) are shown. As can be seen, stacks Φ_X.cr.Y.stack and Φ_Y.cr.X.stack are loaded into the processor (step 805). The unwrap quality ratio Qs is calculated for each X-Y slice in stacks Φ_X.cr.Y.stack and Φ_Y.cr.X.stack (step 810). In this embodiment, the unwrap quality ratio Qs is calculated as:

Qs = min(Q_s,s+1, Q_s,s−1) [eq. 13]

where Q_s,s+1 and Q_s,s−1 are the total number of pixels having a phase difference less than 2π between the s-th and (s+1)-th, and the s-th and (s−1)-th slices, respectively, divided by the total number of pixels of the s-th slice with intensities greater than the support mask TH_support.
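Equation 13 did not survive extraction intact, so the sketch below is one plausible implementation of the prose description; in particular, combining the two neighbour ratios with `min` is our reading, not stated explicitly:

```python
import numpy as np

def unwrap_quality(stack, magnitude, th_support):
    """Per-slice unwrap-quality ratio Qs: for slice s, count supported
    pixels whose phase differs by less than 2*pi from the same pixel in a
    neighbouring slice, normalised by the supported-pixel count
    (magnitude > th_support). Boundary slices use their one neighbour."""
    n = stack.shape[0]
    qs = np.ones(n)
    for s in range(n):
        support = magnitude[s] > th_support
        total = support.sum()
        if total == 0:
            continue
        ratios = []
        for t in (s - 1, s + 1):
            if 0 <= t < n:
                ok = np.abs(stack[s] - stack[t]) < 2 * np.pi
                ratios.append((ok & support).sum() / total)
        if ratios:
            qs[s] = min(ratios)  # assumed combination of the two neighbours
    return qs
```

A slice whose Qs falls below the 0.90 threshold of step 815 would then trigger the X-Z/Y-Z reformatting of step 820.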
[0076] A check is then performed to determine if the unwrap quality ratio Qs is greater than an unwrap quality ratio threshold Q_s,th for all slices (step 815). In this embodiment, the unwrap quality ratio threshold is set to a value of 0.90. If the unwrap quality ratio Qs is greater than the unwrap quality ratio threshold Q_s,th for all slices, no further processing is required and the method ends.
[0077] If the unwrap quality ratio Qs is less than the unwrap quality ratio threshold Q_s,th for at least one slice, the method continues to step 820. Each slice in stacks Φ_X.cr.Y.stack and Φ_Y.cr.X.stack is then reformatted in the X-Z and Y-Z plane, respectively, resulting in output stacks Φ_XZ.XY and Φ_YZ.YX (step 820).
[0078] The initial 3D phase image volume is separately processed using the 1D Itoh algorithm to unwrap each phase image along the Z-direction. The result is a 3D matrix Φ_Z.Itoh having a size M x N x Total-number-of-slices, wherein each X-Z, Y-Z and X-Y slice is reformatted by fixing the index of Y, X and Z, respectively. The reformatted X-Z and Y-Z slices from matrix Φ_Z.Itoh introduce information along the Z-direction. The reformatted X-Z and Y-Z slices are stored as stacks Φ_XZ.ZI and Φ_YZ.ZI, respectively, which are used to remove the phase-offset along the Z-direction (step 825).
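With the volume held as a NumPy array, the 1-D Itoh unwrap along Z and the reformatting by fixing one index might look like this; the (Z, Y, X) axis convention and the array shapes are our assumptions:

```python
import numpy as np

# Wrapped volume, assumed stored as (Z, Y, X).
vol = np.random.default_rng(0).uniform(-np.pi, np.pi, (30, 256, 256))

# Phi_Z.Itoh: 1-D Itoh unwrapping of every (y, x) profile along Z;
# numpy.unwrap implements exactly this wrapped-difference integration.
phi_z_itoh = np.unwrap(vol, axis=0)

# Reformatting into orthogonal slice stacks by fixing one index:
xy_slices = phi_z_itoh                     # fix Z: xy_slices[z] is an X-Y slice
xz_slices = phi_z_itoh.transpose(1, 0, 2)  # fix Y: xz_slices[y] is a Z-X slice
yz_slices = phi_z_itoh.transpose(2, 0, 1)  # fix X: yz_slices[x] is a Z-Y slice
```

The transposes are views, so the reformatting itself costs no copying; each reformatted slice simply exposes the Z-direction as an in-plane axis for the subsequent cross-referring.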
[0079] Each X-Z slice in stack Φ_XZ.XY is cross-referred to a respective X-Z slice in stack Φ_XZ.ZI, similar to method step 400 described above, and the resultant stacks Φ_X.cr.Z.strip and Φ_Z.cr.X.strip are output. Similarly, each Y-Z slice in stack Φ_YZ.YX is cross-referred to a respective Y-Z slice in stack Φ_YZ.ZI, similar to method step 400 described above, and the resultant stacks Φ_Y.cr.Z.strip and Φ_Z.cr.Y.strip are output (steps 830, 835).
[0080] Each X-Z slice in stack Φ_X.cr.Z.strip is further processed to cross-refer line segments with a respective X-Z slice in stack Φ_Z.cr.X.strip, similar to method step 500 described above (steps 840, 845). Similarly, each Y-Z slice in stack Φ_Y.cr.Z.strip is further processed to cross-refer line segments with a respective Y-Z slice in stack Φ_Z.cr.Y.strip, similar to method step 500 described above (steps 840, 845). The resultant stacks Φ_Z.cr.(X.cr.Y.stack) and Φ_Z.cr.(Y.cr.X.stack), respectively, are output (step 850) and the method continues to step 900.

[0081] Turning to Figure 7C, steps carried out during the cross-referring of 2D images in the X-Y plane (step 900) are shown. Stacks Φ_Z.cr.(X.cr.Y.stack) and Φ_Z.cr.(Y.cr.X.stack) are loaded into the processor. The unwrap quality ratio Qs is calculated for each X-Y slice in stacks Φ_Z.cr.(X.cr.Y.stack) and Φ_Z.cr.(Y.cr.X.stack) (step 910). A check is performed to determine if the unwrap quality ratio Qs is greater than an unwrap quality ratio threshold Q_s,th for all slices (step 915). If the unwrap quality ratio Qs is greater than the unwrap quality ratio threshold Q_s,th for all slices, no further processing is required and the method ends.
[0082] If the unwrap quality ratio Qs is less than the unwrap quality ratio threshold Q_s,th for at least one X-Y slice, the method continues to step 920. During step 920, each X-Y slice in stack Φ_Z.cr.(X.cr.Y.stack) is processed to cross-refer line-segments with a respective X-Y slice in stack Φ_Z.cr.(Y.cr.X.stack), according to method step 500 described above (step 920), and the resultant unwrapped stacks Φ_X.cr.Y.cr.Z and Φ_Y.cr.X.cr.Z are output. The counter i is used to count the number of iterations method 600 has been performed. If the counter i is below a threshold value, which in this embodiment is set to a value of six (6), method 600 returns to step 700, using the most recently output stacks Φ_X.cr.Y.cr.Z and Φ_Y.cr.X.cr.Z. Once the desired number of iterations have been performed, the method ends.
[0083] In the following, examples of using methods 100 and 600 to unwrap 2D and 3D phase images, respectively, are described. For simplicity, the application of method 100 to a 2D phase image is hereinafter referred to as a "2D phase unwrapping recursive orthogonal referring (PUROR) algorithm" and the application of method 600 to a 3D phase volume is hereinafter referred to as a "3D PUROR algorithm".
EXAMPLE 1
[0084] The 2D and 3D PUROR algorithms were evaluated using human brain images. The human subject study protocol was approved by the local Research Ethics Board and informed consent was obtained from the volunteers. Shimming was achieved using the manufacturer's auto-shimming tools (a 3.0-T whole-body scanner, GE Signa, GE Medical Systems, Milwaukee, WI). Multi-slice axial, coronal and sagittal brain images were acquired using 3D SPGR with first-order flow compensation at two echo times (TE = 6 ms and 31 ms) using flip angle (FA) = 20°. To evaluate the performance of PUROR under low signal-to-noise ratio (SNR) conditions, three (3) additional image sets were acquired with small FA (10°, 5° and 2°; TE = 16 ms). The remaining parameters were: TR = 45 ms, BW = 42.0 kHz, FOV/thickness = 256 mm x 256 mm / 1 mm, spacing = 0 mm, NEX = 1, matrix = 256 x 256 x 30. Data was acquired using a single-channel (birdcage) radio frequency coil.
Data processing
[0085] Phase unwrapping of the acquired images was performed using the
PUROR algorithm and the PRELUDE algorithm [28], for comparison. For both algorithms, the default parameters were used: for PRELUDE 2D unwrapping was done using the "slice by slice" mode and for 3D unwrapping, the default (hybrid) mode was used. For the PUROR algorithm, the parameters defined above in "Implementation" were used; for the hybrid-3D implementation the 64th row of the sagittal and coronal images, and the 128th row of the axial images, were used as the baselines.
[0086] The PUROR unwrapping algorithm was implemented using MATLAB
(MathWorks, Natick, MA). The PRELUDE algorithm implemented in C++ as part of the FSL analysis package (Oxford Center for Functional Magnetic Resonance Imaging of the Brain, United Kingdom) was used. All phase images were unwrapped off-line on the same personal computer with a 2-GHz Athlon processor and 2GB RAM.
Quality measurement of the unwrapped phase volume
[0087] The performance of the hybrid 3D PUROR algorithm and the PRELUDE algorithm was measured quantitatively by calculating the volume unwrap-quality ratio, Qs, using equation 13. As described above, equation 13 is defined as:

Qs = min(Q_s,s+1, Q_s,s−1) [eq. 13]

where Q_s,s+1 (Q_s,s−1) is the total number of pixels with less than 2π phase difference between the s-th and (s+1)-th (s-th and (s−1)-th) slices, divided by the total number of pixels of the s-th slice with intensities greater than the support mask TH_support.

2D PUROR phase unwrapping
[0088] The 15th slice of the thirty (30) sagittal images (TE = 6 ms) is used as an example to illustrate the 2D PUROR algorithm. Figure 8A displays the original magnitude (left) and phase (right) images. The mask in Figure 8B (left) was generated by thresholding the magnitude profile of the filtered complex image (filter kernel = 3x3) with a threshold (mask TH_mask = σ). For comparison, a mask generated by the more commonly used method is shown, where the same threshold was applied on the original magnitude image, Figure 8B (right). As can be seen, a large number of noise pixels remain, and thus the threshold would have to be increased to achieve a similar result as the mask shown in Figure 8B (left). A support mask generated using a higher threshold level (support mask TH_support = 4σ) is shown in Figure 8C. As can be seen, the support mask of Figure 8C excludes most noise-only pixels. The support mask shown in Figure 8C was used in calculating the mean of each line (step 300) and identifying the cross-referring strips (step 400).
[0089] An example of using method 100 to unwrap the phase image of Figure
8A will now be described with reference to Figures 9A to 9F. The MRI phase images are shown in the range of (-4π, 4π).
[0090] The 2D image was loaded (step 200) and unwrapped in both the X and Y directions. In this embodiment, the complex 2D image is a phase image of a human brain generated by a magnetic resonance imaging (MRI) scan. As such, the 2D phase image was unwrapped using the 1D Itoh algorithm along the R/L (X) and S/I (Y) directions (step 300). The resulting images Φ_X.Itoh and Φ_Y.Itoh are shown in Figure 9A. As can be seen, a number of streaks are present. Such streaks are the typical failure indicator of the conventional 2D unwrapping algorithm and are the reason most prior-art algorithms have focused on prevention strategies to solve the unwrapping problem.
[0091] Images Φ_X.Itoh and Φ_Y.Itoh were processed according to method steps 305 to 350 (described above), and the resultant reconnected images Φ_X.Itoh.reconnect and Φ_Y.Itoh.reconnect are shown in Figure 9B. A difference image between images Φ_X.Itoh.reconnect and Φ_Y.Itoh.reconnect is shown for illustrative purposes in the range of (−π, π). As can be seen, parts of the highly dense streaks shown in Figure 9A are removed, particularly in the brain region.

[0092] Images Φ_X.Itoh.reconnect and Φ_Y.Itoh.reconnect were processed according to method steps 355 to 395 (described above), and the resultant output images Φ_X.Itoh.reconnect.mean and Φ_Y.Itoh.reconnect.mean are shown in Figure 9C. A difference image between images Φ_X.Itoh.reconnect.mean and Φ_Y.Itoh.reconnect.mean is shown for illustrative purposes. Images Φ_X.Itoh.reconnect.mean and Φ_Y.Itoh.reconnect.mean were further processed according to method steps 405 to 420 to identify the corresponding horizontal and vertical processing strips, as shown in Figure 9C.
[0093] As can be seen in Figure 9C, the horizontal processing strip is bigger than the vertical processing strip. As such, the method continued to step 430 (described above), wherein the horizontal processing strip was cross-referred to image Φ_X.Itoh.reconnect.mean and output as image Φ_Y.cr.X.strip, which was then used to cross-reference the vertical processing strip to image Φ_X.Itoh.reconnect.mean, output as image Φ_X.cr.Y.strip, according to method steps 430 to 445 (described above). Images Φ_X.cr.Y.strip and Φ_Y.cr.X.strip are shown in Figure 9D. A difference image between images Φ_X.cr.Y.strip and Φ_Y.cr.X.strip is shown for illustrative purposes in the range of (−π, π). As can be seen, most of the streaks within the brain region have been removed. Phase offsets are still observable in other regions, such as the tongue, neck and skull regions.
[0094] Images Φ_X.cr.Y.strip and Φ_Y.cr.X.strip were processed according to method steps 500 to 565 (described above). The resultant images were output as images Φ_X.cr.Y.tmp and Φ_Y.cr.X.tmp, shown in Figure 9E. A difference image between images Φ_X.cr.Y.tmp and Φ_Y.cr.X.tmp is shown for illustrative purposes in the range of (−π, π). As described above, method steps 500 to 565 were repeated according to a threshold value, which in this example was six (6) iterations. As such, method steps 500 to 565 were repeated six (6) times.

[0095] The resultant difference images between images Φ_X.cr.Y.tmp and Φ_Y.cr.X.tmp calculated after each iteration are shown in Figure 9F. The difference images are shown in the range of (−π, π). As can be seen, the pixels with a large difference (> π) between images Φ_Y.cr.X and Φ_X.cr.Y are mainly located along the tissue/air and tissue/tissue boundaries or in the background noisy regions.

The performance of the 2D PUROR algorithm at different threshold levels
[0096] To evaluate the performance of the 2D PUROR algorithm at different noise threshold levels, the thirty (30) sagittal images (TE = 6 ms) were unwrapped with TH_mask = 0, 0.5σ, 1σ and 2σ; in all cases the same support mask was used (support mask TH_support = 4σ). The results for the 15th slice of the Φ_X.cr.Y phase volume unwrapped using the different mask threshold levels are shown in Figure 10. The display of the phase images is in the range of [−4π, 4π]. Smoothly unwrapped phase images were achieved with all four threshold levels. In this example, the computation time was below fifty (50) seconds for all threshold levels (47, 37, 33 and 31 seconds for mask TH_mask = 0, 0.5σ, 1σ and 2σ, respectively).
3D PUROR phase unwrapping
[0097] To illustrate the 3D phase unwrapping method (method 600), the thirty (30) sagittal slices (TE = 6 ms) are used. The thirty (30) sagittal slices were unwrapped using the 2D PUROR algorithm with filter kernel = 3x3, mask TH_mask = σ and support mask TH_support = 4σ. The phase inconsistencies between the unwrapped slices, seen in two axial slices (1 and 2) and two coronal slices (3 and 4), are shown in Figure 11B; these were obtained from the thirty (30) sagittal slices shown in Figure 11A. To remove these phase offsets, the sagittal slices were stacked using the 64th row as the baseline. As shown in Figure 11C, the inter-slice phase offsets in the brain region were removed, while the phase offsets in the tongue and neck regions remained. Following the application of the cross-referring strip and line-segment methods described above, the remaining offsets were removed, resulting in a smooth phase volume as shown in Figure 11D. It is noted that the displays of the phase images in Figures 11A to 11D are in the range of [−4π, 4π].
Comparison of the PUROR and PRELUDE algorithms
[0098] The performance of the PUROR algorithm was evaluated by unwrapping the sagittal, coronal and axial head images acquired with two TEs (6 ms and 31 ms) and compared to the results of the PRELUDE algorithm. For the PUROR algorithm, the masks were generated by using a filter kernel = 3x3 and mask TH_mask = σ. For the PRELUDE algorithm, the default masks generated by the software were used. While more pixels were excluded by the PRELUDE 2D mode, the differences between the masks used by PUROR and PRELUDE hybrid are almost unnoticeable, as shown in Figures 12A to 14B. Figures 12A and 12B show three (3) unwrapped sagittal slices separated by 12 mm for TE = 6 ms and TE = 31 ms, respectively. Figures 13A and 13B show three (3) unwrapped coronal slices separated by 12 mm for TE = 6 ms and TE = 31 ms, respectively. Figures 14A and 14B show three (3) unwrapped axial slices separated by 12 mm for TE = 6 ms and TE = 31 ms, respectively. The results of the PUROR and PRELUDE algorithms can be compared qualitatively in Figures 12A-12B, 13A-13B, and 14A-14B (sagittal, coronal, and axial stacks, respectively). In this example, the hybrid 3D PUROR images shown are from the Φ_slice.cr.(X.cr.Y) volume. It is noted that the displays of the phase images in Figures 12A to 14B are in the range of [−4π, 4π].
[0099] Results of the quantitative comparison of the two techniques (using Qs) are shown in Table 1 (below). While both the PUROR and PRELUDE algorithms successfully unwrap the measured phase images (Qs > 99%), the PRELUDE default mode provided slightly better results than the 3D PUROR algorithm for the axial and coronal phase volumes (TE = 6 ms and TE = 31 ms) and for the sagittal phase volume (TE = 31 ms), and provided slightly worse results for the sagittal phase volume (TE = 6 ms). It will be appreciated that these two algorithms perform differently at the boundary areas (tissue/air and tissue/tissue). Unlike with the PUROR algorithm, phase offsets as well as granulated pixels may appear on the unwrapped phase images when using PRELUDE, for example in the skull and tongue regions shown in Figures 12A and 12B.
[00100] Table 1 also shows the computation times for unwrapping the different phase volumes. The PUROR algorithm is significantly faster than the PRELUDE default mode and faster than the PRELUDE 2D mode when used to unwrap the phase images acquired at TE = 31 ms (SNR ≈ 11). As can also be seen in Table 1, the PUROR algorithm is able to unwrap all phase images in approximately the same processing time, while PRELUDE took significantly different computation times for unwrapping the phase images at different SNR levels as well as for phase images acquired in different orientations. The dependency of the PRELUDE computation time on the complexity of the phase volume and SNR has been reported by other publications [21, 22, 28].
Table 1: Comparison of the computation times and unwrap-quality ratios of phase volumes when using the PUROR STACKING, HYBRID PUROR, 2D PRELUDE and HYBRID PRELUDE algorithms to unwrap axial, sagittal and coronal multi-slice human brain images

[Table 1 values not recoverable from this extraction.]

1 Image matrix size was 256 x 256 x 30.
2 The time (~3 sec per slice) for background noise estimation was not included.
3 The unwrap-quality ratio of the unwrapped phase volumes is presented as mean ± standard deviation from the 30 slices per phase volume.
4 The SNR calculated within the brain region (mean ± standard deviation) from the 30 slices per volume.
[00101] The performance of the PUROR and PRELUDE algorithms was also evaluated by unwrapping phase maps with extremely low SNR, as shown in Figures 15A, 15B and 15C. In this example, the same masks were applied for both the PUROR algorithm and the PRELUDE algorithm. For Figures 15A and 15B, the masks were generated by using a filter kernel = 3x3 and mask TH_mask = 0.5σ. For Figure 15C, the masks were generated by using a filter kernel = 15x15 and mask TH_mask = 0.3σ. The PRELUDE algorithm further excluded the isolated non-brain regions or pixels during processing (Figure 15C). As shown by the magnitude images, the SNR in the brain region decreases as the flip angle is reduced from 10° to 5° to 2°.
[00102] Figures 15A and 15B show that both algorithms perform equally well under extremely low SNR conditions, where the SNR = 5.2 ± 0.4 and SNR = 2.4 ± 0.2 for FA = 10° and FA = 5°, respectively. In the extreme case, where the SNR is close to unity (Figure 15C, SNR = 1.1 ± 0.2), both algorithms fail. For PUROR, the average computation time per slice was less than 1.5 seconds for all values of FA, while the PRELUDE algorithm required 150 seconds and 213 seconds for values of FA = 10° and FA = 5°, respectively. The time required for the FA = 2° phase map was approximately 25 seconds, as more pixels outside the brain were excluded by the mask in Figure 15C. It is noted that the display of the unwrapped phase images in Figures 15A, 15B and 15C is in the range of [−4π, 4π], and the display of the measured phase images and the difference phase image is in the range of [−π, π].
[00103] Although examples of the PUROR algorithm are described using acquired phase volumes of the human brain, those skilled in the art will appreciate that the PUROR algorithm can be used to unwrap phase volumes of other types of organs such as for example the heart, neck, knee, liver, lung, etc.
[00104] Although exemplary threshold values are described, those skilled in the art will appreciate that the threshold values may be configured depending on the application of the PUROR algorithm.
[00105] Those skilled in the art will appreciate that in another embodiment, steps
810 and 815 of step 800 may be omitted, such that step 800 proceeds to steps 820 and 825 unconditionally. Similarly, steps 910 and 915 of step 900 may be omitted.
[00106] Although the PUROR algorithm described above requires four (4) user-input parameters (σ, mask TH_mask, support mask TH_support and baseline index), those skilled in the art will appreciate that these parameters may be estimated. For example, to estimate the noise level σ, a conventional method of calculating the standard deviation of a noise-only region may be used. To set the values for mask TH_mask and support mask TH_support, Otsu's method [35] or a histogram-based method [36] may be used. Alternatively, any known advanced segmentation method may be used. To set the location of the baseline for the purpose of stacking slices, a smaller region located in the volume of interest may be used; alternatively, the entire volume of interest may be used.
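For the threshold-estimation suggestion above, a minimal histogram-based Otsu sketch is shown below. This is our illustrative implementation, not the patent's; in practice a library routine such as scikit-image's `skimage.filters.threshold_otsu` serves the same purpose:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Histogram-based Otsu threshold: choose the bin centre that
    maximises the between-class variance of the two resulting classes,
    as one way to estimate TH_mask / TH_support from magnitude data."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                         # class-0 weight per cut
    w1 = 1.0 - w0                             # class-1 weight per cut
    cum_mean = np.cumsum(p * centers)
    mu_total = cum_mean[-1]
    mu0 = cum_mean / np.where(w0 == 0, 1, w0) # class means (guarded)
    mu1 = (mu_total - cum_mean) / np.where(w1 == 0, 1, w1)
    between = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
    return centers[np.argmax(between)]
```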
[00107] Although the line-segments described above comprise a continuous set of at least three (3) pixels for which the phase gradient is less than a gradient threshold φ_th,LS, those skilled in the art will appreciate that the location of the poles [18] may be used to divide a row or column into line-segments.
[00108] The scope of the claims should not be limited by the preferred embodiments set forth in the examples, but should be given the broadest interpretation consistent with the description as a whole.
[00109] REFERENCES:
[1] Ehman RL, Felmlee JP. Adaptive technique for high-definition MR imaging of moving structures. Radiology 1989;173(1):255-263.
[2] Welch EB, Manduca A, Grimm RC, Ward HA, Jack CR, Jr. Spherical
navigator echoes for full 3D rigid body motion measurement in MRI. Magn
Reson Med 2002;47(1):32-41.
[3] Liu J, Drangova M. Phase-unwrapping algorithm for translation extraction from spherical navigator echoes. Magn Reson Med 2010;63(2):510-516.
[4] Szumowski J, Coshow WR, Li F, Quinn SF. Phase unwrapping in the three- point Dixon method for fat suppression MR imaging. Radiology
1994;192(2):555-561.
[5] Bryant DJ, Payne JA, Firmin DN, Longmore DB. Measurement of flow with
NMR imaging using a gradient pulse and phase difference technique. Journal of computer assisted tomography 1984;8(4):588-593.
[6] Nayler GL, Firmin DN, Longmore DB. Blood flow imaging by cine magnetic resonance. Journal of computer assisted tomography 1986;10(5):715-722.
[7] Salfity MF, Huntley JM, Graves MJ, Marklund O, Cusack R, Beauregard DA.
Extending the dynamic range of phase contrast magnetic resonance velocity imaging using advanced higher-dimensional phase unwrapping algorithms. J
R Soc Interface 2006;3(8):415-427.
[8] Rauscher A, Barth M, Reichenbach JR, Stollberger R, Moser E. Automated unwrapping of MR phase images applied to BOLD MR-venography at 3 Tesla. J Magn Reson Imaging 2003;18(2):175-180.
[9] Jezzard P, Balaban RS. Correction for geometric distortion in echo planar
images from B0 field variations. Magn Reson Med 1995;34(1):65-73.
[10] Cusack R, Papadakis N. New robust 3-D phase unwrapping algorithms:
application to magnetic field mapping and undistorting echoplanar images.
NeuroImage 2002;16(3 Pt 1):754-764.
[11] Duyn JH, van Gelderen P, Li TQ, de Zwart JA, Koretsky AP, Fukunaga M. High-field MRI of brain cortical substructure based on signal phase. Proc Natl Acad Sci U S A 2007;104(28):11796-11801.
[12] Haacke EM, Xu Y, Cheng YC, Reichenbach JR. Susceptibility weighted
imaging (SWI). Magn Reson Med 2004;52(3):612-618.
[13] Cline HE, Hynynen K, Hardy CJ, Watkins RD, Schenck JF, Jolesz FA. MR temperature mapping of focused ultrasound surgery. Magn Reson Med
1994;31(6):628-636.
[14] Foottit C, Cron GO, Hogan MJ, Nguyen TB, Cameron I. Determination of the venous output function from MR signal phase: feasibility for quantitative DCE-MRI in human brain. Magn Reson Med 2010;63(3):772-781.
[15] Bourgeat P, Fripp J, Stanwell P, Ramadan S, Ourselin S. MR image
segmentation of the knee bone using phase information. Med Image Anal 2007;11(4):325-335.
[ 16] Ghiglia DC, Pritt MD. Two-dimensional Phase Unwrapping: Theory,
Algorithms, and Software. New York: Wiley; 1998.
[17] Itoh K. Analysis of the phase unwrapping problem. Applied Optics 1982;21(14):2470.
[18] Ghiglia D, Mastin GA, Romero LA. Cellular-automata method for phase
unwrapping. J Opt Soc Am A 1987;4(1):267-280.
[19] Chavez S, Xiang QS, An L. Understanding phase maps in MRI: a new cutline phase unwrapping method. IEEE transactions on medical imaging
2002;21(8):966-977.
[20] Hedley M, Rosenfeld D. A new two-dimensional phase unwrapping algorithm for MRI images. Magn Reson Med 1992;24(1):177-181.
[21] Witoszynskyj S, Rauscher A, Reichenbach JR, Barth M. Phase unwrapping of
MR images using PhiUN - A fast and robust region growing algorithm. Med
Image Anal 2009;13:257-268.
[22] Zhou K, Zaitsev M, Bao S. Reliable two-dimensional phase unwrapping
method using region growing and local linear estimation. Magn Reson Med
2009;62(4):1085-1090.
[23] An L, Xiang QS, Chavez S. A fast implementation of the minimum spanning tree method for phase unwrapping. IEEE transactions on medical imaging
2000;19(8):805-808.
[24] Abdul-Rahman HS, Gdeisat MA, Burton DR, Lalor MJ, Lilley F, Moore CJ.
Fast and robust three-dimensional best path phase unwrapping algorithm. Appl Opt 2007;46(26):6623-6635.
[25] Huntley JM. Three-dimensional noise-immune phase unwrapping algorithm.
Appl Opt 2001;40(23):3901-3908.
[26] Salfity MF, Ruiz PD, Huntley JM, Graves MJ, Cusack R, Beauregard DA.
Branch cut surface placement for unwrapping of undersampled three-dimensional phase data: application to magnetic resonance imaging arterial flow mapping. Appl Opt 2006;45(12):2711-2722.
[27] Arevalillo-Herraez M, Burton DR, Lalor MJ. Clustering-based robust three-dimensional phase unwrapping algorithm. Appl Opt 2010;49(10):1780-1788.
[28] Jenkinson M. Fast, automated, N-dimensional phase-unwrapping algorithm.
Magn Reson Med 2003;49(1):193-197.
[29] Liang ZP. A model-based method for phase unwrapping. IEEE transactions on medical imaging 1996;15(6):893-897.
[30] Moon-Ho Song S, Napel S, Pelc NJ, Glover GH. Phase unwrapping of MR phase images using Poisson equation. IEEE Trans Image Process
1995;4(5):667-676.
[31] Ying L, Liang ZP, Munson DC, Jr., Koetter R, Frey BJ. Unwrapping of MR phase images using a Markov random field model. IEEE transactions on medical imaging 2006;25(1):128-136.
[32] Langley J, Zhao Q. A model-based 3D phase unwrapping algorithm using
Gegenbauer polynomials. Physics in medicine and biology 2009;54(17):5237-
5252.
[33] Rajan J, Poot D, Juntu J, Sijbers J. Noise measurement from magnitude MRI using local estimates of variance and skewness. Physics in medicine and biology 2010;55(16):N441-449.
[34] Pandian DS, Ciulla C, Haacke EM, Jiang J, Ayaz M. Complex threshold method for identifying pixels that contain predominantly noise in magnetic resonance images. J Magn Reson Imaging 2008;28(3):727-735.
[35] Otsu N. A threshold selection method from Gray-level histograms. IEEE Trans Systems, Man, and Cybernetics 1979;SMC-9(1):62-66.
[36] Smith SM. Fast robust automated brain extraction. Human brain mapping 2002;17(3):143-155.
[00110] Documents cited in this specification are herein incorporated by reference in their entireties.

Claims

What is claimed is:
1. A method for unwrapping a phase image, comprising:
unwrapping the phase image along X and Y directions;
identifying a horizontal processing strip in the unwrapped X image and a vertical processing strip in the unwrapped Y image based on defined criterion;
cross-referring the horizontal processing strip with the unwrapped Y image and the vertical processing strip with the unwrapped X image;
dividing each column in the cross-referred unwrapped Y image into a plurality of line-segments and dividing each row in the cross-referred unwrapped X image into a plurality of line-segments; and
cross-referring the column line-segments with the cross-referred unwrapped X image and cross-referring the row line-segments with the cross-referred unwrapped Y image, thereby to yield unwrapped images X.cr.Y and Y.cr.X.
2. The method of claim 1 wherein unwrapping the phase image along the X and Y directions is performed using a one-dimensional (1D) Itoh algorithm.
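By way of illustration only (not part of the claims), the one-dimensional Itoh algorithm recited in claim 2 wraps each first difference of the phase signal into the principal interval and then integrates; the function name and the use of NumPy are assumptions of this sketch:

```python
import numpy as np

def itoh_unwrap_1d(wrapped):
    """1D Itoh unwrapping: wrap each first difference into (-pi, pi],
    then integrate the corrected differences (similar to numpy.unwrap)."""
    wrapped = np.asarray(wrapped, dtype=float)
    d = np.diff(wrapped)
    # map each difference back into the principal interval
    d = (d + np.pi) % (2 * np.pi) - np.pi
    # cumulative sum of corrected differences restores the smooth phase
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d)))
```

For smoothly varying phase, this reproduces the true phase up to the value of the first sample.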
3. The method of claim 1 or 2 wherein identifying the horizontal processing strip comprises:
calculating a quality ratio for each row in the unwrapped X image;
grouping neighbor rows if the quality ratio of each of the rows is above a quality ratio threshold, thereby defining a strip; and
selecting the strip comprising the greatest number of rows as the horizontal processing strip.
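The strip-selection step of claims 3 and 4 amounts to finding the longest run of adjacent rows (or columns) whose quality ratio exceeds the threshold. A minimal sketch, assuming the quality ratios have already been computed (their definition is not given in the claims):

```python
def longest_run_strip(quality, threshold):
    """Return the half-open [start, end) span of the longest group of
    adjacent rows/columns whose quality ratio exceeds `threshold`
    (illustrative sketch of the grouping step in claims 3-4)."""
    best, start = (0, 0), None
    # sentinel value terminates a run that reaches the end of the image
    for i, q in enumerate(list(quality) + [float("-inf")]):
        if q > threshold and start is None:
            start = i                      # a new strip begins
        elif q <= threshold and start is not None:
            if i - start > best[1] - best[0]:
                best = (start, i)          # longest strip so far
            start = None
    return best
```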
4. The method of any one of claims 1 to 3 wherein identifying the vertical processing strip comprises:
calculating a quality ratio for each column in the unwrapped Y image;
grouping neighbor columns if the quality ratio of each of the columns is above a quality ratio threshold, thereby defining a strip; and selecting the strip comprising the greatest number of columns as the vertical processing strip.
5. The method of any one of claims 1 to 4 further comprising:
reconnecting the unwrapped X and Y images.
6. The method of claim 5 wherein reconnecting the unwrapped X image comprises:
dividing a row in the unwrapped X image into n line-segments;
calculating a phase difference between line-segment i and line-segment i+1; comparing the phase difference to a threshold value, and if the phase difference is greater than the threshold value:
shifting line-segment i+1 to line-segment n by a defined value; and repeating the steps of dividing, calculating and comparing until all rows in the unwrapped X image have been reconnected.
7. The method of claim 6 wherein the threshold value is π/2.
8. The method of claim 6 or 7 wherein the defined value is a multiple of 2π.
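The reconnection step of claims 6 to 8 can be sketched as follows for a single row; `boundaries` (the indices splitting the row into line-segments) and the function name are illustrative, since the claims do not specify how the segments are found:

```python
import numpy as np

def reconnect_row(row, boundaries, threshold=np.pi / 2):
    """Shift each following line-segment by a multiple of 2*pi whenever
    the phase jump across a segment boundary exceeds `threshold`
    (sketch of claims 6-8; boundaries are interior indices of the row)."""
    row = np.asarray(row, dtype=float).copy()
    for b in sorted(boundaries):
        jump = row[b] - row[b - 1]        # phase difference across the boundary
        if abs(jump) > threshold:
            # shift line-segment i+1 through n by a multiple of 2*pi
            row[b:] -= 2 * np.pi * round(jump / (2 * np.pi))
    return row
```

Because segments after the boundary are shifted in place, later boundaries are evaluated against the already-corrected values, so the correction cascades along the row.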
9. The method of any one of claims 6 to 8 further comprising:
calculating a mean phase value for each row in the reconnected unwrapped X image thereby creating a dataset of mean phase values;
unwrapping the dataset of mean phase values;
calculating a difference between each mean phase value and a corresponding unwrapped mean phase value; and
shifting the phase value of each row by a value based on the difference.
10. The method of claim 9 wherein the value based on the difference is calculated as a multiple of 2π.
11. The method of claim 10 wherein the multiple of 2π is determined by a round function of a quotient of the difference divided by 2π.
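Claims 9 to 11 can be sketched as follows for the row direction; `numpy.unwrap` stands in for the 1D unwrapping step, and all names are illustrative:

```python
import numpy as np

def align_rows_by_mean(unwrapped_x):
    """Sketch of claims 9-11: take each row's mean phase, unwrap that 1D
    profile, and shift each row by 2*pi times the rounded quotient of the
    difference, as recited in claim 11."""
    img = np.asarray(unwrapped_x, dtype=float)
    row_means = img.mean(axis=1)
    diff = row_means - np.unwrap(row_means)
    # nearest multiple of 2*pi, per claim 11's round-of-quotient rule
    shift = 2 * np.pi * np.round(diff / (2 * np.pi))
    return img - shift[:, None]
```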
12. The method of any one of claims 5 to 11 wherein reconnecting the unwrapped
Y image comprises:
dividing a column in the unwrapped Y image into n line-segments;
calculating a phase difference between line-segment i and line-segment i+1; comparing the phase difference to a threshold value, and if the phase difference is greater than the threshold value:
shifting line-segment i+1 to line-segment n by a defined value; and repeating the steps of dividing, calculating and comparing until all columns in the unwrapped Y image have been reconnected.
13. The method of claim 12 wherein the threshold value is π/2.
14. The method of claim 12 or 13 wherein the defined value is a multiple of 2π.
15. The method of any one of claims 12 to 14 further comprising:
calculating a mean phase value for each column in the reconnected unwrapped
Y image thereby creating a dataset of mean phase values;
unwrapping the dataset of mean phase values;
calculating a difference between each mean phase value and a corresponding unwrapped mean phase value; and
shifting the phase value of each column by a value based on the difference.
16. The method of claim 15 wherein the value based on the difference is calculated as a multiple of 2π.
17. The method of claim 16 wherein the multiple of 2π is determined by a round function of a quotient of the difference divided by 2π.
18. The method of any one of claims 1 to 17 wherein cross-referring the line- segments is repeated a predefined number of times.
19. The method of claim 18 wherein the predefined number is 6.
20. A method for unwrapping a three-dimensional (3D) phase image comprising a plurality of phase images, the method comprising:
unwrapping each phase image according to the method of claim 1; and stacking the unwrapped images X.cr.Y and stacking the unwrapped phase images Y.cr.X, thereby to yield an X.cr.Y stack and a Y.cr.X stack, respectively.
21. The method of claim 20 further comprising:
calculating a mean phase value associated with a particular row and a particular column in each unwrapped phase image, thereby creating a dataset of mean phase values;
unwrapping the dataset of mean phase values;
calculating a difference between each mean phase value and a corresponding unwrapped mean phase value; and
shifting the phase value of each unwrapped phase image by a value based on the difference.
22. The method of claim 21 wherein prior to the calculating, each unwrapped mean phase value is shifted to ensure a central complex phase image is within a particular range.
23. The method of claim 21 or 22 wherein the particular row is input by a user.
24. The method of any one of claims 21 to 23 wherein the particular column is input by a user.
25. The method of claim 21 or 22 wherein the particular row is automatically selected.
26. The method of claim 21, 22 or 25 wherein the particular column is automatically selected.
27. The method of any one of claims 20 to 26 further comprising reformatting the X.cr.Y stack along the X-Z plane, thereby creating a X-Z stack of unwrapped phase images.
28. The method of claim 27 further comprising reformatting the Y.cr.X stack along the Y-Z plane, thereby creating a Y-Z stack of unwrapped phase images.
29. The method of claim 28 further comprising unwrapping each phase image in the 3D phase image along a Z direction, and stacking each of the Z-unwrapped phase images, thereby creating a Z-stack.
30. The method of claim 29 further comprising reformatting the Z-stack along the X-Z plane, thereby creating a Z-X-Z stack of unwrapped phase images.
31. The method of claim 30 further comprising reformatting the Z-stack along the Y-Z plane, thereby creating a Z-Y-Z stack of unwrapped phase images.
32. The method of claim 31 further comprising calculating an unwrap quality ratio for each unwrapped phase image, and if the unwrap quality ratio is less than an unwrap quality ratio threshold, cross-referring each phase image in the X-Z stack with a corresponding phase image in the Z-X-Z stack and cross-referring each phase image in the Y-Z stack with a corresponding phase image in the Z-Y-Z stack.
33. The method of claim 32 wherein cross-referring each phase image in the X-Z stack with a respective phase image in the Z-X-Z stack comprises:
identifying a processing strip in a phase image in the X-Z stack;
identifying a processing strip in a corresponding phase image in the Z-X-Z stack; and cross-referring the processing strip in the phase image in the X-Z stack with the corresponding image in the Z-X-Z stack, and cross-referring the processing strip in the corresponding phase image in the Z-X-Z stack with the image in the X-Z stack.
34. The method of claim 33 further comprising cross-referring line-segments of each phase image in cross-referred image stack X-Z with a corresponding phase image in cross-referred image stack Z-X-Z, thereby creating image stack Z-X-Y.
35. The method of claim 34 wherein cross-referring each phase image in the Y-Z stack with a respective phase image in the Z-Y-Z stack comprises:
identifying a processing strip in a phase image in the Y-Z stack;
identifying a processing strip in a corresponding phase image in the Z-Y-Z stack; and
cross-referring the processing strip in the phase image in the Y-Z stack with the corresponding image in the Z-Y-Z stack, and cross-referring the processing strip in the corresponding phase image in the Z-Y-Z stack with the image in the Y-Z stack.
36. The method of claim 35 further comprising cross-referring line-segments of each phase image in cross-referred image stack Y-Z with a corresponding phase image in cross-referred image stack Z-Y-Z, thereby creating image stack Z-Y-X.
37. The method of claim 36 further comprising calculating an unwrap quality ratio for each phase image in stacks Z-X-Y and Z-Y-X, and if the unwrap quality ratio is less than the unwrap quality ratio threshold for at least one phase image, cross-referring line-segments of each phase image in the Z-X-Y image stack with a corresponding phase image in the Z-Y-Z image stack.
38. A non-transitory, computer-readable medium embodying a computer program having computer program code for execution by a computer to unwrap a phase image, the program code comprising:
program code for unwrapping the phase image along X and Y directions; program code for identifying a horizontal processing strip in the unwrapped X image and a vertical processing strip in the unwrapped Y image based on defined criterion;
program code for cross-referring the horizontal processing strip with the unwrapped Y image and the vertical processing strip with the unwrapped X image; program code for dividing each column in the cross-referred unwrapped Y image into a plurality of line-segments and for dividing each row in the cross-referred unwrapped X image into a plurality of line-segments; and
program code for cross-referring the column line-segments with the cross-referred unwrapped X image and cross-referring the row line-segments with the cross-referred unwrapped Y image, thereby to yield unwrapped images X.cr.Y and Y.cr.X.
39. A non-transitory, computer-readable medium embodying a computer program having computer program code for execution by a computer to unwrap a three- dimensional phase image comprising a plurality of phase images, the program code comprising:
program code for unwrapping each phase image by:
unwrapping the phase image along X and Y directions; identifying a horizontal processing strip in the unwrapped X image and a vertical processing strip in the unwrapped Y image based on defined criterion;
cross-referring the horizontal processing strip with the unwrapped Y image and the vertical processing strip with the unwrapped X image;
dividing each column in the cross-referred unwrapped Y image into a plurality of line-segments and dividing each row in the cross-referred unwrapped X image into a plurality of line-segments; and
cross-referring the column line-segments with the cross-referred unwrapped X image and cross-referring the row line-segments with the cross-referred unwrapped Y image, thereby to yield unwrapped images X.cr.Y and Y.cr.X; and
program code for stacking the unwrapped images X.cr.Y and for stacking the unwrapped images Y.cr.X, thereby to yield an X.cr.Y stack and a Y.cr.X stack, respectively.
40. An apparatus for unwrapping a phase image comprising:
memory for storing the phase image; and
processing structure communicating with the memory and being configured to:
unwrap the phase image along X and Y directions;
identify a horizontal processing strip in the unwrapped X image and a vertical processing strip in the unwrapped Y image based on defined criterion;
cross-refer the horizontal processing strip with the unwrapped Y image and the vertical processing strip with the unwrapped X image;
divide each column in the cross-referred unwrapped Y image into a plurality of line-segments and divide each row in the cross-referred unwrapped X image into a plurality of line-segments; and
cross-refer the column line-segments with the cross-referred unwrapped X image and cross-refer the row line-segments with the cross-referred unwrapped Y image, thereby to yield unwrapped images X.cr.Y and Y.cr.X.
41. An apparatus for unwrapping a three-dimensional phase image comprising a plurality of phase images, the apparatus comprising:
memory for storing the three-dimensional phase image; and
processing structure communicating with the memory and being configured to:
for each phase image:
unwrap the phase image along X and Y directions; identify a horizontal processing strip in the unwrapped X image and a vertical processing strip in the unwrapped Y image based on defined criterion;
cross-refer the horizontal processing strip with the unwrapped Y image and the vertical processing strip with the unwrapped X image;
divide each column in the cross-referred unwrapped Y image into a plurality of line-segments and divide each row in the cross-referred unwrapped X image into a plurality of line-segments; and cross-refer the column line-segments with the cross-referred unwrapped X image and cross-refer the row line-segments with the cross-referred unwrapped Y image, thereby creating unwrapped images X.cr.Y and Y.cr.X; and stack the unwrapped images X.cr.Y and stack the unwrapped images Y.cr.X, thereby to yield an X.cr.Y stack and a Y.cr.X stack, respectively.
PCT/CA2012/001181 2011-12-23 2012-12-21 Method for phase unwrapping WO2013091078A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161580008P 2011-12-23 2011-12-23
US61/580,008 2011-12-23

Publications (1)

Publication Number Publication Date
WO2013091078A1 true WO2013091078A1 (en) 2013-06-27

Family

ID=48667564

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2012/001181 WO2013091078A1 (en) 2011-12-23 2012-12-21 Method for phase unwrapping

Country Status (1)

Country Link
WO (1) WO2013091078A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106462984A (en) * 2014-06-02 2017-02-22 皇家飞利浦有限公司 Biais-free regularization for spectral phase-unwrapping in differential phase contrast imaging.
CN109190690A (en) * 2018-08-17 2019-01-11 东北大学 The Cerebral microbleeds point detection recognition method of SWI image based on machine learning
CN114255160A (en) * 2022-02-28 2022-03-29 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007061632A2 (en) * 2005-11-09 2007-05-31 Geometric Informatics, Inc. Method and apparatus for absolute-coordinate three-dimensional surface imaging


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHAVEZ ET AL.: "Understanding Phase Maps in MRI: A New Cutline Phase Unwrapping Method", IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 21, no. 8, August 2002 (2002-08-01), pages 966 - 977, XP001133448 *
JARLE STRAND ET AL.: "Two Dimensional Phase Unwrapping Using a Block Least-Squares Method", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 8, no. 3, March 1999 (1999-03-01), pages 375 - 386, XP011026292 *
LIU ET AL.: "Intervention-based Multidimensional Phase Unwrapping using Recursive Orthogonal Referring", MAGNETIC RESONANCE IN MEDICINE, vol. 68, no. IS.4, 9 January 2012 (2012-01-09), pages 1303 - 1316, XP055073120 *
ZHONG ET AL.: "A Fast Phase Unwrapping Algorithm Based on Minimum Discontinuity by Blocking", 2ND INTERNATIONAL CONFERENCE ON FUTURE COMPUTER AND COMMUNICATION (ICFCC), vol. 1, May 2010 (2010-05-01), pages VI-717 - VI-721, XP031698732 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106462984A (en) * 2014-06-02 2017-02-22 皇家飞利浦有限公司 Biais-free regularization for spectral phase-unwrapping in differential phase contrast imaging.
US10037600B2 (en) 2014-06-02 2018-07-31 Koninklijke Philips N.V. Bias-free regularization for spectral phase-unwrapping in differential phase contrast imaging
CN109190690A (en) * 2018-08-17 2019-01-11 东北大学 The Cerebral microbleeds point detection recognition method of SWI image based on machine learning
CN109190690B (en) * 2018-08-17 2021-10-19 东北大学 Method for detecting and identifying cerebral microhemorrhage points based on SWI image of machine learning
CN114255160A (en) * 2022-02-28 2022-03-29 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12859950

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12859950

Country of ref document: EP

Kind code of ref document: A1