WO2023099569A1 - Analysis of apparent diffusion coefficient maps acquired using mri - Google Patents


Info

Publication number
WO2023099569A1
Authority
WO
WIPO (PCT)
Prior art keywords
adc
map
values
uncertainty
value
Prior art date
Application number
PCT/EP2022/083857
Other languages
French (fr)
Inventor
Konstantinos ZORMPAS-PETRIDIS
Matthew Blackledge
Original Assignee
Institute Of Cancer Research: Royal Cancer Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute Of Cancer Research: Royal Cancer Hospital filed Critical Institute Of Cancer Research: Royal Cancer Hospital
Publication of WO2023099569A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01R MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R33/00 Arrangements or instruments for measuring magnetic variables
    • G01R33/20 Arrangements or instruments for measuring magnetic variables involving magnetic resonance
    • G01R33/44 Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
    • G01R33/48 NMR imaging systems
    • G01R33/54 Signal processing systems, e.g. using pulse sequences; Generation or control of pulse sequences; Operator console
    • G01R33/56 Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
    • G01R33/563 Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution of moving material, e.g. flow contrast angiography
    • G01R33/56341 Diffusion imaging

Definitions

  • WBDWI Whole-body diffusion-weighted MRI
  • ADC Apparent Diffusion Coefficient
  • a b-value regression network was developed to estimate c0 from ground-truth σADC maps (illustrated in Figure 1B).
  • the network consisted of four VGG-like blocks with 32, 64, 128, and 256 filters respectively (3x3 filter size in each case), with each block followed by a max-pooling operation and a 20% dropout layer to reduce overfitting.
  • the final layer consists of a fully-connected layer with 256 neurons (dropout 20%) followed by another dense layer consisting of a single neuron to generate the estimate of c0.
  • a linear activation was used for this last layer whilst a ReLU activation function was used in all preceding layers.
  • a neural network may have many trainable parameters (>30 million) that are able to learn whether pixel differences are due to genuine noise or due to an object edge.
  • embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof.
  • the program code or code segments to perform the necessary tasks may be stored in a computer readable medium.
  • One or more processors may perform the necessary tasks.
  • a code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements.
  • a code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents.
  • Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.


Abstract

A method of analysing magnetic resonance images of an object is provided. The method includes steps of: receiving an ADC (apparent diffusion coefficient) map of an object acquired by the performance of MRI (magnetic resonance imaging) of the object at respective and different b-values or receiving data from which the ADC map is derivable, the ADC map mapping values of ADC at respective positions across the object, the ADC value at a respective position being the negative gradient from a fitting to data points on a graph of the log of the intensities of the MRI signals acquired at that position against the b-values used to obtain the MRI signals; and using a neural network to calculate a predicted uncertainty map of the ADC values of the ADC map, the predicted uncertainty map mapping values of predicted uncertainty at the respective positions across the object, each predicted uncertainty value being a predicted measure, at a respective position, of the standard deviation in the corresponding ADC value at that position. The input to the neural network includes the ADC map or the data from which the ADC map is derivable, and the output is the predicted uncertainty map.

Description

ANALYSIS OF APPARENT DIFFUSION COEFFICIENT MAPS ACQUIRED USING MRI
This application claims priority from GR 20210100832 filed 30 November 2021, the contents and elements of which are herein incorporated by reference for all purposes.
Field of the Invention
The present invention relates to analysing magnetic resonance (MR) images of an object.
Background
Whole-body diffusion-weighted MRI (WBDWI) [1] provides unprecedented visualisation for diagnosis and response assessment of metastatic bony disease from advanced prostate [9, 5, 14, 13] and breast [12] cancers (APC and ABC respectively), and has recently been incorporated into UK guidelines for the evaluation of myeloma-related disease [6, 10]. In part, its success lies in its ability to provide a surrogate measurement of tumour cellularity in the form of the Apparent Diffusion Coefficient (ADC), which may be calculated at every region throughout the body by sensitising images to water diffusion at two or more ‘b-values’ [9] (units s/mm2), which encapsulate the timing and strengths of the diffusion-weighting gradients into a single variable (typically in the range 0-5000 s/mm2 on clinical systems). As this methodology is non-invasive and non-ionising, increasing clinical evidence suggests that monitoring changes in ADC may provide the first surrogate marker of tumour response to novel anti-cancer therapies [2]. An important criterion of any successful response imaging biomarker in oncology is that derived measurements be repeatable (i.e. precise) between successive imaging studies [11]. A recently developed methodology has demonstrated that it is possible to statistically model ADC measurement uncertainty, σADC, at each voxel location using a minor alteration to the currently employed clinical WBDWI protocols [4]. As WBDWI is a technique with inherently poor signal-to-noise ratio (SNR), images are typically acquired using 3 or more diffusion-sensitising gradient directions at each b-value, and this process is subsequently repeated 3 or more times (total number of excitations, NEX ≥ 9), the average of these repeated acquisitions providing the final high-SNR diffusion-weighted image [9]. By retaining the individual excitations, it is possible to estimate σADC at each voxel location.
However, conventionally, to reduce data storage costs, the individual excitations are discarded and only the average image is retained at each b-value. This convention therefore hinders the subsequent estimation of σADC. Indeed, for imaging centres that only acquire average images at two independent b-values, calculation of σADC using classical techniques (e.g. through maximum-likelihood fitting approaches) is impossible.
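To make the fitting procedure concrete, the following NumPy sketch computes the ADC map and the ln(S0) intercept by linear least squares on log-transformed averaged signals. Array shapes, function names, and the simulated values are illustrative assumptions, not data or code from the study:

```python
import numpy as np

def fit_adc(signals, b_values):
    """Per-voxel linear least-squares fit of log-signal against b-value.

    signals  : array (n_b, H, W) of averaged DWI magnitude images,
               one image per b-value (hypothetical example data).
    b_values : array (n_b,) of b-values in s/mm^2.

    Returns (adc, ln_s0): the ADC map is the negative gradient of the
    fit, and ln_s0 is the intercept at b = 0.
    """
    b = np.asarray(b_values, dtype=float)
    logs = np.log(np.clip(signals, 1e-6, None)).reshape(len(b), -1)
    # Model: ln S(b) = ln S0 - b * ADC  ->  design matrix columns [1, -b]
    A = np.stack([np.ones_like(b), -b], axis=1)
    coef, *_ = np.linalg.lstsq(A, logs, rcond=None)
    ln_s0, adc = coef
    shape = signals.shape[1:]
    return adc.reshape(shape), ln_s0.reshape(shape)

# With exactly two b-values the fit reduces to the 2-point formula
# ADC = (ln S(b1) - ln S(b2)) / (b2 - b1).
```

With only two averaged images per voxel the fit is exact and carries no residual, which is why classical per-voxel uncertainty estimation fails in that setting.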
The present invention has been devised in light of the above considerations.
Summary of the Invention
It would be advantageous therefore to provide an approach which improves the calculation of σADC from averaged images and/or enables the calculation of σADC from only two independent b-values. Accordingly, in a first aspect, the present disclosure provides a method of analysing magnetic resonance images of an object, the method including steps of: receiving an ADC (apparent diffusion coefficient) map of an object acquired by the performance of MRI (magnetic resonance imaging) of the object at respective and different b-values or receiving data from which the ADC map is derivable, the ADC map mapping values of ADC at respective positions across the object, the ADC value at a respective position being the negative gradient from a fitting to data points on a graph of the log of the intensities of the MRI signals acquired at that position against the b-values used to obtain the MRI signals; and using a neural network to calculate a predicted uncertainty map of the ADC values of the ADC map, the predicted uncertainty map mapping values of predicted uncertainty at the respective positions across the object, each predicted uncertainty value being a predicted measure, at a respective position, of the standard deviation in the corresponding ADC value at that position; wherein the input to the neural network includes the ADC map or the data from which the ADC map is derivable, and the output is the predicted uncertainty map.
Advantageously, the predicted uncertainty map outputted by the neural network can provide values of σADC from averaged images. Accordingly, each data point on the graph of the log of the MRI signals obtained at a respective position against the b-values used to obtain the MRI signals may be derived from the average of plural MRI signal acquisitions at the respective b-value. Advantageously, the predicted uncertainty map can also provide values of σADC from only two or three independent b-values (although a greater number of independent b-values can be used if wanted). Accordingly, the ADC map of the object may be acquired by performing MRI of the object at just two or just three different b-values. The method is able to achieve these advantages because the neural network can apparently learn complex relationships between a given pixel in an image and its neighbouring regions, in order to arrive at robust estimation of the local noise field. The predicted values of σADC from the neural network can provide clinicians with increased confidence when using WBDWI to monitor response of cancer to treatment.
The predicted uncertainty values of the predicted uncertainty map may, conveniently, be standard deviation values. However, this does not exclude the use of other values related to the standard deviation. For example, the predicted uncertainty values of the predicted uncertainty map may be variance (standard deviation squared) values or precision (reciprocal of variance) values, such values being effectively measures of standard deviation.
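As a small illustration of the equivalence of these parameterisations (the numerical values below are arbitrary, not from the study):

```python
import numpy as np

# Hypothetical predicted uncertainty map, expressed as standard deviations.
sigma_adc = np.array([[2.0e-4, 3.0e-4],
                      [1.5e-4, 2.5e-4]])

variance = sigma_adc ** 2      # standard deviation squared
precision = 1.0 / variance     # reciprocal of variance

# Each form carries the same information: converting back recovers sigma.
recovered = np.sqrt(1.0 / precision)
```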
The neural network may be a convolutional neural network.
Conveniently, the fitting to data points on a graph of the log of the MRI signal intensities acquired at that position against the b-values used to obtain the MRI signals may be a linear least squares fitting. The neural network may be trained on a set of training samples in which each training sample inputted into the neural network includes an ADC map or data from which the ADC map is derivable, and the output of the neural network from each inputted sample is compared with a corresponding actual uncertainty map, the actual uncertainty map mapping values of actual uncertainty at the respective positions across the object, each actual uncertainty value being an actual measure, at a respective position, of the standard deviation in the corresponding ADC value at that position.
The method may further include a step of: receiving a signal intensity image of the object acquired by the performance of MRI, the signal intensity image mapping values of MRI signal intensity, S, at the respective positions across the object at a predetermined b-value. In this case, the input to the neural network further includes the signal intensity image. Moreover, each of the above-mentioned training samples inputted into the neural network can then further include a signal intensity image corresponding to the ADC map. In general, inclusion of the signal intensity image helps to improve the accuracy of the predicted uncertainty map.
Conveniently, the signal intensity image may map values of log MRI signal intensity, S, at the respective positions across the object at a predetermined b-value.
Conveniently, the values of log MRI signal intensity mapped by the signal intensity image may be intercept values at the respective positions across the object, the intercept value at a respective position being the intercept value at a predetermined b-value of the fitting to the data points on the graph for that position. For example, the signal intensity image may be an S0 image in which the intercept value at a given position is the intercept value at the predetermined b-value b = 0 of the fitting to the data points on the graph for that position.
The method may include a preliminary step of: performing MRI of the object at the respective and different b-values to acquire the ADC map or the data from which the ADC map is derivable, and optionally the signal intensity image.
The object may be a human or animal subject. The method may then include a further step of: analysing the predicted uncertainty map, or an image derived therefrom, for assessment of disease extent in the human or animal subject.
The method is typically computer-implemented. Accordingly, further aspects of the present disclosure provide: a computer program comprising code which, when the code is executed on a computer, causes the computer to perform the method of the first aspect; a computer readable medium storing a computer program comprising code which, when the code is executed on a computer, causes the computer to perform the method of the first aspect; and a computer system programmed to perform the method of the first aspect. A further aspect of the present disclosure provides an imaging system including: a scanner for obtaining magnetic resonance images of an object at respective and different b-values; and the computer system of the previous aspect operatively connected to the scanner to acquire from the obtained images the ADC map or the data from which the ADC map is derivable, and optionally the signal intensity image.
In another aspect, the present disclosure provides a method of training a neural network programmed to output a predicted uncertainty map from an input including an ADC map or data from which the ADC map is derivable, wherein: the ADC map maps values of ADC at respective positions across an object, the ADC value at a respective position being the negative gradient from a fitting to data points on a graph of the log of the intensities, S, of MRI signals acquired at that position against b-values used to obtain the MRI signals; and the predicted uncertainty map maps values of predicted uncertainty at the respective positions across an object, each predicted uncertainty value being a predicted measure, at a respective position, of the standard deviation in a corresponding ADC value at that position; the method including the steps of: providing a set of training samples, each sample including an ADC map or data from which the ADC map is derivable, and a corresponding actual uncertainty map which maps values of actual uncertainty at the respective positions across the object, each actual uncertainty value being an actual measure, at a respective position, of the standard deviation in the corresponding ADC value at that position; and training the neural network to minimise a cost function that measures similarity between (i) the predicted uncertainty map outputted by the neural network when the ADC map of a given sample or the data from which the ADC map is derivable are inputted into the neural network, and (ii) the actual uncertainty map of that sample.
Thus the method of this other aspect may be used to train the neural network of the first aspect.
The cost function may include a parameter that measures a pixel-wise difference between the actual uncertainty map and the predicted uncertainty map. The difference may be an absolute difference.
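A minimal sketch of such a pixel-wise absolute-difference term (function name and inputs are assumptions for illustration):

```python
import numpy as np

def l1_term(sigma_actual, sigma_predicted):
    """Mean absolute pixel-wise difference between the actual
    and predicted uncertainty maps."""
    return float(np.mean(np.abs(sigma_actual - sigma_predicted)))
```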
The cost function may include a parameter that measures visual similarity between the actual uncertainty map and the predicted uncertainty map. For example, the visual similarity may be a perceptual similarity measured by calculating an error (e.g. in the form of a mean-squared-error cost function) between feature vectors extracted from layers of a pre-trained network, for which back-propagation has been used to train the layers to classify objects in a different dataset (e.g. to classify objects from natural images, such as images available from the ImageNet image database at https://www.image-net.org/).
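The perceptual term can be sketched framework-agnostically: each "layer" below stands in for a frozen feature extractor from a pre-trained classification network (e.g. one trained on ImageNet). The callables are hypothetical placeholders, not a specific library API:

```python
import numpy as np

def perceptual_term(map_a, map_b, feature_layers):
    """Mean-squared error between feature vectors extracted from the two
    maps by each (frozen, pre-trained) feature layer."""
    errors = [np.mean((layer(map_a) - layer(map_b)) ** 2)
              for layer in feature_layers]
    return float(np.mean(errors))
```

In practice the feature layers would come from a network such as a VGG classifier, with its weights held fixed during training of the uncertainty network.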
The cost function may include a parameter that measures the difference between a factor encoding the b-values used to acquire the actual uncertainty map and a corresponding factor derived from the predicted uncertainty map. The input to the neural network may further include a signal intensity image mapping values of MRI signal intensity, S, at the respective positions across the object at a predetermined b-value. In this case, each training sample further includes a signal intensity image corresponding to the ADC map; and the neural network is trained to minimise a cost function that measures similarity between (i) the predicted uncertainty map outputted by the neural network when the ADC map of a given sample or the data from which the ADC map is derivable, and the signal intensity image are inputted into the neural network, and (ii) a ground truth uncertainty map of that sample (e.g. derived using conventional statistical methods).
The ADC maps, data from which the ADC map is derivable, optional signal intensity images, and actual uncertainty maps of the set of training samples may be from MRI of human or animal subjects.
The invention includes the combination of the aspects and preferred features described except where such a combination is clearly impermissible or expressly avoided.
Summary of the Figures
Embodiments and experiments illustrating the principles of the invention will now be discussed with reference to the accompanying figures in which:
Figure 1A illustrates the data processing pipeline. From nine single-acquisition data (NEX=1, black scatter points) the clinical standard was derived as the average at each b-value (NEX=9, open-circle scatter points). From the log-transformed averaged data, estimates were obtained of (i) the ADC from the negative gradient of the linear fit, and (ii) the log-transformed S0 image from the y-axis intercept (triangular scatter point). σADC was calculated using the NEX=1 data. It is clear that 2-point estimation of ADC using b = 600 and 900 s/mm2 results in noisy ADC maps, a fact that is consistent with the general increase in σADC for these b-values. Figure 1B illustrates the deep-learning U-Net reconstruction of σADC measurements from input ADC maps and ln(S0) images, enabling 2-point estimation of σADC.
Figure 2A shows training (solid) and validation (dashed) curves depicting the change in mean absolute error over the training epochs that were investigated for the dual-channel input data. Figure 2B shows corresponding training (solid) and validation (dashed) curves depicting the change in mean absolute error over the training epochs that were investigated for the single-channel input data. In both cases, a plateau in the validation curve is observed after epoch 50. However, dual-channel input results in markedly lower mean absolute error compared to the single-channel input for both training and test.
Figure 3 shows example axial slices from each of the test patient datasets collected from the prostate cancer cohort. There is good agreement between the gold-standard σADC and the corresponding deep-learning estimated maps, using dual-channel and single-channel inputs, across all cases and b-value combinations. Surprisingly, the deep-learning approach was also able to reconstruct a characteristic ‘ringing’ artefact observed in the background of the σADC maps. Little difference is observed between the single-channel and dual-channel results, suggesting that the algorithm is able to learn how to deliver maps of ADC uncertainty from the ADC maps alone. In all images, windowing levels were kept identical.
Figure 4 shows kernel density distributions of ground-truth measurements of σADC and deep-learned values using dual-channel and single-channel inputs within regions of metastatic prostate disease. Distributions were observed to display similar characteristics, indicating that the deep-learned estimation performed well in all cases.
Figure 5 shows example axial slices from each of the test patient datasets collected from the mesothelioma cancer cohort. There is good agreement between the gold-standard σADC and the corresponding deep-learning estimated map using dual-channel input, in all cases and for all b-value combinations. Again, the deep-learning approach was able to reconstruct the characteristic ‘ringing’ artefact observed in the background of the σADC maps. In all images, windowing levels were kept identical.
Figure 6 shows kernel density distributions of ground-truth measurements of σADC and deep-learned values using dual-channel input within regions of mesothelioma cancer disease. Distributions were observed to display similar characteristics, indicating that the deep-learned estimation performed well.
Figure 7 shows representative examples of whole-body maps of ADC uncertainty generated from poor-quality (NEX=1) data by the trained deep-learning neural network (left) and from clinical-quality data (right).
Figure 8 shows two representative slices of maps of ADC uncertainty generated from poor-quality (NEX=1) data by the trained deep-learning neural network (left) and from clinical-quality data (right).
Detailed Description of the Invention
Aspects and embodiments of the present invention will now be discussed with reference to the accompanying figures. Further aspects and embodiments will be apparent to those skilled in the art. All documents mentioned in this text are incorporated herein by reference.
The use of deep-learning convolutional neural networks, such as U-Net, enables voxel-wise estimates of σADC. In particular, uncertainty estimation is made possible through complex relationships derived from a given voxel and its neighbours. In the following we describe a study investigating the use of U-Net based models for acquiring joint estimates of ADC and σADC from clinical WBDWI exams obtained using just two b-values (although we demonstrate that the same network may be used for more than two b-values). Furthermore, we compare the results from our network when the input is (i) a combination of the calculated ADC map and the logarithm of the estimated S0 (i.e. b = 0) image with (ii) the ADC map alone.
To train the algorithm, a WBDWI acquisition protocol was used that provided gold-standard measurements of σADC. Results from 16 patients with APC were analysed (split into 10/3/3 for train/validation/test data respectively), and the accuracy of the deep-learning based approach for estimating σADC was compared with the gold-standard measurements in the test patient cohort.
Material and Methods
Patient Population and Imaging Protocol
This retrospective study comprised two patient populations: (i) a prostate cancer cohort of 16 patients with metastatic prostate cancer and suspected disease in the skeleton, and (ii) a mesothelioma cohort of 28 patients with mesothelioma. Both imaging studies were approved by an institutional review board. In both cases, axial diffusion-weighted imaging (DWI) was performed across each cohort over a number of sequential imaging stations on a 1.5T scanner. Within each cohort, the same protocol was used to ensure consistency of results, as shown in Table 1.
Table 1. Diffusion-weighted imaging protocol parameters for both study cohorts
[Table 1 is reproduced as an image in the original document.]
§ Values in square brackets represent the image dimensions following interpolation by the scanner. † The resolution is presented following image interpolation by the scanner.
In both cohorts, for each b-value a 3-directional orthogonal diffusion-encoding scheme was applied using bipolar gradients to mitigate the effects of eddy-current-induced distortions. Only a single average was acquired per direction, and this was repeated 3 and 4 times for the prostate cancer and mesothelioma cohorts respectively, leading to a total of 9 and 12 acquisitions per b-value and per slice. For the prostate cancer cohort, patients were randomly split into training/validation/test groups according to 10/3/3; for the mesothelioma cohort, this random split was 20/4/4.
Data Analysis
An illustration of the data-fitting approach is presented in Figure 1A: ADC maps were calculated as the negative gradient from a linear-least-squares (LLS) approximation of the averaged image signals (NEX=9 for the prostate cancer cohort and NEX=12 for the mesothelioma cohort) following a logarithmic transform. The y-axis intercept, ln(S0), was also estimated, which represents the log-signal expected at b = 0 s/mm2 (no diffusion weighting). From the individually retained acquisitions at each b-value (NEX=1), maps of σADC were calculated using an iterative weighted linear least-squares (IWLS) approach; these σADC maps acted as ground truth, σ*ADC. This analysis was repeated for each possible combination of b-values available in this dataset: 50/600/900, 50/900, 50/600, and 600/900 s/mm2. It is clear from Figure 1A that σADC is much higher for the latter combination of b-values due to the inherently low image SNR; this is also evident in the apparently noisier estimated ADC map and ln(S0) image. A deep-learning architecture (Figure 1B) was then developed and trained to generate maps of σADC using as input only the ADC and ln(S0) maps estimated using the averaged data at each b-value (NEX=9/12).
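The LLS fit described above can be sketched as follows. This is a minimal NumPy illustration, not the study's implementation; the array shapes and the synthetic signal values are assumptions for demonstration only:

```python
import numpy as np

def fit_adc_lls(signals, bvals):
    """Linear-least-squares fit of log-signal against b-value.

    signals: array of shape (N_b, ...) of (averaged) DWI intensities.
    Returns (adc, ln_s0): ADC is the negative gradient of the fit,
    ln_s0 the y-axis intercept (log-signal expected at b = 0).
    """
    b = np.asarray(bvals, dtype=float)
    y = np.log(signals.reshape(len(b), -1))       # log-transform, flatten voxels
    # Design matrix [1, -b]: the slope coefficient is then ADC directly.
    A = np.stack([np.ones_like(b), -b], axis=1)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # shape (2, n_voxels)
    ln_s0, adc = coef[0], coef[1]
    return adc.reshape(signals.shape[1:]), ln_s0.reshape(signals.shape[1:])

# Synthetic single-voxel check: S = S0 * exp(-b * ADC)
bvals = [50.0, 600.0, 900.0]          # s/mm^2, as used in the study
true_adc, true_s0 = 1.0e-3, 500.0     # illustrative values only
sig = true_s0 * np.exp(-np.array(bvals) * true_adc)
adc, ln_s0 = fit_adc_lls(sig.reshape(3, 1), bvals)
```

Because the noiseless synthetic signal is exactly log-linear, the fit recovers the true ADC and ln(S0) to floating-point precision.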
To compare the accuracy of deep-learning estimates of σADC with the gold standard, a radiologist delineated regions of metastatic bone disease in the prostate cancer cohort using a semi-automated segmentation pipeline [3]. In addition, a physicist with more than 10 years' experience in body DWI semi-automatically delineated regions of disease in the mesothelioma cohort using the tools available within 3D Slicer; these regions were subsequently verified and (where needed) corrected by a radiologist. Resultant regions of interest (ROIs) were transferred onto both the gold-standard σ*ADC maps and onto those generated by the deep-learning algorithm, σ̂ADC and σ̃ADC.
Deep Learning Architecture
The deep-learning network (Figure 1B) is a variation of the conventional U-Net architecture [15] in which the following channel depths were used at each subsequent layer of the network: 16, 32, 64, 128, 256, 128, 64, 32, 16. In addition, residual blocks were implemented within each of the encoder and decoder layers, and a linear activation was used for the final output layer. Rectified linear unit (ReLU) activation functions were used in all preceding layers, and two layers also used 50% dropout during training (Figure 1B) to reduce overfitting (He normal initialisation was used for all weights/biases prior to training). The network was trained using a batch size of 25 DWI slices for 100 epochs, using the Adam optimisation algorithm [8] with a learning rate of 0.001. The networks were trained using a Tesla P100-PCIE-16GB GPU card, and the trained algorithm was applied using a MacBook Pro laptop (2.9GHz Intel Core i7 CPU, 16GB 2133MHz LPDDR3 RAM).
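A compact sketch of such a U-Net variant in TensorFlow/Keras (the framework named later in the text) follows. The exact layer arrangement, residual-block design, dropout placement, and input size of Figure 1B are not reproduced here, so every structural detail beyond the stated channel depths (16-32-64-128-256-128-64-32-16), ReLU/linear activations, 50% dropout, and He normal initialisation is an assumption:

```python
import tensorflow as tf
from tensorflow.keras import layers

def res_block(x, filters):
    """Residual block: two 3x3 convs with a 1x1 projection shortcut (assumed design)."""
    shortcut = layers.Conv2D(filters, 1, padding="same",
                             kernel_initializer="he_normal")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu",
                      kernel_initializer="he_normal")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu",
                      kernel_initializer="he_normal")(x)
    return layers.add([x, shortcut])

def build_uncertainty_unet(input_channels=2, size=128):
    """U-Net with channel depths 16-32-64-128-256-128-64-32-16, linear output."""
    inp = layers.Input((size, size, input_channels))   # e.g. ADC + ln(S0)
    skips, x = [], inp
    for f in (16, 32, 64, 128):                        # encoder
        x = res_block(x, f)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = res_block(x, 256)                              # bottleneck
    x = layers.Dropout(0.5)(x)                         # one of two 50% dropouts
    for f, s in zip((128, 64, 32, 16), reversed(skips)):  # decoder
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.concatenate([x, s])
        x = res_block(x, f)
    x = layers.Dropout(0.5)(x)
    out = layers.Conv2D(1, 1, activation="linear")(x)  # predicted sigma_ADC map
    return tf.keras.Model(inp, out)

model = build_uncertainty_unet()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mae")
```

With a two-channel input the model accepts the (ADC, ln(S0)) pair; setting `input_channels=1` gives the ADC-only variant described below.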
Two such networks were trained for the prostate cancer cohort: (i) using a two-channel input of the calculated ADC map and ln(S0) image, and (ii) using a single-channel input of the ADC map alone. Both networks output estimated ADC uncertainty maps, denoted σ̂ADC and σ̃ADC respectively. For the mesothelioma cohort, the single-channel network was not trained.
Cost Function
A cost function was developed that improved the appearance and quantitative accuracy of the derived σADC maps. Firstly, the mean-absolute-error (MAE) was used as a cost function parameter Lmae, such that the network minimises the pixel-wise absolute difference between the target (gold-standard) image and the network-derived estimate:

L_mae = (1 / (Nc · Nr)) Σ_{i,j} | σ*ADC(i,j) − σ̂ADC(i,j) |

for each pixel location i,j within the maps of size Nc columns and Nr rows.
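Per pixel, this term reduces to a mean of absolute differences; a minimal NumPy equivalent (array values assumed for illustration):

```python
import numpy as np

def l_mae(sigma_true, sigma_pred):
    """Mean absolute pixel-wise error between gold-standard and predicted maps."""
    return np.mean(np.abs(sigma_true - sigma_pred))

a = np.array([[1.0, 2.0], [3.0, 4.0]])   # stand-in for the gold-standard map
b = np.array([[1.5, 2.0], [2.0, 4.0]])   # stand-in for the network estimate
# |1-1.5| + |2-2| + |3-2| + |4-4| = 1.5 over 4 pixels -> 0.375
```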
To improve the visual similarity of generated maps to the ground truth, a perceptual loss cost function parameter Lperc was also incorporated, using features derived from a pre-trained classification network (VGG16) [16] with weights previously optimised using the ImageNet database (TensorFlow 2.0). Features from the first block of the first VGG16 layer were extracted from both the true σ*ADC maps and the corresponding deep-learned estimates σ̂ADC, and the mean-squared-error between both feature vectors was subsequently minimised in back-propagation:

L_perc = (1 / Nf) Σ_{k=1..Nf} ( V(σ*ADC)_k − V(σ̂ADC)_k )²

where V: ℝ^(Nc×Nr) → ℝ^(Nf) represents the operation of extracting Nf features from σADC maps using the VGG16 network. The first VGG16 layer was selected to capture high-order features within σADC maps.
Lastly, the estimated magnitude of ADC uncertainty is mathematically linked [4] to the b-values from which it is derived according to the equation:

σADC = σν √( [ (BᵀWB)⁻¹ BᵀW²B (BᵀWB)⁻¹ ]₂₂ )

with b-value design matrix:

B = ( 1 −b₁ ; 1 −b₂ ; … ; 1 −b_N )

for N b-values, σν representing the standard deviation (noise) of the log-transformed data at each b-value, and W a square diagonal matrix representing the desired weighting for each b-value when performing ADC fitting. For the purposes of this study, equal weighting was assumed for all b-values, such that W_ij = 1 if i = j and 0 otherwise, to subsequently derive:

σADC = ω σν, with ω = √( [ (BᵀB)⁻¹ ]₂₂ ) = Φ(b₁, …, b_N)

where Φ: ℝᴺ → ℝ encodes the combination of known b-values. Estimation of ω from accurate maps of σADC should thus be possible, and this was used to help regularise the full σADC estimation network during training.
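This relationship can be checked numerically. The sketch below computes ω from the design matrix for the 50/600/900 s/mm² scheme and compares ω·σν against the empirical spread of LLS slopes over simulated noisy fits (the noise level and signal values are illustrative assumptions):

```python
import numpy as np

def omega(bvals):
    """omega = sqrt([(B^T B)^-1]_22) for equal-weight LLS ADC fitting."""
    b = np.asarray(bvals, dtype=float)
    B = np.stack([np.ones_like(b), -b], axis=1)
    return np.sqrt(np.linalg.inv(B.T @ B)[1, 1])

rng = np.random.default_rng(0)
bvals = [50.0, 600.0, 900.0]
sigma_nu = 0.05                            # std of log-signal noise (assumed)
true = np.array([np.log(500.0), 1.0e-3])   # [ln(S0), ADC]

B = np.stack([np.ones(3), -np.array(bvals)], axis=1)
y = B @ true + sigma_nu * rng.standard_normal((20000, 3))  # simulated log-signals
slopes = np.linalg.lstsq(B, y.T, rcond=None)[0][1]         # fitted ADC values

analytic = omega(bvals) * sigma_nu   # predicted sigma_ADC
empirical = slopes.std()             # Monte-Carlo sigma_ADC
```

The analytic and Monte-Carlo values agree to within sampling error, confirming that ω depends only on the b-value combination.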
In particular, a b-value regression network was developed to estimate ω from ground-truth σ*ADC maps (illustrated in Figure 1B). The network consisted of four VGG-like blocks with 32, 64, 128, and 256 filters respectively (3x3 filter size in each case), with each block followed by a max-pooling operation and a 20% dropout layer to reduce overfitting. The final layers consist of a fully-connected layer with 256 neurons (dropout 20%) followed by another dense layer consisting of a single neuron to generate the estimate of ω. A linear activation was used for this last layer, whilst ReLU activation functions were used in all preceding layers. The He technique [7] was used to initialise all layers prior to training, and the network was trained using a batch size of 30 σADC maps for 70 epochs, a mean-squared-error loss function for the ω estimates, and the Adam algorithm [8] with a learning rate of 0.001 for optimisation.
This trained network was used to improve the cost function of our full σADC estimation network: during training, the network was used to estimate ω, but this time from the estimated σ̂ADC maps. This cost function parameter may be characterised as:

L_ω = ( ω − R(σ̂ADC) )²

with R representing the operation of the trained b-value regression network, such that R(σ̂ADC) = ω̂ is the value estimated from the σ̂ADC map generated by the deep-learning algorithm during training.
The final cost function Ltotal for estimating σADC was therefore:

L_total = λ_mae · L_mae + λ_perc · L_perc + λ_ω · L_ω

where the weighting λ for each individual parameter was determined empirically.
A similar cost function was also used for estimating σ̃ADC.
Results
Training and validation curves for the prostate cancer cohort are presented in Figures 2A and 2B; clear stabilisation of the validation MAE loss was observed after approximately 50 epochs for both network types. Figure 3 demonstrates exemplar slices from each of the three test patient datasets in this cohort. It is clear that good agreement is observed between the gold-standard σ*ADC and the DL-estimated σ̂ADC and σ̃ADC maps. Importantly, this was achieved for all four b-value combinations tested, a result corroborated in a quantitative comparison (Figure 4) where distributions of σ*ADC, σ̂ADC and σ̃ADC values within disease exhibit similar characteristics. Furthermore, it is clear that using ADC maps alone as input is also sufficient for good estimation of σADC, although our preference was for the dual-channel approach, as its images appeared slightly clearer than those from the single-channel input. For the mesothelioma cohort the same level of agreement was observed (exemplar slices from 3/4 test patients are shown in Figure 5, and distributions of σ*ADC and σ̂ADC values within disease are shown in Figure 6 for all four test patients). This demonstrates the ability of the network to produce good estimates of σADC in regions with smaller fields of view.
Discussion
Estimation of ADC measurement reliability is an essential task if WBDWI is to be embraced as a cancer response imaging biomarker in the healthcare community. Whilst it may be possible to acquire specialist datasets that allow calculation of ADC uncertainty, σADC, at every anatomical location, the fact remains that most if not all clinical centres will not be able to acquire such data. This is likely due to (i) an inherent delay by scanner manufacturers in implementing such approaches within the clinical pathway, (ii) the reluctance of centres to implement non-standard-of-care sequences, and (iii) the increased data storage costs required to allow accurate assessment of σADC. Therefore, 2-point and 3-point measurements of σADC are highly sought after, but unfortunately 3-point measurements are statistically imprecise and 2-point measurements are typically thought to be impossible.
However, this study provides evidence that the use of deep learning, for example with a U-Net architecture, can break such classical assumptions and provide robust estimation of σADC for DWI datasets acquired with only 2 or 3 unique b-values. Furthermore, we have demonstrated that good estimation of σADC is possible using either a combination of ADC and ln(S0) maps as input, or ADC maps alone. This is a somewhat surprising result given that previous results demonstrate a direct relationship between σADC and the image signal intensity. The success of our technique may be due to the fact that the U-Net is able to learn complex relationships between a given pixel and its neighbouring regions in order to arrive at a robust estimation of the local noise field. Traditional approaches to this task include spatial filtering and wavelet decomposition, but these techniques tend to perform poorly and can create artifactual edges in the resultant images. In contrast, a neural network may have many trainable parameters (>30 million) that are able to learn whether pixel differences are due to genuine noise or due to an object edge.
An important finding from the study is that the neural network could accurately estimate σADC independently of the b-value combination that produced the estimated ADC map and ln(S0) image used as input.
In conclusion, deep-learned estimation of ADC uncertainties can provide clinicians with increased confidence when using WBDWI to monitor the response of cancer to treatment. Moreover, the technique described above does not require modification of existing clinical protocols and can therefore be applied to existing datasets for retrospective evaluation of WBDWI.
Further Work
Further work confirms that it is possible to generate robust ADC uncertainty maps from heavily subsampled but rapidly acquired diffusion-weighted MRI (DWI) data using the deep-learning network. Single-acquisition/single-direction (NEX=1) DWI data was acquired for b-values 50/600/900 s/mm2 using a whole-body DWI acquisition protocol [17]. Subsequently, the quality of those images was enhanced using a previously published approach (quickDWI) [17], and then the ADC and raw signal images (ln(S0)) were calculated.
These images were fed to the trained ADC uncertainty deep-learning network. The network-generated ADC uncertainty images were observed to be similar to those calculated from the clinical data, as shown in Figures 7 and 8.
No retraining of the deep-learning network took place despite the different imaging protocol used and the poor quality of the original sub-sampled data. However, improvements to the quality of results may be expected by retraining the network with any one, any two, or all three of the following approaches:
1. Use the quickDWI-enhanced images as training data, either training from scratch or using transfer learning to fine-tune the network.
2. Simultaneously train the quickDWI and the ADC uncertainty networks (so the weights of both networks are updated at the same time) using a unified architecture in which the output of the quickDWI network is used as input to the ADC uncertainty network.
3. Freeze the weights of the ADC uncertainty network and incorporate it as an additional part of the loss function of the quickDWI network. During training, the trained ADC uncertainty network is applied on the output of the quickDWI network to compare its similarity to the ground-truth ADC uncertainty and to try to minimise their difference.
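Approach 3 above can be sketched as a composite Keras loss in which the frozen uncertainty network penalises the quickDWI output. All names, the channel packing, and the weighting `lam` are illustrative assumptions, not the authors' implementation:

```python
import tensorflow as tf

def make_quickdwi_loss(uncertainty_net, lam=0.1):
    """Composite loss: image fidelity + frozen-uncertainty-network consistency."""
    uncertainty_net.trainable = False      # freeze the trained sigma_ADC network

    def loss(y_true, y_pred):
        # y_true packs (clean image, gold-standard sigma_ADC) per channel (assumed).
        img_true, sigma_true = y_true[..., :1], y_true[..., 1:]
        fidelity = tf.reduce_mean(tf.abs(img_true - y_pred))
        # Apply the frozen uncertainty network to the denoised output ...
        sigma_pred = uncertainty_net(y_pred, training=False)
        # ... and penalise its disagreement with the ground-truth uncertainty.
        consistency = tf.reduce_mean(tf.abs(sigma_true - sigma_pred))
        return fidelity + lam * consistency

    return loss
```

The quickDWI network would then be compiled with this loss while only its own weights remain trainable.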
***
The features disclosed in the foregoing description, or in the following claims, or in the accompanying drawings, expressed in their specific forms or in terms of a means for performing the disclosed function, or a method or process for obtaining the disclosed results, as appropriate, may, separately, or in any combination of such features, be utilised for realising the invention in diverse forms thereof.
While the invention has been described in conjunction with the exemplary embodiments described above, many equivalent modifications and variations will be apparent to those skilled in the art when given this disclosure. Accordingly, the exemplary embodiments of the invention set forth above are considered to be illustrative and not limiting. Various changes to the described embodiments may be made without departing from the spirit and scope of the invention. For the avoidance of any doubt, any theoretical explanations provided herein are provided for the purposes of improving the understanding of a reader. The inventors do not wish to be bound by any of these theoretical explanations.
Any section headings used herein are for organizational purposes only and are not to be construed as limiting the subject matter described.
Throughout this specification, including the claims which follow, unless the context requires otherwise, the words “comprise” and “include”, and variations such as “comprises”, “comprising”, and “including”, will be understood to imply the inclusion of a stated integer or step or group of integers or steps but not the exclusion of any other integer or step or group of integers or steps.
It must be noted that, as used in the specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Ranges may be expressed herein as from “about” one particular value, and/or to “about” another particular value. When such a range is expressed, another embodiment includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by the use of the antecedent “about,” it will be understood that the particular value forms another embodiment. The term “about” in relation to a numerical value is optional and means for example +/- 10%.
The term "computer readable medium" may represent one or more devices for storing data, including read only memory (ROM), random access memory (RAM), magnetic RAM, core memory, magnetic disk storage mediums, optical storage mediums, flash memory devices and/or other machine readable mediums for storing information. The term "computer-readable medium" includes, but is not limited to, portable or fixed storage devices, optical storage devices, wireless channels and various other mediums capable of storing, containing or carrying instruction(s) and/or data.
Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a computer readable medium. One or more processors may perform the necessary tasks. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
References
A number of publications are cited above in order to more fully describe and disclose the invention and the state of the art to which the invention pertains. Full citations for these references are provided below. The entirety of each of these references is incorporated herein.
1. Barnes, A., Alonzi, R., Blackledge, M., Charles-Edwards, G., Collins, D., Cook, G., Coutts, G., Goh, V., Martin, G., Kelly, C., et al.: UK Quantitative WB-DWI Technical Workgroup: consensus meeting recommendations on optimisation, quality control, processing and analysis of quantitative whole-body diffusion weighted imaging for cancer. The British Journal of Radiology, p. 20170577 (2017)
2. Blackledge, M.D., Collins, D.J., Tunariu, N., Orton, M.R., Padhani, A.R., Leach, M.O., Koh, D.M.: Assessment of treatment response by total tumor volume and global apparent diffusion coefficient using diffusion-weighted MRI in patients with metastatic bone disease: A feasibility study. PLoS ONE 9(4) (2014). https://doi.org/10.1371/journal.pone.0091779
3. Blackledge, M.D., Collins, D.J., Tunariu, N., Orton, M.R., Padhani, A.R., Leach, M.O., Koh, D.M.: Assessment of treatment response by total tumor volume and global apparent diffusion coefficient using diffusion-weighted MRI in patients with metastatic bone disease: A feasibility study. PLoS ONE 9(4) (2014). https://doi.org/10.1371/journal.pone.0091779
4. Blackledge, M.D., Tunariu, N., Zugni, F., Holbrey, R., Orton, M.R., Ribeiro, A., Hughes, J.C., Scurr, E.D., Collins, D.J., Leach, M.O., Koh, D.M.: Noise-Corrected, Exponentially Weighted, Diffusion-Weighted MRI (niceDWI) Improves Image Signal Uniformity in Whole-Body Imaging of Metastatic Prostate Cancer. Frontiers in Oncology (2020). https://doi.org/10.3389/fonc.2020.00704
5. Eiber, M., Holzapfel, K., Ganter, C., Epple, K., Metz, S., Geinitz, H., Kübler, H., Gaa, J., Rummeny, E.J., Beer, A.J.: Whole-body MRI including diffusion-weighted imaging (DWI) for patients with recurring prostate cancer: Technical feasibility and assessment of lesion conspicuity in DWI. Journal of Magnetic Resonance Imaging 33(5), 1160–1170 (2011). https://doi.org/10.1002/jmri.22542
6. Giles, S.L., Messiou, C., Collins, D.J., Morgan, V.A., Simpkin, C.J., West, S., Davies, F.E., Morgan, G.J., DeSouza, N.M.: Whole-Body Diffusion-weighted MR Imaging for Assessment of Treatment Response in Myeloma. Radiology 271(3), 785–794 (2014). https://doi.org/10.1148/radiol.13131529
7. He, K., Zhang, X., Ren, S., Sun, J.: Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
8. Kingma, D.P., Ba, J.: Adam: A Method for Stochastic Optimization. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings (Dec 2014), https://arxiv.org/abs/1412.6980v9
9. Koh, D.M., Blackledge, M., Padhani, A.R., Takahara, T., Kwee, T.C., Leach, M.O., Collins, D.J.: Whole-body diffusion-weighted MRI: Tips, tricks, and pitfalls (2012). https://doi.org/10.2214/AJR.11.7866
10. Messiou, C., Hillengass, J., Delorme, S., Lecouvet, F.E., Moulopoulos, L.A., Collins, D.J., Blackledge, M.D., Abildgaard, N., Østergaard, B., Schlemmer, H.P., Landgren, O., Asmussen, J.T., Kaiser, M.F., Padhani, A.: Guidelines for acquisition, interpretation, and reporting of whole-body MRI in myeloma: Myeloma response assessment and diagnosis system (MY-RADS). Radiology 291(1), 5–13 (2019). https://doi.org/10.1148/radiol.2019181949
11. O'Connor, J.P.B., Aboagye, E.O., Adams, J.E., Aerts, H.J.W.L., Barrington, S.F., Beer, A.J., Boellaard, R., Bohndiek, S.E., Brady, M., Brown, G., Buckley, D.L., Chenevert, T.L., Clarke, L.P., Collette, S., Cook, G.J., DeSouza, N.M., Dickson, J.C., Dive, C., Evelhoch, J.L., Faivre-Finn, C., Gallagher, F.A., Gilbert, F.J., Gillies, R.J., Goh, V., Griffiths, J.R., Groves, A.M., Halligan, S., Harris, A.L., Hawkes, D.J., Hoekstra, O.S., Huang, E.P., Hutton, B.F., Jackson, E.F., Jayson, G.C., Jones, A., Koh, D.M., Lacombe, D., Lambin, P., Lassau, N., Leach, M.O., Lee, T.Y., Leen, E.L., Lewis, J.S., Liu, Y., Lythgoe, M.F., Manoharan, P., Maxwell, R.J., Miles, K.A., Morgan, B., Morris, S., Ng, T., Padhani, A.R., Parker, G.J.M., Partridge, M., Pathak, A.P., Peet, A.C., Punwani, S., Reynolds, A.R., Robinson, S.P., Shankar, L.K., Sharma, R.A., Soloviev, D., Stroobants, S., Sullivan, D.C., Taylor, S.A., Tofts, P.S., Tozer, G.M., van Herk, M., Walker-Samuel, S., Wason, J., Williams, K.J., Workman, P., Yankeelov, T.E., Brindle, K.M., McShane, L.M., Jackson, A., Waterton, J.C.: Imaging biomarker roadmap for cancer studies. Nature Reviews Clinical Oncology 14(3), 169–186 (2017). https://doi.org/10.1038/nrclinonc.2016.162, http://www.ncbi.nlm.nih.gov/pubmed/27725679
12. Padhani, A.R., Koh, D.M., Collins, D.J.: Whole-Body Diffusion-weighted MR Imaging in Cancer: Current Status and Research Directions. Radiology 261(3), 700–718 (2011). https://doi.org/10.1148/radiol.11110474
13. Padhani, A.R., Lecouvet, F.E., Tunariu, N., Koh, D.M., De Keyzer, F., Collins, D.J., Sala, E., Schlemmer, H.P., Petralia, G., Vargas, H.A., Fanti, S., Tombal, H.B., de Bono, J.: METastasis Reporting and Data System for Prostate Cancer: Practical Guidelines for Acquisition, Interpretation, and Reporting of Whole-body Magnetic Resonance Imaging-based Evaluations of Multiorgan Involvement in Advanced Prostate Cancer. European Urology 71(1), 81–92 (2017). https://doi.org/10.1016/j.eururo.2016.05.033
14. Padhani, A.R., Makris, A., Gall, P., Collins, D.J., Tunariu, N., De Bono, J.S.: Therapy monitoring of skeletal metastases with whole-body diffusion MRI. Journal of Magnetic Resonance Imaging 39(5), 1049–1078 (2014). https://doi.org/10.1002/jmri.24548
15. Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional Networks for Biomedical Image Segmentation. MICCAI, pp. 234–241 (2015). https://doi.org/10.1007/978-3-319-24574-4_28
16. Simonyan, K., Zisserman, A.: Very Deep Convolutional Networks for Large-Scale Image Recognition (2015), http://www.robots.ox.ac.uk/
17. Zormpas-Petridis, K., Tunariu, N., Curcean, A., Messiou, C., Curcean, S., Collins, D.J., Hughes, J.C., Jamin, Y., Koh, D.M., Blackledge, M.D.: Accelerating Whole-Body Diffusion-weighted MRI with Deep Learning-based Denoising Image Filters. Radiology: Artificial Intelligence 3(5) (2021). https://doi.org/10.1148/ryai.2021200279

Claims

1. A method of analysing magnetic resonance images of an object, the method including steps of: receiving an ADC (apparent diffusion coefficient) map of an object acquired by the performance of MRI (magnetic resonance imaging) of the object at respective and different b-values, or receiving data from which the ADC map is derivable, the ADC map mapping values of ADC at respective positions across the object, the ADC value at a respective position being the negative gradient from a fitting to data points on a graph of the log of the intensities, S, of the MRI signals acquired at that position against the b-values used to obtain the MRI signals; and using a neural network to calculate a predicted uncertainty map of the ADC values of the ADC map, the predicted uncertainty map mapping values of predicted uncertainty at the respective positions across the object, each predicted uncertainty value being a predicted measure, at a respective position, of the standard deviation in the corresponding ADC value at that position; wherein the input to the neural network includes the ADC map or the data from which the ADC map is derivable, and the output is the predicted uncertainty map.
2. The method of claim 1, wherein the fitting to data points on a graph of the log of the intensities of the MRI signals acquired at that position against the b-values used to obtain the MRI signals is a linear least squares fitting.
3. The method of claim 1 or 2, wherein the ADC map of the object or the data from which the ADC map is derivable is acquired by performing MRI of the object at just two or just three different b-values.
4. The method of any one of the previous claims, wherein each data point on the graph of the log of the MRI signals obtained at a respective position against the b-values used to obtain the MRI signals is an average of plural MRI signal acquisitions at the respective b-value.
5. The method of any one of the previous claims, wherein the neural network is trained on a set of training samples in which each training sample inputted into the neural network includes an ADC map or data from which the ADC map is derivable, and the output of the neural network from each inputted sample is compared with a corresponding actual uncertainty map, the actual uncertainty map mapping values of actual uncertainty at the respective positions across the object, each actual uncertainty value being an actual measure, at a respective position, of the standard deviation in the corresponding ADC value at that position.
6. The method of any one of the previous claims further including a step of: receiving a signal intensity image of the object acquired by the performance of MRI, the signal intensity image mapping values of MRI signal intensity, S, at the respective positions across the object at a predetermined b-value; wherein the input to the neural network further includes the signal intensity image.
7. The method of claim 6, wherein the signal intensity image maps values of log MRI signal intensity, S, at the respective positions across the object at a predetermined b-value.
8. The method of claim 6 or 7, wherein the values of MRI signal intensity mapped by the signal intensity image are intercept values at the respective positions across the object, the intercept value at a respective position being the intercept value at a predetermined b-value of the fitting to the data points on the graph for that position.
9. The method of any one of claims 6 to 8 as dependent on claim 5, wherein each training sample inputted into the neural network further includes a signal intensity image corresponding to the ADC map.
10. The method according to any one of the previous claims, including a preliminary step of: performing MRI of the object at the respective and different b-values to acquire the ADC map or the data from which the ADC map is derivable.
11. The method according to any one of the previous claims wherein the object is a human or animal subject.
12. The method according to claim 11, including a further step of: analysing the predicted uncertainty map, or an image derived therefrom, for assessment of disease extent in the human or animal subject.
13. A computer system for analysing magnetic resonance images of an object, the computer system being programmed to perform the method of any one of claims 1 to 9.
14. An imaging system including: a scanner for obtaining magnetic resonance images of an object at respective and different b-values; and the computer system of claim 13 operatively connected to the scanner to acquire from the obtained images the ADC map or the data from which the ADC map is derivable.
15. A computer program comprising code which, when the code is executed on a computer, causes the computer to perform the method of any one of claims 1 to 9.
16. A computer readable medium storing the computer program of claim 15.
17. A method of training a neural network programmed to output a predicted uncertainty map from an input including an ADC map or data from which the ADC map is derivable, wherein: the ADC map maps values of ADC at respective positions across an object, the ADC value at a respective position being the negative gradient from a fitting to data points on a graph of the log of the intensities, S, of MRI signals acquired at that position against b-values used to obtain the MRI signals; and the predicted uncertainty map maps values of predicted uncertainty at the respective positions across an object, each predicted uncertainty value being a predicted measure, at a respective position, of the standard deviation in a corresponding ADC value at that position; the method including the steps of: providing a set of training samples, each sample including an ADC map or data from which the ADC map is derivable, and a corresponding actual uncertainty map which maps values of actual uncertainty at the respective positions across the object, each actual uncertainty value being an actual measure, at a respective position, of the standard deviation in the corresponding ADC value at that position; and training the neural network to minimise a cost function that measures similarity between (i) the predicted uncertainty map outputted by the neural network when the ADC map of a given sample or the data from which the ADC map is derivable are inputted into the neural network, and (ii) the actual uncertainty map of that sample.
18. The method of claim 17, wherein the cost function includes a parameter that measures a pixel-wise difference between the actual uncertainty map and the predicted uncertainty map, and/or visual similarity between the actual uncertainty map and the predicted uncertainty map.
19. The method of any one of claims 17 or 18, wherein the cost function includes a parameter that measures the difference between a factor encoding the b-values used to acquire the actual uncertainty map and a corresponding factor derived from the predicted uncertainty map.
20. The method of any one of claims 17 to 19, wherein: the input to the neural network further includes a signal intensity image mapping values of MRI signal intensity, S, at the respective positions across the object at a predetermined b-value; each training sample further includes a signal intensity image corresponding to the ADC map; and the neural network is trained to minimise a cost function that measures similarity between (i) the predicted uncertainty map outputted by the neural network when the ADC map of a given sample or the data from which the ADC map is derivable, and the signal intensity image are inputted into the neural network, and (ii) a ground truth uncertainty map of that sample.
PCT/EP2022/083857 2021-11-30 2022-11-30 Analysis of apparent diffusion coefficient maps acquired using mri WO2023099569A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GR20210100832 2021-11-30
GR20210100832 2021-11-30

Publications (1)

Publication Number Publication Date
WO2023099569A1 true WO2023099569A1 (en) 2023-06-08

Family

ID=84537384

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/083857 WO2023099569A1 (en) 2021-11-30 2022-11-30 Analysis of apparent diffusion coefficient maps acquired using mri

Country Status (1)

Country Link
WO (1) WO2023099569A1 (en)

Non-Patent Citations (19)

* Cited by examiner, † Cited by third party
Title
BARNES, A.ALONZI, R.BLACKLEDGE, M.CHARLES-EDWARDS, G.COLLINS, D., COOK, G.COUTTS, G.GOH, V.MARTIN, G.KELLY, C.: "Others: UK Quantitative WB-DWI Technical Workgroup: consensus meeting recommendations on optimisation, quality control, processing and analysis of quantitative whole-body diffusion weighted imaging for cancer", THE BRITISH JOURNAL OF RADIOLOGY, 2017, pages 20170577
BLACKLEDGE MATTHEW D. ET AL: "Noise-Corrected, Exponentially Weighted, Diffusion-Weighted MRI (niceDWI) Improves Image Signal Uniformity in Whole-Body Imaging of Metastatic Prostate Cancer", FRONTIERS IN ONCOLOGY, vol. 10, 8 May 2020 (2020-05-08), pages 1 - 12, XP093026566, DOI: 10.3389/fonc.2020.00704 *
BLACKLEDGE, M.D., TUNARIU, N., ZUGNI, F., HOLBREY, R., ORTON, M.R., RIBEIRO, A., HUGHES, J.C., SCURR, E.D., COLLINS, D.J., LEACH, M.O.: "Noise-Corrected, Exponentially Weighted, Diffusion-Weighted MRI (niceDWI) Improves Image Signal Uniformity in Whole-Body Imaging of Metastatic Prostate Cancer", FRONTIERS IN ONCOLOGY, 2020, Retrieved from the Internet <URL:https://doi.org/10.3389/fonc.2020.00704>
EIBER, M., HOLZAPFEL, K., GANTER, C., EPPLE, K., METZ, S., GEINITZ, H., KÜBLER, H., GAA, J., RUMMENY, E.J., BEER, A.J.: "Whole-body MRI including diffusion-weighted imaging (DWI) for patients with recurring prostate cancer: Technical feasibility and assessment of lesion conspicuity in DWI", JOURNAL OF MAGNETIC RESONANCE IMAGING, vol. 33, no. 5, 2011, pages 1160, Retrieved from the Internet <URL:https://doi.org/10.1002/jmri.22542>
GILES, S.L., MESSIOU, C., COLLINS, D.J., MORGAN, V.A., SIMPKIN, C.J., WEST, S., DAVIES, F.E., MORGAN, G.J., DESOUZA, N.M.: "Whole-Body Diffusion-weighted MR Imaging for Assessment of Treatment Response in Myeloma", RADIOLOGY, vol. 271, no. 3, 2014, pages 785, Retrieved from the Internet <URL:https://doi.org/10.1148/radiol.13131529>
HE, K., ZHANG, X., REN, S., SUN, J.: DELVING DEEP INTO RECTIFIERS: SURPASSING HUMAN-LEVEL PERFORMANCE ON IMAGENET CLASSIFICATION
IOANNIS DELAKIS ET AL: "Developing a quality control protocol for diffusion imaging on a clinical MRI system; Developing a quality control protocol for diffusion imaging on a clinical MRI system", PHYSICS IN MEDICINE AND BIOLOGY, INSTITUTE OF PHYSICS PUBLISHING, BRISTOL GB, vol. 49, no. 8, 21 April 2004 (2004-04-21), pages 1409 - 1422, XP020024081, ISSN: 0031-9155, DOI: 10.1088/0031-9155/49/8/003 *
KINGMA, D.P., BA, J.: "Adam: A Method for Stochastic Optimization", 3RD INTERNATIONAL CONFERENCE ON LEARNING REPRESENTATIONS, ICLR 2015 - CONFERENCE TRACK PROCEEDINGS, December 2014 (2014-12-01), Retrieved from the Internet <URL:https://arxiv.org/abs/1412.6980v9>
KOH, D.M., BLACKLEDGE, M., PADHANI, A.R., TAKAHARA, T., KWEE, T.C., LEACH, M.O., COLLINS, D.J.: WHOLE-BODY DIFFUSION-WEIGHTED MRI: TIPS, TRICKS, AND PITFALLS, 2012, Retrieved from the Internet <URL:https://doi.org/10.2214/AJR.11.7866>
MESSIOU, C., HILLENGASS, J., DELORME, S., LECOUVET, F.E., MOULOPOULOS, L.A., COLLINS, D.J., BLACKLEDGE, M.D., ABILDGAARD, N., OSTER: "Guidelines for acquisition, interpretation, and reporting of whole-body MRI in myeloma: Myeloma response assessment and diagnosis system (MY-RADS)", RADIOLOGY, vol. 291, no. 1, 2019, pages 513, Retrieved from the Internet <URL:https://doi.org/10.1148/radiol.2019181949>
O'CONNOR, J.P.B., ABOAGYE, E.O., ADAMS, J.E., AERTS, H.J.W.L., BARRINGTON, S.F., BEER, A.J., BOELLAARD, R., BOHNDIEK, S.E., BRADY, M., BROWN, G.: "Imaging biomarker roadmap for cancer studies", NATURE REVIEWS CLINICAL ONCOLOGY, vol. 14, no. 3, 2017, pages 169 - 186, Retrieved from the Internet <URL:https://doi.org/10.1038/nrclinonc.2016.162>
PADHANI, A.R., KOH, D.M., COLLINS, D.J.: "Whole-Body Diffusion-weighted MR Imaging in Cancer: Current Status and Research Directions", RADIOLOGY, vol. 261, no. 3, 2011, Retrieved from the Internet <URL:https://doi.org/10.1148/radiol.11110474>
PADHANI, A.R., LECOUVET, F.E., TUNARIU, N., KOH, D.M., DE KEYZER, F., COLLINS, D.J., SALA, E., SCHLEMMER, H.P., PETRALIA, G., VARGAS, H.A.: "METastasis Reporting and Data System for Prostate Cancer: Practical Guidelines for Acquisition, Interpretation, and Reporting of Whole-body Magnetic Resonance Imaging-based Evaluations of Multiorgan Involvement in Advanced Prostate Cancer [figure presente", EUROPEAN UROLOGY, vol. 71, no. 1, 2017, pages 81 - 92, Retrieved from the Internet <URL:https://doi.org/10.1016/j.eururo.2016.05.033>
PADHANI, A.R., MAKRIS, A., GALL, P., COLLINS, D.J., TUNARIU, N., DE BONO, J.S.: "Therapy monitoring of skeletal metastases with whole-body diffusion MRI", JOURNAL OF MAGNETIC RESONANCE IMAGING, vol. 39, no. 5, 2014, pages 1049 - 1078, Retrieved from the Internet <URL:https://doi.org/10.1002/jmri.24548>
PEÑA-NOGALES ÓSCAR ET AL: "Determination of optimized set of b-values for Apparent Diffusion Coefficient mapping in liver Diffusion-Weighted MRI", JOURNAL OF MAGNETIC RESONANCE, ACADEMIC PRESS, ORLANDO, FL, US, vol. 310, 31 October 2019 (2019-10-31), XP085970883, ISSN: 1090-7807, [retrieved on 20191031], DOI: 10.1016/J.JMR.2019.106634 *
RONNEBERGER, O., FISCHER, P., BROX, T.: "U-Net: Convolutional Networks for Biomedical Image Segmentation", MICCAI, 2015, pages 234 - 241, XP047565084, Retrieved from the Internet <URL:https://doi.org/10.1007/978-3-319-24574-4_28> DOI: 10.1007/978-3-319-24574-4_28
SIMONYAN, K., ZISSERMAN, A.: VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION, 2015, Retrieved from the Internet <URL:http://www.robots.ox.ac.uk>
XING DA ET AL: "OPTIMISED DIFFUSION-WEIGHTING FOR MEASUREMENT OF APPARENT DIFFUSION COEFFICIENT (ADC) IN HUMAN BRAIN", MAGNETIC RESONANCE IMAGING, vol. 15, no. 7, 1 January 1997 (1997-01-01), pages 771 - 784, XP093026915 *
ZORMPAS-PETRIDIS, K., TUNARIU, N., CURCEAN, A., MESSIOU, C., CURCEAN, S., COLLINS, D.J., HUGHES, J.C., JAMIN, Y., KOH, D.M., BLACKL: "Accelerating Whole-Body Diffusion-weighted MRI with Deep Learning-based Denoising Image Filters", RADIOLOGY: ARTIFICIAL INTELLIGENCE, vol. 3, 2021, pages 5

Similar Documents

Publication Publication Date Title
US11967072B2 (en) Three-dimensional object segmentation of medical images localized with object detection
Daniel et al. Automated renal segmentation in healthy and chronic kidney disease subjects using a convolutional neural network
US9858665B2 (en) Medical imaging device rendering predictive prostate cancer visualizations using quantitative multiparametric MRI models
US11696701B2 (en) Systems and methods for estimating histological features from medical images using a trained model
Dupont et al. Fully-integrated framework for the segmentation and registration of the spinal cord white and gray matter
Vasilic et al. A novel local thresholding algorithm for trabecular bone volume fraction mapping in the limited spatial resolution regime of in vivo MRI
EP3703007B1 (en) Tumor tissue characterization using multi-parametric magnetic resonance imaging
Irving et al. Deep quantitative liver segmentation and vessel exclusion to assist in liver assessment
Yoo et al. Application of variable threshold intensity to segmentation for white matter hyperintensities in fluid attenuated inversion recovery magnetic resonance images
WO2019182520A1 (en) Method and system of segmenting image of abdomen of human into image segments corresponding to fat compartments
CN112470190A (en) System and method for improving low dose volume contrast enhanced MRI
GB2586119A (en) An improved medical scan protocol for in-scanner patient data acquisition analysis
WO2013086026A1 (en) System and method of automatically detecting tissue abnormalities
Hameeteman et al. Carotid wall volume quantification from magnetic resonance images using deformable model fitting and learning-based correction of systematic errors
Aja-Fernández et al. Validation of deep learning techniques for quality augmentation in diffusion MRI for clinical studies
Huang et al. Deep learning-based diffusion tensor cardiac magnetic resonance reconstruction: a comparison study
US11481934B2 (en) System, method, and computer-accessible medium for generating magnetic resonance imaging-based anatomically guided positron emission tomography reconstruction images with a convolutional neural network
Manikis et al. Diffusion modelling tool (DMT) for the analysis of diffusion weighted imaging (DWI) magnetic resonance imaging (MRI) data
Loizillon et al. Automatic motion artefact detection in brain T1-weighted magnetic resonance images from a clinical data warehouse using synthetic data
WO2023219963A1 (en) Deep learning-based enhancement of multispectral magnetic resonance imaging
Goldsmith et al. Nonlinear tube-fitting for the analysis of anatomical and functional structures
WO2023099569A1 (en) Analysis of apparent diffusion coefficient maps acquired using mri
Qin et al. Automated segmentation of the left ventricle from MR cine imaging based on deep learning architecture
Khademi et al. Multiscale partial volume estimation for segmentation of white matter lesions using flair MRI
Liang et al. Mouse brain MR super-resolution using a deep learning network trained with optical imaging data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22826068

Country of ref document: EP

Kind code of ref document: A1