WO2020056196A1 - Fully automated personalized body composition profile - Google Patents

Fully automated personalized body composition profile

Info

Publication number
WO2020056196A1
WO2020056196A1 (PCT/US2019/050898)
Authority
WO
WIPO (PCT)
Prior art keywords
image
segmented
mri image
mri
dixon
Prior art date
Application number
PCT/US2019/050898
Other languages
French (fr)
Inventor
Alexander M. GRAFF
Natalie Marie SCHENKER-AHMED
Christine Menking SWISHER
Santos II DOMINGUEZ
Jian Wu
Original Assignee
Human Longevity, Inc.
Priority date
Filing date
Publication date
Application filed by Human Longevity, Inc. filed Critical Human Longevity, Inc.
Publication of WO2020056196A1 publication Critical patent/WO2020056196A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Definitions

  • the embodiments disclosed herein are generally directed towards systems and methods for predicting and managing risk of disorders and diseases for individuals. More specifically, there is a need for systems and methods for acquiring whole body MRI images suitable for segmentation, and quantifying aspects of the morphological features of fat/muscle in various areas of the body to develop disease risk models, identify biomarkers for disease progression, etc.
  • Body composition profiling allows a clinician to gain valuable information about a patient's health.
  • the subject disclosure provides for an automated method for body composition profiling with MRI DIXON imaging.
  • the fully automated body composition method developed can be used for radiation-free MRI risk stratification without any manual processing steps, making it more accessible clinically. This would most likely be used for risk prediction and risk stratification for diseases such as Type II Diabetes, cardiovascular disease, and obesity.
  • a computer-implemented method for automatically generating a body composition profile includes obtaining an MRI image of a patient.
  • the method further includes manually segmenting the MRI image to obtain a manually segmented MRI image, the manually segmented MRI image including delineated segments that correspond to targeted body components of the patient, the targeted body components comprising fat and/or muscle.
  • the method further includes performing a series of convolutions, pooling, and upsampling on the MRI image to obtain a segmented output, the segmented output comprising masks of the targeted body components.
  • the method further includes determining an amount of overlap between the segmented output and the manually segmented MRI image.
  • the method further includes validating the segmented output when the overlap is within a defined threshold.
  • the method further includes calculating volumes and statistical measurements from the segmented output to generate quantitative image biomarkers.
  • the method further includes generating the body composition profile from the generated biomarkers.
  • a system including a processor and a memory comprising instructions stored thereon, which when executed by the processor, cause the processor to perform a method for automatically generating a body composition profile.
  • the method includes obtaining an MRI image of a patient.
  • the method further includes manually segmenting the MRI image to obtain a manually segmented MRI image, the manually segmented MRI image including delineated segments that correspond to targeted body components of the patient, the targeted body components comprising fat and/or muscle.
  • the method further includes performing a series of convolutions, pooling, and upsampling on the MRI image to obtain a segmented output, the segmented output comprising masks of the targeted body components.
  • the method further includes determining an amount of overlap between the segmented output and the manually segmented MRI image.
  • the method further includes validating the segmented output when the overlap is within a defined threshold.
  • the method further includes calculating volumes and statistical measurements from the segmented output to generate quantitative image biomarkers.
  • the method further includes generating the body composition profile from the generated biomarkers.
  • a non-transitory computer-readable storage medium including instructions (e.g., stored sequences of instructions) that, when executed by a processor, cause the processor to perform a method for automatically generating a body composition profile.
  • the method includes obtaining an MRI image of a patient.
  • the method further includes manually segmenting the MRI image to obtain a manually segmented MRI image, the manually segmented MRI image including delineated segments that correspond to targeted body components of the patient, the targeted body components comprising fat and/or muscle.
  • the method further includes performing a series of convolutions, pooling, and upsampling on the MRI image to obtain a segmented output, the segmented output comprising masks of the targeted body components.
  • the method further includes determining an amount of overlap between the segmented output and the manually segmented MRI image.
  • the method further includes validating the segmented output when the overlap is within a defined threshold.
  • the method further includes calculating volumes and statistical measurements from the segmented output to generate quantitative image biomarkers.
  • the method further includes generating the body composition profile from the generated biomarkers.
  • a system includes means for storing instructions, and means for executing the stored instructions that, when executed by the means, cause the means to perform a method.
  • the method includes obtaining an MRI image of a patient.
  • the method further includes manually segmenting the MRI image to obtain a manually segmented MRI image, the manually segmented MRI image including delineated segments that correspond to targeted body components of the patient, the targeted body components comprising fat and/or muscle.
  • the method further includes performing a series of convolutions, pooling, and upsampling on the MRI image to obtain a segmented output, the segmented output comprising masks of the targeted body components.
  • the method further includes determining an amount of overlap between the segmented output and the manually segmented MRI image.
  • the method further includes validating the segmented output when the overlap is within a defined threshold.
  • the method further includes calculating volumes and statistical measurements from the segmented output to generate quantitative image biomarkers.
  • the method further includes generating the body composition profile from the generated biomarkers. DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a U-Net architecture, according to certain aspects of the present disclosure.
  • FIG. 2 illustrates generation of a segmented image from an input of an MRI image, according to certain aspects of the present disclosure.
  • FIG. 3 illustrates a stitched image of a whole body, according to certain aspects of the present disclosure.
  • FIG. 4 illustrates an example flow diagram for automatically generating a body composition profile, according to certain aspects of the disclosure.
  • FIG. 5 is a block diagram illustrating an example computer system with which aspects of the subject technology can be implemented.
  • not all of the depicted components in each figure may be required, and one or more implementations may include additional components not shown in a figure. Variations in the arrangement and type of the components may be made without departing from the scope of the subject disclosure. Additional components, different components, or fewer components may be utilized within the scope of the subject disclosure.
  • Body Composition Profiles derived from whole body MRI is an important predictor of metabolic syndromes, diseases including T2D and Obesity, as well as Coronary Heart Disease (CHD), Ischemic Stroke, and cancer, particularly colorectal cancer. Findings from Body Composition Profiles are actionable as metabolic syndrome or any of its components, can be managed with lifestyle changes to delay or prevent the development of serious health problems.
  • aspects of the present disclosure describe a method using a pair of convolutional neural networks (CNN) with ResNet style architectures for classification of each slice's anatomical location within a 3D volume, followed by a pair of fully convolutional networks (FCN) with a U-Net style architecture to automatically segment regions related to body composition using fat and water images derived from multi-station whole body Dixon MRI. The segmented data is then used to derive a quantitative body composition profile.
  • a method for automatically generating a body composition profile includes obtaining an MRI image of a patient.
  • the method further includes manually segmenting the MRI image to obtain a manually segmented MRI image, the manually segmented MRI image including delineated segments that correspond to targeted body components of the patient, the targeted body components comprising fat and/or muscle.
  • the method further includes performing a series of convolutions, pooling, and upsampling on the MRI image to obtain a segmented output, the segmented output comprising masks of the targeted body components.
  • the method further includes determining an amount of overlap between the segmented output and the manually segmented MRI image.
  • the method further includes validating the segmented output when the overlap is within a defined threshold.
  • the method further includes calculating volumes and statistical measurements from the segmented output to generate quantitative image biomarkers.
  • the method further includes generating the body composition profile from the generated biomarkers.
  • a system for automatically generating a body composition profile includes a magnetic resonance imaging device configured to obtain an MRI image of a patient.
  • the system further includes a computing device communicatively connected to the magnetic resonance imaging device.
  • the computing device receives a manually segmented MRI image.
  • the manually segmented MRI image includes delineated segments that correspond to targeted body components of the patient.
  • the targeted body components include fat and/or muscle.
  • the computing device includes a mask generator configured to perform a series of convolutions, pooling, and upsampling on the MRI image to obtain a segmented output, the segmented output comprising masks of the targeted body components.
  • the mask generator is further configured to determine an amount of overlap between the segmented output and the manually segmented MRI image and validate the segmented output when the overlap is within a defined threshold.
  • the computing device further includes a profile generator configured to calculate volumes and statistical measurements from the segmented output to generate quantitative image biomarkers.
  • the profile generator is further configured to generate the body composition profile from the generated biomarkers.
  • one element can be “on,” “attached to,” “connected to,” or “coupled to” another element regardless of whether the one element is directly on, attached to, connected to, or coupled to the other element or there are one or more intervening elements between the one element and the other element.
  • when reference is made to a list of elements (e.g., elements a, b, c), such reference is intended to include any one of the listed elements by itself, any combination of less than all of the listed elements, and/or a combination of all of the listed elements. Section divisions in the specification are for ease of review only and do not limit any combination of elements discussed.
  • the phrases “features of interest”, “pathological features” or “metastatic features” can refer to tissue structures that have clinical significance, for example, benign tumors, metastatic cancers, damaged or diseased tissue, etc.
  • the phrases “medical imaging techniques”, “medical imaging methods” or “medical imaging systems” can denote techniques or processes for obtaining visual representations of the interior of an individual's body for clinical analysis and medical intervention, as well as visual representation of the function of some organs or tissues.
  • various imaging features can be identified and characterized to provide a structural basis for diagnosing and treating various types of diseases (e.g., dementia, cancer, cardiovascular disease, cerebrovascular disease, liver disease, etc).
  • medical imaging techniques can include, but are not limited to, x-ray radiography, magnetic resonance imaging, ultrasound, positron emission tomography (PET), computed tomography (CT), etc.
  • Magnetic Resonance Imaging denotes a radiology imaging technique that uses an MRI scanner (that produces magnetic fields and radio waves) and a computing device to produce images of body structures.
  • the MRI scanner can be a “closed MRI” consisting of a giant circular magnet, where the patient subject is placed on a moveable bed that is inserted into the magnetic tube; an “open MRI” consisting of two horizontal magnetic disks connected to a pillar between them, where the patient subject sits or stands between the disks; or a “portable MRI” consisting of a hand-portable scanner containing magnet(s) that are optimized to generate ultra-low frequency magnetic fields coupled to highly sensitive superconducting quantum interference devices (SQUIDs).
  • MRI works by employing the MRI scanner magnet(s) to create a strong magnetic field that aligns the protons of nuclei of interest, typically 1H, which are then exposed to radiofrequency waves. The net magnetic moment created by spins of the nuclei of interest within tissues in the body produces a faint signal that is detected by receiver coil(s).
  • the receiver information is processed by a computer, and an image is produced.
  • the image and resolution produced by MRI can be quite detailed and can detect tiny changes of structures and function of tissues within the body.
  • contrast agents such as gadolinium, can be used to increase the accuracy of the images.
  • a “MRI pulsed sequence” or “MRI sequence” denotes a programmed set of changing radiofrequency pulses and magnetic gradients that are designed to result in images that emphasize one or more desired tissue image features (or appearances). Each sequence will have a number of parameters, and multiple sequences are grouped together into an MRI protocol. Examples of the types of MRI pulsed sequences that are available include, but are not limited to: spin echo sequences (e.g., T1-weighted, T2-weighted, etc.), inversion recovery sequences, gradient echo sequences, diffusion weighted sequences, saturation-recovery sequences, echo planar pulse sequences, spiral pulse sequences, etc.
  • a “structural MRI” denotes MRI techniques that are focused on providing detailed images of anatomical structures, most commonly neurological or other soft (e.g., tendons, ligaments, fascia, skin, fibrous, fat, synovial membranes, muscles, nerves, blood vessels, etc.) tissue structures.
  • MR pulsed sequences that can be used for structural MRI include, but are not limited to: spin echo sequences (e.g., T1-weighted, T2-weighted, etc.), inversion recovery sequences, gradient echo sequences, diffusion weighted sequences, etc.
  • functional MRI denotes MRI techniques that are focused on providing images that can emphasize non-structural, anatomical information within tissues such as metabolic activity and the diffusive properties of water in tissue.
  • MR pulsed sequences that can be used for functional MRI include, but are not limited to: diffusion weighted sequences, arterial spin labeling, etc.
  • a Body Composition Profile derived from whole body MRI is an important predictor for several diseases including type 2 diabetes (T2D), coronary heart disease (CHD), cancer, and obesity.
  • Currently, Dixon imaging is utilized primarily to determine the amount of fat present in the liver. Additionally, the only way to obtain segmented fat/muscle images is to manually delineate the images by hand. This process is costly and time consuming. Described herein are systems and methods for acquiring Dixon images throughout the whole body, enabling segmentation of fat and muscle in regions other than the liver. Additionally, fully convolutional neural networks with a U-Net style architecture may be utilized to automatically segment regions related to body composition in the acquired Dixon images. The segmented data is then used to derive quantitative biomarkers and a personalized body composition profile. These features and/or the neural network may also be used to build integrated risk models for several diseases.
  • a convolutional neural network is trained using a training database that contains patient cohort training data that pairs each patient's MRI images (e.g., 3D T1-weighted MRI images, etc.) with their corresponding cortical surface and volumetric measurements.
  • the trained convolutional neural network can then be used to segment regions of a subject's body using their structural MRI images (i.e., 3D T1-weighted MRI images) and quantify certain aspects (e.g., volume, surface area, thickness, etc.) of the morphological features (e.g., visceral fat, subcutaneous fat, intramuscular fat, lean muscle, etc.) of the subject's body.
  • A U-Net, named after the shape of the neural network, performs the same convolutions as a CNN, but instead of a fully connected layer after the convolutions, the network uses the learned filters to reproduce the image.
  • as examples of the appropriate output of the network, segmented body fat/muscle areas are needed in the current embodiment.
  • FIG. 1 illustrates a U-Net architecture 100, according to certain aspects of the present disclosure.
  • the U-net architecture allows for 32x32 pixels in the lowest resolution.
  • Each elongated rectangle (e.g., box) corresponds to a multi-channel feature map. The number of channels is denoted on top of the box.
  • the x-y-size is provided at the lower left edge of the box.
  • Boxes 102, 104, 106, and 108 represent copied feature maps.
  • the arrows denote the different operations.
  • the U-Net architecture involves several 3x3 convolutions followed by ReLU layers 110, which are also copied and cropped 112, as shown by the horizontal arrows.
  • the copy-and-crop steps are what give the U-Net its shape.
  • Several down conversions (e.g., max pool) 114 and up conversions 116 are also performed, as shown by the vertical arrows.
  • a 1x1 convolution 118 outputs a segmentation map.
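The building blocks of the U-Net described above can be sketched as follows. This is an illustrative, scaled-down sketch in plain Python, not the patented implementation: real networks operate on multi-channel tensors, and the function names are chosen here for clarity.

```python
def max_pool_2x2(image):
    """Downsample a 2D grid by taking the max of each 2x2 block (the 'max pool' arrows)."""
    h, w = len(image), len(image[0])
    return [[max(image[i][j], image[i][j + 1],
                 image[i + 1][j], image[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

def upsample_2x2(image):
    """Nearest-neighbour upsampling: each pixel becomes a 2x2 block (the 'up conversion' arrows)."""
    out = []
    for row in image:
        expanded = [v for v in row for _ in (0, 1)]
        out.append(expanded)
        out.append(list(expanded))
    return out

def center_crop(image, target_h, target_w):
    """Center-crop a copied encoder feature map so it can be concatenated
    with an upsampled map of smaller spatial size (the 'copy and crop' arrows)."""
    h, w = len(image), len(image[0])
    top, left = (h - target_h) // 2, (w - target_w) // 2
    return [row[left:left + target_w] for row in image[top:top + target_h]]

# A 4x4 toy feature map walked through one down/up step.
fmap = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
pooled = max_pool_2x2(fmap)    # 2x2 map
up = upsample_2x2(pooled)      # back to 4x4
skip = center_crop(fmap, 4, 4) # copied encoder map, cropped to match
```

In the full architecture, `skip` and `up` would be concatenated channel-wise before the next 3x3 convolutions.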
  • Body composition profiling is important because it allows a clinician to gain valuable information about a patient’s health.
  • the amount of visceral fat, subcutaneous fat, intramuscular fat, and lean muscle, etc. are directly correlated with many diseases such as T2D and CHD. For example, higher liver fat is associated with T2D, whereas lower liver fat is associated with CHD.
  • an automated method segments the regions in the patient’s body including but not limited to, visceral fat, subcutaneous fat, intramuscular fat, lean muscle, etc.
  • a U-Net convolutional neural network is trained to segment different areas of body fat/muscle in an MRI. As described herein, this technique provides improved medical imaging segmentation.
  • Dixon imaging is an MRI sequence based on chemical shifts between water and lipids to separate water from fats in MRI images.
  • Dixon imaging is currently only used for determining the amount of fat in the liver. It is not currently used for the whole body, and has not been used for segmentation before. Conventionally, it is not common to use whole body imaging at all, due to issues of stitching multiple images together.
  • Dixon imaging visualizes fat and muscle better than other MRI methods/settings, and so is advantageous for segmenting fat in a patient's whole body. Based on the whole body Dixon image, the volume in liters of fat/muscle in the patient's whole body may be calculated.
  • an input into a U-Net convolutional neural network is a full-body Dixon MRI image.
  • the full-body Dixon image is stitched together from multiple images and two of four total slices are fed into the U-Net one slice at a time.
  • the output is a marked-up version of the Dixon image delineating areas of fat/muscle. Volume in liters of fat/muscle may be calculated by summing up the pixels/voxels for each slice.
  • the U-Net may be trained by a pair of classification models (e.g., segmentation models) with a binary classifier to identify slices within the abdomen for the fat models and regions pertaining to the thighs for the water models.
  • the classifiers' training sets included ~20,000 training images with annotation of slice location from both sets, water and fat.
  • the pair of segmentation models were developed with randomly selected images holding out 4,000 images for validation.
  • the pair of segmentation models included ~20,000 training images with manually segmented visceral adipose tissue (VAT) and abdominal subcutaneous adipose tissue (ASAT) for fat images, and posterior and anterior thigh lean mass for water images.
  • the pair of segmentation models were developed with randomly selected images holding out 1,000 images for quantitative validation. Whole body volumes were qualitatively validated with Health Nucleus data.
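The random holdout scheme described above can be sketched as follows; the counts are scaled down from the ~20,000 / 1,000 figures in the text, and the function name is illustrative rather than from the patent.

```python
import random

def holdout_split(image_ids, n_holdout, seed=0):
    """Randomly select images, holding out n_holdout of them for validation."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    ids = list(image_ids)
    rng.shuffle(ids)
    # Everything past the holdout prefix is used for model development.
    return ids[n_holdout:], ids[:n_holdout]  # (training, validation)

train_ids, val_ids = holdout_split(range(200), n_holdout=40)
```

The same split function would serve both the classification and segmentation model pairs, with different holdout sizes.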
  • FIG. 2 illustrates generation of a segmented image 202 from an input of an MRI image 200, according to certain aspects of the present disclosure.
  • the input image may be a colored MRI image.
  • masks of the targeted body components were generated using color thresholding techniques to isolate each segment from the manually segmented images for the U-NET.
  • a black and white representation of the original image was also created to be fed into the neural network as training data.
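The two preparation steps above, isolating each annotated segment by color thresholding and producing a greyscale copy of the original for the network, can be sketched as follows. The color ranges and the 1x3 toy image are hypothetical assumptions, not values from the patent.

```python
def color_threshold_mask(rgb_image, lo, hi):
    """Binary mask: 1 where a pixel's (R, G, B) falls inside the given color range."""
    return [[1 if all(lo[c] <= px[c] <= hi[c] for c in range(3)) else 0
             for px in row]
            for row in rgb_image]

def to_greyscale(rgb_image):
    """Standard luminance conversion, yielding the black-and-white training input."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b)
             for (r, g, b) in row]
            for row in rgb_image]

# Toy annotated image: red marks a fat segment, blue a muscle segment (assumed colors).
annotated = [[(255, 0, 0), (0, 0, 255), (0, 0, 0)]]
fat_mask = color_threshold_mask(annotated, lo=(200, 0, 0), hi=(255, 60, 60))
grey = to_greyscale(annotated)
```

Each annotation color yields one binary mask, and the greyscale image plus its masks form one training pair.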
  • the derived image biomarkers are used to create personalized body composition profiles 208 from the segmented images 204, 206 for each patient, which can be compared with age and gender matched population norms and disease profiles.
  • the derived image biomarkers and/or the FCN can also be used to generate integrated risk models.
  • the automated segmentation method may also leverage other images or image augmentation strategies to improve the performance of the U-Net segmentation model.
  • volumes calculated for the body composition profile or the FCN will be used in machine learning or statistical models for risk prediction of diseases and the development of personalized risk mitigation recommendations.
  • a greyscale MRI image is input into a convolutional neural network (e.g., the U-NET) to train the network.
  • the U-NET outputs images that include segmented areas of interest (e.g., fat/muscle composition).
  • the patient’s body composition may be calculated based on the output from the U-NET, which includes segments overlaid onto the original image.
  • a personalized body composition profile is generated using the information gathered in the previous steps.
  • 3D volumes for the patient’s body profiles are calculated from the resultant segmented masks by summing the total number of voxels and multiplying them by the volume of a single voxel as indicated in the Digital Imaging and Communications in Medicine (DICOM) metadata.
  • images may be missing from a series of MRIs. To work around this issue, the z axis of each succeeding image is inspected to ensure it is the next image in the series. If it is not the next image, then it may be determined how many images are missing. This allows for accurate volume calculations regardless of missing images.
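The volume calculation and the missing-slice workaround described above can be sketched together. This is a minimal sketch assuming pixel spacing and slice spacing in millimeters from the DICOM metadata; the function names and toy values are illustrative.

```python
def voxel_volume_liters(pixel_spacing_mm, slice_spacing_mm):
    """Volume of a single voxel in liters (1 L = 1e6 mm^3)."""
    dx, dy = pixel_spacing_mm
    return dx * dy * slice_spacing_mm / 1e6

def segment_volume_liters(masks, pixel_spacing_mm, slice_spacing_mm):
    """Sum mask voxels across all slices, then multiply by the voxel volume."""
    n_voxels = sum(px for sl in masks for row in sl for px in row)
    return n_voxels * voxel_volume_liters(pixel_spacing_mm, slice_spacing_mm)

def count_missing_slices(z_positions, slice_spacing_mm):
    """Inspect the z axis of each succeeding image; a gap wider than the
    slice spacing indicates how many images are missing from the series."""
    missing = 0
    for prev, cur in zip(z_positions, z_positions[1:]):
        missing += max(0, round(abs(cur - prev) / slice_spacing_mm) - 1)
    return missing

# Two 2x2 mask slices, 2 mm x 2 mm pixels, 5 mm slice spacing.
masks = [[[1, 1], [0, 1]], [[1, 0], [0, 0]]]
vol = segment_volume_liters(masks, (2.0, 2.0), 5.0)  # 4 voxels x 20 mm^3
z = [0.0, 5.0, 15.0, 20.0]                           # 10 mm jump: one slice missing
missing = count_missing_slices(z, 5.0)
```

Accounting for the detected gaps keeps the volume totals accurate regardless of missing images.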
  • the performance of the U-NET may be assessed using the Dice Coefficient (DSC) to determine how well the model is working as well as manually overlaying the predicted mask over the target mask.
  • the intersection of the mask and original image is taken.
  • the pixels that overlap in the original image have their color channel values changed to their respective colors based on the type of mask.
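The Dice Coefficient check and the overlay recoloring described above can be sketched as follows. The validation threshold and the mask color are illustrative assumptions; only the Dice formula itself, 2|A ∩ B| / (|A| + |B|), is standard.

```python
def dice_coefficient(pred, target):
    """Dice Coefficient (DSC) between two binary masks: 2|A n B| / (|A| + |B|)."""
    inter = sum(p * t for prow, trow in zip(pred, target)
                for p, t in zip(prow, trow))
    total = (sum(p for row in pred for p in row)
             + sum(t for row in target for t in row))
    return 2.0 * inter / total if total else 1.0

def overlay_mask(grey_image, mask, color):
    """Change the color channel values of overlapping pixels to the mask's color;
    non-overlapping pixels keep their greyscale value in all three channels."""
    return [[color if m else (g, g, g)
             for g, m in zip(grow, mrow)]
            for grow, mrow in zip(grey_image, mask)]

pred = [[1, 1], [0, 1]]
target = [[1, 0], [0, 1]]
dsc = dice_coefficient(pred, target)   # 2*2 / (3 + 2) = 0.8
is_valid = dsc >= 0.7                  # hypothetical validation threshold
colored = overlay_mask([[100, 100]], [[1, 0]], (255, 0, 0))
```

A different `color` per mask type (e.g. fat vs. muscle) reproduces the per-mask recoloring the text describes.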
  • the fully automated body composition results can be directly used in products that showcase body composition risk factors with MRI without a manual processing step.
  • the described systems and methods may also be used to link diseases to abnormal body compositions.
  • abnormal body compositions may be flagged and monitored to determine whether there are links to certain diseases. In this way, clinicians can improve patient health with better and earlier diagnosis of diseases.
  • the whole-body imaging described herein could be implemented in products that detect fat composition of various regions of the body.
  • a pair of convolutional neural networks (CNN) with ResNet style architectures may be utilized for classification of the slices of anatomical locations pertaining to the abdomen and thighs for fat and water images respectively.
  • slices containing anatomy of interest, as classified within the 3D volume, may then be processed by a pair of fully convolutional networks (FCN) with a U-Net style architecture to automatically segment regions related to body composition using fat and water images derived from multi-station whole body Dixon MRI.
  • the segmented data is reconstructed into 3D volumes and then used to derive a quantitative body composition profile.
  • the fully automated method for body composition profiling with MRI Dixon imaging can be used for radiation-free MRI risk stratification without any manual processing steps, making it more accessible clinically. This would most likely be used for diseases such as type 2 diabetes, cardiovascular disease, and obesity, but could also be used to make novel discoveries possibly linking diseases to abnormal body compositions. Evaluation of the predictive power of the automated body composition profile may also be utilized on large, longitudinal population cohorts and in integrated risk models that leverage genomics and lifestyle factors.
  • FIG. 3 illustrates a stitched image of a whole body 300, according to certain aspects of the present disclosure.
  • Dixon images from separate coils are stitched together to form a whole body Dixon image.
  • the chest is imaged from a chest coil 304, the abdomen from an abdomen coil 306, and the legs from leg coils 308.
  • images using a VIBE sequence are acquired in eight axial stations throughout the body, although more or fewer may be acquired in some embodiments.
  • a combination of coils is used, that may include, but may not be limited to, a head coil, various body array coils, and a scanner integrated body coil.
  • Station positions are encoded in such a way as to enable stitching together the images into a seamless sequence through the entire body. Images are normalized so as to further facilitate stitching and to minimize the differences in image contrast at station boundaries.
  • Processing of images is accomplished as follows. Using the encoded slab positions, images from all stations are compiled together to create one seamless series of axial images through the whole body.
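The compilation step above can be sketched as follows: stations are ordered by their encoded slab positions, normalized to limit contrast differences at station boundaries, and concatenated into one series. The data structures and normalization choice (min-max scaling) are illustrative assumptions, not the patented method.

```python
def normalize_station(slices):
    """Min-max scale one station's intensities to [0, 1], reducing the
    contrast jumps otherwise visible at station boundaries."""
    flat = [px for sl in slices for row in sl for px in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1  # avoid division by zero on a flat station
    return [[[(px - lo) / span for px in row] for row in sl]
            for sl in slices]

def stitch_stations(stations):
    """stations: list of (z_start_mm, slices) pairs. Sort by the encoded
    slab position and compile into one seamless axial series."""
    series = []
    for _, slices in sorted(stations, key=lambda s: s[0]):
        series.extend(normalize_station(slices))
    return series

# Two toy single-slice stations, given out of order.
head = (0.0, [[[10, 20], [30, 40]]])
chest = (200.0, [[[5, 15], [25, 35]]])
whole_body = stitch_stations([chest, head])  # head slices come first
```

With eight axial stations, the same sorting-by-position step yields the whole-body series shown in FIG. 3.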
  • the techniques described herein may be implemented as method(s) that are performed by physical computing device(s); as one or more non- transitory computer-readable storage media storing instructions which, when executed by computing device(s), cause performance of the method(s); or, as physical computing device(s) that are specially configured with a combination of hardware and software that causes performance of the method(s).
  • FIG. 4 illustrates an example flow diagram (e.g., process 400) for automatically generating a body composition profile, according to certain aspects of the disclosure.
  • for explanatory purposes, the steps of the example process 400 are described herein as occurring serially, or linearly. However, multiple instances of the example process 400 may occur in parallel.
  • an MRI image of a patient is obtained.
  • the MRI image is manually segmented to obtain a manually segmented MRI image.
  • the manually segmented MRI image may include delineated segments that correspond to targeted body components of the patient.
  • the targeted body components may include fat and/or muscle.
  • a series of convolutions, pooling, and upsampling are performed on the MRI image to obtain a segmented output.
  • the segmented output may include masks of the targeted body components.
  • an amount of overlap between the segmented output and the manually segmented MRI image is determined.
  • the segmented output is validated when the overlap is within a defined threshold.
  • volumes and statistical measurements are calculated from the segmented output to generate quantitative image biomarkers.
  • the body composition profile is generated from the generated biomarkers.
  • obtaining the MRI image may further include acquiring a first Dixon image of a first portion of the patient.
  • the process 400 may further include acquiring a second Dixon image of a second portion of the patient. The second portion may be adjacent to the first portion.
  • the process 400 may further include stitching the first Dixon image together with the second Dixon image to generate a seamlessly stitched image of the first portion and the second portion.
  • the process 400 may further include acquiring a third Dixon image of a third portion of the patient.
  • the third portion may be adjacent to the second portion.
  • the process 400 may further include acquiring a fourth Dixon image of a fourth portion of the patient, the fourth portion adjacent to the third portion.
  • the process 400 may further include overlaying the masks of the segmented output over the MRI image.
  • the process 400 may further include determining an intersection of the masks and the MRI image.
  • the process 400 may further include changing pixel colors in the MRI image based on the intersection. The pixel colors may be changed based on a determined type for each mask.
  • calculating the volumes may include summing a total number of voxels and multiplying the total by a volume of a single voxel.
  • the process 400 may further include comparing the body composition profile with age and gender matched population norms and disease profiles to generate an integrated risk model.
  • the segmented image may include segmented regions corresponding to locations in the body of visceral fat, subcutaneous fat, intramuscular fat, and/or lean muscle.
  • performing the series of convolutions, pooling, and upsampling may include contracting features of the MRI image to capture context through repeated convolutions. Each convolution may be followed by a rectified linear unit (ReLU) and a max pooling operation for downsampling.
  • the process 400 may further include expanding features of the MRI image to localize the context through upsampling of a feature map followed by convolutions. The expanding may include concatenation with a correspondingly cropped feature map from the contracting.
  • FIG. 5 is a block diagram that illustrates a computer system 500, upon which embodiments of the present teachings may be implemented.
  • computer system 500 can include a bus 502 or other communication mechanism for communicating information, and a processor 504 coupled with bus 502 for processing information.
  • computer system 500 can also include a memory, which can be a random access memory (RAM) 506 or other dynamic storage device, coupled to bus 502 for storing instructions to be executed by processor 504. Memory 506 also can be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 504.
  • computer system 500 can further include a read only memory (ROM) 508 or other static storage device coupled to bus 502 for storing static information and instructions for processor 504.
  • a storage device 510 such as a magnetic disk or optical disk, can be provided and coupled to bus 502 for storing information and instructions.
  • computer system 500 can be coupled via bus 502 to a display 512, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user.
  • An input device 514 can be coupled to bus 502 for communicating information and command selections to processor 504.
  • another type of input device is a cursor control 516, such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to processor 504 and for controlling cursor movement on display 512.
  • This input device 514 typically has two degrees of freedom in two axes, a first axis (i.e., x) and a second axis (i.e., y), that allows the device to specify positions in a plane.
  • input devices 514 allowing for 3 dimensional (x, y and z) cursor movement are also contemplated herein.
  • results can be provided by computer system 500 in response to processor 504 executing one or more sequences of one or more instructions contained in memory 506.
  • Such instructions can be read into memory 506 from another computer-readable medium or computer-readable storage medium, such as storage device 510. Execution of the sequences of instructions contained in memory 506 can cause processor 504 to perform the processes described herein.
  • hard-wired circuitry can be used in place of or in combination with software instructions to implement the present teachings.
  • implementations of the present teachings are not limited to any specific combination of hardware circuitry and software.
  • the term "computer-readable medium" (e.g., data store, data storage, etc.) or "computer-readable storage medium" refers to any media that participates in providing instructions to processor 504 for execution.
  • Such a medium can take many forms, including but not limited to, non-volatile media, volatile media, and transmission media.
  • non-volatile media can include, but are not limited to, optical, solid state, and magnetic disks, such as storage device 510.
  • volatile media can include, but are not limited to, dynamic memory, such as memory 506.
  • transmission media can include, but are not limited to, coaxial cables, copper wire, and fiber optics, including the wires that comprise bus 502.
  • Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape or any other magnetic medium, a CD-ROM or any other optical medium, punch cards, paper tape or any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, or any other tangible medium from which a computer can read.
  • instructions or data can be provided as signals on transmission media included in a communications apparatus or system to provide sequences of one or more instructions to processor 504 of computer system 500 for execution.
  • a communication apparatus may include a transceiver having signals indicative of instructions and data.
  • the instructions and data are configured to cause one or more processors to implement the functions outlined in the disclosure herein.
  • Representative examples of data communications transmission connections can include, but are not limited to, telephone modem connections, wide area networks (WAN), local area networks (LAN), infrared data connections, NFC connections, etc.
  • the methodologies described herein may be implemented by various means depending upon the application. For example, these methodologies may be implemented in hardware, firmware, software, or any combination thereof.
  • the processing unit may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof.
  • the methods of the present teachings may be implemented as firmware and/or a software program and applications written in conventional programming languages such as C, C++, Python, etc. If implemented as firmware and/or software, the embodiments described herein can be implemented on a non-transitory computer-readable medium in which a program is stored for causing a computer to perform the methods described above. It should be understood that the various engines described herein can be provided on a computer system, such as computer system 500 of FIG. 5, whereby processor 504 would execute the analyses and determinations provided by these engines, subject to instructions provided by any one of, or a combination of, memory components 506/508/510 and user input provided via input device 514.
  • the embodiments described herein can be practiced with other computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like.
  • the embodiments can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a network.
  • any of the operations that form part of the embodiments described herein are useful machine operations.
  • the embodiments described herein also relate to a device or an apparatus for performing these operations.
  • the systems and methods described herein can be specially constructed for the required purposes, or they may employ a general purpose computer selectively activated or configured by a computer program stored in the computer.
  • various general purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
  • Certain embodiments can also be embodied as computer readable code on a computer readable medium.
  • the computer readable medium is any data storage device that can store data, which can thereafter be read by a computer system. Examples of the computer readable medium include hard drives, network attached storage (NAS), read-only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, magnetic tapes, and other optical, FLASH memory and non-optical data storage devices.
  • the computer readable medium can also be distributed over a network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
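The multi-station compilation step noted in process 400 above (stitching per-station slab images into one seamless series of axial images) can be sketched as follows. The array shapes, the slab-position bookkeeping, and the one-slice overlap are illustrative assumptions, not the disclosure's actual encoding.

```python
# Hypothetical sketch: each station is a 3D array of axial slices
# acquired at a known table (slab) position; slices already covered by
# the previous station are dropped so the stations compile into one
# seamless whole-body axial series.
import numpy as np

def stitch_stations(stations, slab_starts):
    """Compile per-station axial volumes into one whole-body volume.

    stations:    list of arrays shaped (n_slices, height, width)
    slab_starts: first whole-body slice index covered by each station
    """
    ordered = sorted(zip(slab_starts, stations), key=lambda p: p[0])
    slices = []
    next_index = 0
    for start, station in ordered:
        # Skip slices already covered by the previous (overlapping) station.
        skip = max(0, next_index - start)
        slices.append(station[skip:])
        next_index = start + station.shape[0]
    return np.concatenate(slices, axis=0)

# Two toy "stations" of 4 slices each, overlapping by 1 slice.
a = np.zeros((4, 8, 8))
b = np.ones((4, 8, 8))
whole = stitch_stations([a, b], slab_starts=[0, 3])
print(whole.shape[0])  # 7 axial slices after removing the 1-slice overlap
```

Real scanners blend or weight the overlap region rather than discarding it outright; the hard cut here is only to keep the bookkeeping visible.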


Abstract

A method for automatically generating a body composition profile includes obtaining an MRI image. The method includes manually segmenting the MRI image to obtain a manually segmented MRI image, the manually segmented MRI image including delineated segments that correspond to targeted body components, the targeted body components comprising fat and/or muscle. The method further includes performing a series of convolutions, pooling, and upsampling on the MRI image to obtain a segmented output, the segmented output comprising masks of the targeted body components. The method further includes determining an amount of overlap between the segmented output and the manually segmented MRI image. The method further includes validating the segmented output when the overlap is within a defined threshold. The method further includes calculating volumes and statistical measurements from the segmented output to generate quantitative image biomarkers. The method further includes generating the body composition profile from the generated biomarkers.

Description

FULLY AUTOMATED PERSONALIZED BODY COMPOSITION PROFILE
FIELD
[0001] The embodiments disclosed herein are generally directed towards systems and methods for predicting and managing risk of disorders and diseases for individuals. More specifically, the embodiments are directed towards systems and methods for acquiring whole body MRI images suitable for segmentation, and quantifying aspects of the morphological features of fat/muscle in various areas of the body to develop disease risk models, identify biomarkers for disease progression, etc.
BACKGROUND
[0002] Body composition profiling allows a clinician to gain valuable information about a patient's health. By analyzing the amount of visceral fat, subcutaneous fat, intramuscular fat, and lean muscle from an MRI image of the patient, the clinician is able to determine the patient's risk of many diseases, such as type 2 diabetes (T2D) and coronary heart disease (CHD).
[0003] Recently, deep learning-based techniques have shown superior performance both in accuracy and processing time and have outperformed traditional approaches for segmentation of images. Specifically, Fully Convolutional Networks (FCNs) for semantic segmentation have found success in many medical imaging applications. For applications involving fast and precise segmentation of biomedical images, recent work on U-Net Convolutional Networks (U-NETs) has had a great deal of success.
[0004] Currently, Dixon imaging is utilized primarily to determine the amount of fat present in the liver. Additionally, the only way to obtain segmented fat and/or muscle images is to manually delineate MRI images by hand. This process of manually highlighting the areas of interest in an MRI is very time consuming, expensive, and requires years of expertise. Each patient can have several hundred MRI images per series, each of which must be individually examined by the annotator. Additionally, different annotators may annotate the same set of MRI images differently, which leads to inconsistent diagnoses.
[0005] As such, there is a need for neural network-based approaches for disease detection and prediction. Such an approach can potentially reduce post-processing time from hours to minutes or even seconds, which reduces computational costs and makes quick analysis and same-day delivery of results feasible. Also, it does not require any iterative steps and allows for further processing-time optimization via GPUs and Spark, if needed.
SUMMARY
[0006] The subject disclosure provides an automated method for body composition profiling with MRI Dixon imaging. The fully automated body composition method can be used for radiation-free MRI risk stratification without any manual processing steps, making it more accessible clinically. It would most likely be used for risk prediction and risk stratification for diseases such as type 2 diabetes, cardiovascular disease, and obesity.
[0007] According to one embodiment of the present disclosure, a computer-implemented method for automatically generating a body composition profile is provided. The method includes obtaining an MRI image of a patient. The method further includes manually segmenting the MRI image to obtain a manually segmented MRI image, the manually segmented MRI image including delineated segments that correspond to targeted body components of the patient, the targeted body components comprising fat and/or muscle. The method further includes performing a series of convolutions, pooling, and upsampling on the MRI image to obtain a segmented output, the segmented output comprising masks of the targeted body components. The method further includes determining an amount of overlap between the segmented output and the manually segmented MRI image. The method further includes validating the segmented output when the overlap is within a defined threshold. The method further includes calculating volumes and statistical measurements from the segmented output to generate quantitative image biomarkers. The method further includes generating the body composition profile from the generated biomarkers.
[0008] According to one embodiment of the present disclosure, a system is provided including a processor and a memory comprising instructions stored thereon, which, when executed by the processor, cause the processor to perform a method for automatically generating a body composition profile. The method includes obtaining an MRI image of a patient. The method further includes manually segmenting the MRI image to obtain a manually segmented MRI image, the manually segmented MRI image including delineated segments that correspond to targeted body components of the patient, the targeted body components comprising fat and/or muscle. The method further includes performing a series of convolutions, pooling, and upsampling on the MRI image to obtain a segmented output, the segmented output comprising masks of the targeted body components. The method further includes determining an amount of overlap between the segmented output and the manually segmented MRI image. The method further includes validating the segmented output when the overlap is within a defined threshold. The method further includes calculating volumes and statistical measurements from the segmented output to generate quantitative image biomarkers. The method further includes generating the body composition profile from the generated biomarkers.
[0009] According to one embodiment of the present disclosure, a non-transitory computer-readable storage medium is provided including instructions (e.g., stored sequences of instructions) that, when executed by a processor, cause the processor to perform a method for automatically generating a body composition profile. The method includes obtaining an MRI image of a patient. The method further includes manually segmenting the MRI image to obtain a manually segmented MRI image, the manually segmented MRI image including delineated segments that correspond to targeted body components of the patient, the targeted body components comprising fat and/or muscle. The method further includes performing a series of convolutions, pooling, and upsampling on the MRI image to obtain a segmented output, the segmented output comprising masks of the targeted body components. The method further includes determining an amount of overlap between the segmented output and the manually segmented MRI image. The method further includes validating the segmented output when the overlap is within a defined threshold. The method further includes calculating volumes and statistical measurements from the segmented output to generate quantitative image biomarkers. The method further includes generating the body composition profile from the generated biomarkers.
[0010] According to one embodiment of the present disclosure, a system is provided that includes means for storing instructions, and means for executing the stored instructions that, when executed by the means, cause the means to perform a method. The method includes obtaining an MRI image of a patient. The method further includes manually segmenting the MRI image to obtain a manually segmented MRI image, the manually segmented MRI image including delineated segments that correspond to targeted body components of the patient, the targeted body components comprising fat and/or muscle. The method further includes performing a series of convolutions, pooling, and upsampling on the MRI image to obtain a segmented output, the segmented output comprising masks of the targeted body components. The method further includes determining an amount of overlap between the segmented output and the manually segmented MRI image. The method further includes validating the segmented output when the overlap is within a defined threshold. The method further includes calculating volumes and statistical measurements from the segmented output to generate quantitative image biomarkers. The method further includes generating the body composition profile from the generated biomarkers.
DESCRIPTION OF THE DRAWINGS
[0011] The accompanying drawings, which are included to provide further understanding and are incorporated in and constitute a part of this specification, illustrate disclosed embodiments and together with the description serve to explain the principles of the disclosed embodiments. In the drawings:
[0012] FIG. 1 illustrates a U-Net architecture, according to certain aspects of the present disclosure.
[0013] FIG. 2 illustrates generation of a segmented image from an input of an MRI image, according to certain aspects of the present disclosure.
[0014] FIG. 3 illustrates a stitched image of a whole body, according to certain aspects of the present disclosure.
[0015] FIG. 4 illustrates an example flow diagram for automatically generating a body composition profile, according to certain aspects of the disclosure.
[0016] FIG. 5 is a block diagram illustrating an example computer system with which aspects of the subject technology can be implemented.
[0017] In one or more implementations, not all of the depicted components in each figure may be required, and one or more implementations may include additional components not shown in a figure. Variations in the arrangement and type of the components may be made without departing from the scope of the subject disclosure. Additional components, different components, or fewer components may be utilized within the scope of the subject disclosure.
DETAILED DESCRIPTION
[0018] Diabetes and obesity have reached epidemic proportions as a public health problem, not only in the United States (US) but also globally. Globally, over 115 million people suffer from obesity-related problems, with obesity accounting for an estimated 80-85% of the risk of developing type 2 diabetes (T2D). In the US, diabetes affects nearly 1 in 10 adults, with a majority (90%-95%) of cases being T2D.
[0019] A body composition profile derived from whole body MRI is an important predictor of metabolic syndrome and of diseases including T2D and obesity, as well as coronary heart disease (CHD), ischemic stroke, and cancer, particularly colorectal cancer. Findings from body composition profiles are actionable, as metabolic syndrome, or any of its components, can be managed with lifestyle changes to delay or prevent the development of serious health problems.
[0020] The physics of MRI Dixon imaging can isolate signal from water within muscle tissue and lipids in visceral adipose tissue (VAT) and abdominal subcutaneous adipose tissue (ASAT) more accurately than any other diagnostic modality. Recently, it has been shown to be a strong predictor of metabolic syndrome along with T2D and CHD. Quantification by MRI is advantageous over noisy measures such as BMI that do not account for lean mass or fat composition. Currently, the only way to retrieve body composition profiles from MRI is to manually segment the images by hand. This process is costly and time consuming, making it infeasible for clinical use.
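For background, the water/fat separation behind Dixon imaging can be sketched with the classic two-point Dixon reconstruction: water and fat signals add in the in-phase image and subtract in the opposed-phase image. This is a textbook simplification assuming registered, phase-corrected magnitude images, not a scanner's actual reconstruction.

```python
# Minimal two-point Dixon sketch: water = (IP + OP) / 2, fat = (IP - OP) / 2,
# where IP is the in-phase image (water + fat) and OP the opposed-phase
# image (water - fat). Real reconstructions also correct B0 phase errors.
import numpy as np

def dixon_separate(in_phase, opposed_phase):
    water = (in_phase + opposed_phase) / 2.0
    fat = (in_phase - opposed_phase) / 2.0
    return water, fat

# Toy pixel containing water signal 0.8 and fat signal 0.2.
ip = np.array([[1.0]])  # water + fat
op = np.array([[0.6]])  # water - fat
water, fat = dixon_separate(ip, op)
print(float(water[0, 0]), float(fat[0, 0]))  # 0.8 0.2
```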
[0021] Aspects of the present disclosure describe a method using a pair of convolutional neural networks (CNNs) with ResNet-style architectures for classifying each slice's anatomical location within a 3D volume, followed by a pair of fully convolutional networks (FCNs) with a U-Net style architecture to automatically segment regions related to body composition using fat and water images derived from multi-station whole body Dixon MRI. The segmented data is then used to derive a quantitative body composition profile.
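As a rough illustration of the U-Net style contracting path (convolutions, ReLU, max pooling) and expanding path (upsampling, concatenation with the corresponding contracting feature map), the following is a minimal two-level sketch in PyTorch. The depth, channel counts, and four-class output (e.g., background, visceral fat, subcutaneous fat, muscle) are assumptions, not the disclosure's actual architecture.

```python
# A tiny U-Net-style network: one contracting level, a bottom block, and
# one expanding level with a skip connection.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=4):
        super().__init__()
        self.down1 = conv_block(in_ch, 16)
        self.pool = nn.MaxPool2d(2)            # downsampling
        self.bottom = conv_block(16, 32)
        self.up = nn.Upsample(scale_factor=2)  # upsampling of the feature map
        self.up1 = conv_block(32 + 16, 16)     # after concatenation (skip)
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        d1 = self.down1(x)                       # contracting feature map
        b = self.bottom(self.pool(d1))
        u = self.up(b)
        u = self.up1(torch.cat([u, d1], dim=1))  # skip connection
        return self.head(u)                      # per-pixel class scores

net = TinyUNet()
out = net(torch.zeros(1, 1, 64, 64))  # one single-channel 64x64 slice
print(tuple(out.shape))               # (1, 4, 64, 64)
```

With `padding=1` convolutions the spatial size is preserved, so the skip concatenation needs no cropping; the original U-Net paper uses unpadded convolutions and crops the contracting feature map instead.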
[0022] Systems and methods for enhancing the ability of clinicians to identify features of interest in an individual using MRI images and a deep learning network are disclosed herein. These procedures enhance a clinician’s ability to generate body composition profiles for improved accuracy and consistent diagnosis of diseases. The systems and methods described herein leverage the performance of a U-Net style architecture for medical imaging segmentation.
[0023] According to an aspect of the present disclosure, a method for automatically generating a body composition profile includes obtaining an MRI image of a patient. The method further includes manually segmenting the MRI image to obtain a manually segmented MRI image, the manually segmented MRI image including delineated segments that correspond to targeted body components of the patient, the targeted body components comprising fat and/or muscle. The method further includes performing a series of convolutions, pooling, and upsampling on the MRI image to obtain a segmented output, the segmented output comprising masks of the targeted body components. The method further includes determining an amount of overlap between the segmented output and the manually segmented MRI image. The method further includes validating the segmented output when the overlap is within a defined threshold. The method further includes calculating volumes and statistical measurements from the segmented output to generate quantitative image biomarkers. The method further includes generating the body composition profile from the generated biomarkers.
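The overlap check described above can be sketched with the Dice coefficient, a standard overlap measure for segmentation masks. The disclosure does not name the metric or the threshold value, so Dice and the 0.9 cutoff below are assumptions.

```python
# Dice similarity between an automatic mask and a manual mask:
# 2 * |A intersect B| / (|A| + |B|), in [0, 1].
import numpy as np

def dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = int(pred.sum()) + int(truth.sum())
    return 1.0 if denom == 0 else 2.0 * int((pred & truth).sum()) / denom

def validate(segmented_output, manual_mask, threshold=0.9):
    """Accept the automatic segmentation when overlap meets the threshold."""
    return bool(dice(segmented_output, manual_mask) >= threshold)

manual = np.zeros((4, 4), dtype=int)
manual[1:3, 1:3] = 1           # 4 hand-delineated pixels
auto = manual.copy()
auto[0, 0] = 1                 # 1 spurious pixel in the automatic mask
print(round(dice(auto, manual), 3), validate(auto, manual))  # 0.889 False
```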
[0024] According to an aspect of the present disclosure, a system for automatically generating a body composition profile includes a magnetic resonance imaging device configured to obtain an MRI image of a patient. The system further includes a computing device communicatively connected to the magnetic resonance imaging device. The computing device receives a manually segmented MRI image. The manually segmented MRI image includes delineated segments that correspond to targeted body components of the patient. The targeted body components include fat and/or muscle. The computing device includes a mask generator configured to perform a series of convolutions, pooling, and upsampling on the MRI image to obtain a segmented output, the segmented output comprising masks of the targeted body components. The mask generator is further configured to determine an amount of overlap between the segmented output and the manually segmented MRI image and validate the segmented output when the overlap is within a defined threshold. The computing device further includes a profile generator configured to calculate volumes and statistical measurements from the segmented output to generate quantitative image biomarkers. The profile generator is further configured to generate the body composition profile from the generated biomarkers.
[0025] This specification describes various exemplary embodiments of systems and methods for predicting and managing risk of disorders and diseases for individuals. The disclosure, however, is not limited to these exemplary embodiments and applications or to the manner in which the exemplary embodiments and applications operate or are described herein. Moreover, the figures may show simplified or partial views, and the dimensions of elements in the figures may be exaggerated or otherwise not in proportion. In addition, as the terms "on," "attached to," "connected to," "coupled to," or similar words are used herein, one element (e.g., a material, a layer, a substrate, etc.) can be "on," "attached to," "connected to," or "coupled to" another element regardless of whether the one element is directly on, attached to, connected to, or coupled to the other element or there are one or more intervening elements between the one element and the other element. In addition, where reference is made to a list of elements (e.g., elements a, b, c), such reference is intended to include any one of the listed elements by itself, any combination of less than all of the listed elements, and/or a combination of all of the listed elements. Section divisions in the specification are for ease of review only and do not limit any combination of elements discussed.
[0026] Unless otherwise defined, scientific and technical terms used in connection with the present teachings described herein shall have the meanings that are commonly understood by those of ordinary skill in the art. Further, unless otherwise required by context, singular terms shall include pluralities and plural terms shall include the singular.
[0027] As used herein, the phrases "features of interest," "pathological features," or "metastatic features" can refer to tissue structures that have clinical significance, for example, benign tumors, metastatic cancers, or damaged or diseased tissue.
[0028] As used herein, the phrases "medical imaging techniques," "medical imaging methods," or "medical imaging systems" can denote techniques or processes for obtaining visual representations of the interior of an individual's body for clinical analysis and medical intervention, as well as visual representations of the function of some organs or tissues. Within these visual representations, various imaging features can be identified and characterized to provide a structural basis for diagnosing and treating various types of diseases (e.g., dementia, cancer, cardiovascular disease, cerebrovascular disease, liver disease, etc.). Examples of medical imaging techniques can include, but are not limited to, x-ray radiography, magnetic resonance imaging, ultrasound, positron emission tomography (PET), computed tomography (CT), etc.
[0029] As used herein, Magnetic Resonance Imaging (MRI) denotes a radiology imaging technique that uses an MRI scanner (which produces magnetic fields and radio waves) and a computing device to produce images of body structures. In various embodiments, the MRI scanner can be a "closed MRI," consisting of a giant circular magnet where the patient is placed on a moveable bed that is inserted into the magnetic tube; an "open MRI," consisting of two horizontal magnetic disks connected by a pillar between them, where the patient sits or stands between the disks; or a "portable MRI," consisting of a hand-portable scanner containing magnet(s) optimized to generate ultra-low frequency magnetic fields coupled to highly sensitive superconducting quantum interference devices (SQUIDs). MRI works by employing the MRI scanner magnet(s) to create a strong magnetic field that aligns the protons of nuclei of interest, typically 1H, which are then exposed to radiofrequency waves. The net magnetic moment created by spins of the nuclei of interest within tissues in the body produces a faint signal that is detected by receiver coil(s). The receiver information is processed by a computer, and an image is produced.
[0030] The image and resolution produced by MRI can be quite detailed and can detect tiny changes of structures and function of tissues within the body. For some procedures, contrast agents, such as gadolinium, can be used to increase the accuracy of the images.
[0031] As used herein, an "MRI pulsed sequence" or "MRI sequence" denotes a programmed set of changing radiofrequency pulses and magnetic gradients that are designed to result in images that emphasize one or more desired tissue image features (or appearance). Each sequence will have a number of parameters, and multiple sequences are grouped together into an MRI protocol. Examples of the types of MRI pulsed sequences that are available include, but are not limited to: spin echo sequences (e.g., T1 weighted, T2 weighted, etc.), inversion recovery sequences, gradient echo sequences, diffusion weighted sequences, saturation-recovery sequences, echo planar pulse sequences, spiral pulse sequences, etc.
[0032] As used herein, a "structural MRI" denotes MRI techniques that are focused on providing detailed images of anatomical structures, most commonly neurological or other soft (e.g., tendons, ligaments, fascia, skin, fibrous, fat, synovial membranes, muscles, nerves, blood vessels, etc.) tissue structures. Examples of MR pulsed sequences that can be used for structural MRI include, but are not limited to: spin echo sequences (e.g., T1 weighted, T2 weighted, etc.), inversion recovery sequences, gradient echo sequences, diffusion weighted sequences, etc.
[0033] As used herein, "functional MRI" denotes MRI techniques that are focused on providing images that can emphasize non-structural information within tissues, such as metabolic activity and the diffusive properties of water in tissue. Examples of MR pulsed sequences that can be used for functional MRI include, but are not limited to: diffusion weighted sequences, arterial spin labeling, etc.
Neural Networks for Characterizing and/or Quantifying Body Regions in Dixon images
[0034] A Body Composition Profile derived from whole body MRI is an important predictor for several diseases, including type 2 diabetes (T2D), coronary heart disease (CHD), cancer, and obesity. Currently, Dixon imaging is utilized primarily to determine the amount of fat present in the liver. Additionally, the only way to obtain segmented fat/muscle images has been to delineate the images by hand, a process that is costly and time consuming. Described herein are systems and methods for acquiring Dixon images throughout the whole body, enabling segmentation of fat and muscle in regions other than the liver. Additionally, fully convolutional neural networks with a U-Net style architecture may be utilized to automatically segment regions related to body composition in the acquired Dixon images. The segmented data is then used to derive quantitative biomarkers and a personalized body composition profile. These features and/or the neural network may also be used to build integrated risk models for several diseases.
[0035] Various aspects and embodiments are disclosed herein for applying neural network techniques to quantify aspects of the morphological features in various regions of a patient's body to develop disease risk models, identify biomarkers for disease progression, etc. For example, in one aspect, a convolutional neural network is trained using a training database that contains patient cohort training data that pairs each patient's MRI images (e.g., 3D T1-weighted MRI images, etc.) with their corresponding cortical surface and volumetric measurements. The trained convolutional neural network can then be used to segment regions of a subject's body using their structural MRI images (i.e., 3D T1-weighted MRI images) and quantify certain aspects (e.g., volume, surface area, thickness, etc.) of the morphological features (e.g., visceral fat, subcutaneous fat, intramuscular fat, lean muscle, etc.) of the subject's body.
[0036] Deep learning and convolutional neural networks (CNNs) are a good choice for semantic segmentation due to their ability to take large amounts of input data and reduce them to their most important features for classification. A traditional fully connected neural network attempts to take an image and flatten it, using weights for each individual pixel. For large images, this approach would take too long to train. In contrast, a CNN reduces the resolution of the image by applying filters and pooling the areas of the filter that were activated, reducing the number of learned weights to the number of filters multiplied by the size of the filter window. The issue with reducing the resolution in each convolution is the loss of localization information (e.g., locations of fat in the MRI); therefore, U-Net is utilized. U-Net, named after the shape of the neural network, performs the same convolutions as a CNN, but instead of a fully connected layer after the convolutions, the network uses the learned filters to reproduce the image. To train the network, examples of the appropriate output of the network (in the current embodiment, segmented body fat/muscle areas) are needed.
[0037] FIG. 1 illustrates a U-Net architecture 100, according to certain aspects of the present disclosure. As described herein, the U-net architecture allows for 32x32 pixels in the lowest resolution. Each elongated rectangle (e.g., box) corresponds to a multi-channel feature map. The number of channels is denoted on top of the box. The x-y-size is provided at the lower left edge of the box. Boxes 102, 104, 106, and 108 (e.g., white boxes) represent copied feature maps. The arrows denote the different operations.
[0038] As illustrated in FIG. 1, the U-Net architecture involves several 3x3 convolutions followed by ReLU layers 110, which are also copied and cropped 112, as shown by the horizontal arrows. The copy and cropping steps are what give the U-Net its shape. Several down conversions (e.g., max pool) 114 and up conversions 116 are also performed, as shown by the vertical arrows. A 1x1 convolution 118 outputs a segmentation map.
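As an illustrative sketch (not the claimed implementation), the x-y sizes shown at the lower left of each box in FIG. 1 follow from simple arithmetic: each unpadded 3x3 convolution trims 2 pixels from each dimension, and each 2x2 max pool halves the resolution. Assuming the canonical 572x572 input and four pooling steps of the original U-Net, the contracting path reaches the 32x32 lowest resolution noted above:

```python
def contracting_path_sizes(input_size, levels=4):
    """Track the x-y size of feature maps down the U-Net contracting path.

    Each level applies two unpadded 3x3 convolutions (each shrinks the
    size by 2 pixels per dimension) followed by a 2x2 max pool that
    halves the resolution. Returns the size entering each level.
    """
    sizes = [input_size]
    size = input_size
    for _ in range(levels):
        size -= 4          # two 3x3 convolutions without padding
        size //= 2         # 2x2 max pooling for downsampling
        sizes.append(size)
    return sizes

# With a 572x572 input, four pooling steps reach the 32x32 lowest resolution:
print(contracting_path_sizes(572))  # → [572, 284, 140, 68, 32]
```

The expanding path reverses this arithmetic with up conversions, which is why the cropped feature maps copied across the "U" are needed to restore localization.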
[0039] Currently, the only way to obtain segmented fat/muscle images is to manually delineate the images by hand. The process of manually highlighting the areas of interest in an MRI is very time consuming, expensive and requires years of expertise. Each patient can have several hundred MR images per series, each of which must be individually examined by the annotator. By fully automating the process of segmenting the images, time, cost, and accuracy are all improved. Using automatically segmented regions of interest, body composition is calculated, yielding quantitative data that are used in the prediction, risk assessment, and diagnosis of multiple diseases including type 2 diabetes (T2D), coronary heart disease (CHD), cancer, and obesity.
[0040] Body composition profiling is important because it allows a clinician to gain valuable information about a patient's health. The amounts of visceral fat, subcutaneous fat, intramuscular fat, lean muscle, etc. are directly correlated with many diseases such as T2D and CHD. For example, higher liver fat is associated with T2D, whereas lower liver fat is associated with CHD.

[0041] According to an aspect of the present disclosure, an automated method segments the regions in the patient's body including, but not limited to, visceral fat, subcutaneous fat, intramuscular fat, lean muscle, etc. For example, a U-Net convolutional neural network is trained to segment different areas of body fat/muscle in an MRI. As described herein, this technique provides improved medical imaging segmentation.
[0042] Dixon imaging is an MRI sequence based on chemical shifts between water and lipids to separate water from fats in MRI images. Dixon imaging is currently only used for determining the amount of fat in the liver. It is not currently used for the whole body, and has not been used for segmentation before. Conventionally, it is not common to use whole body imaging at all, due to issues of stitching multiple images together. Dixon imaging visualizes fat and muscle better than other MRI methods/settings, and so is advantageous for segmenting fat in a patient's whole body. Based on the whole body Dixon image, the volume, in liters, of fat/muscle in the patient's whole body may be calculated.
[0043] According to an aspect of the present disclosure, an input into a U-Net convolutional neural network is a full-body Dixon MRI image. The full-body Dixon image is stitched together from multiple images and two of four total slices are fed into the U-Net one slice at a time. The output is a marked-up version of the Dixon image delineating areas of fat/muscle. Volume in liters of fat/muscle may be calculated by summing up the pixels/voxels for each slice.
[0044] According to aspects, the U-Net may be trained by a pair of classification models (e.g., segmentation models) with a binary classifier to identify slices within the abdomen for the fat models and regions pertaining to the thighs for the water models. For example, the classifiers' training sets included approximately 20,000 training images with annotation of slice location from both sets, water and fat. The pair of segmentation models were developed with randomly selected images, holding out 4,000 images for validation.
[0045] The pair of segmentation models included approximately 20,000 training images with manually segmented visceral adipose tissue (VAT) and abdominal subcutaneous adipose tissue (ASAT) for fat images, and posterior and anterior thigh lean mass for water images. The pair of segmentation models were developed with randomly selected images, holding out 1,000 images for quantitative validation. Whole body volumes were qualitatively validated with Health Nucleus data.
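The random hold-out procedure described above can be sketched as follows; the function name and fixed seed are illustrative assumptions, not taken from the disclosure:

```python
import random

def holdout_split(images, n_holdout, seed=42):
    """Randomly partition annotated images into a training set and a
    hold-out validation set, as done when developing the segmentation
    models. The seed is fixed only to make the split reproducible."""
    rng = random.Random(seed)
    shuffled = list(images)
    rng.shuffle(shuffled)
    return shuffled[n_holdout:], shuffled[:n_holdout]

# e.g., ~20,000 annotated images with 1,000 held out for validation
train, val = holdout_split(range(20000), 1000)
print(len(train), len(val))  # → 19000 1000
```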
[0046] According to aspects, classification models were evaluated using the area under the receiver operating characteristic curve (AUC-ROC) on the hold-out validation set. The performance of the segmentation models was assessed qualitatively and via the Dice Coefficient on the hold-out validation set. The output of the four models was also evaluated qualitatively over the entire 3D volumes in hold-out data from Health Nucleus.

[0047] FIG. 2 illustrates generation of a segmented image 202 from an input of an MRI image 200, according to certain aspects of the present disclosure. For example, the input image may be a colored MRI image. For each colored MRI image, masks of the targeted body components (fat, muscle) were generated using color thresholding techniques to isolate each segment from the manually segmented images. A black and white representation of the original image was also created to be fed into the neural network as training data.
[0048] Data from the automated segmentation are used to produce quantitative image biomarkers by finding the volumes and other statistical measurements of the 3D segmented masks.
[0049] The derived image biomarkers are used to create personalized body composition profiles 208 from the segmented images 204, 206 for each patient, which can be compared with age and gender matched population norms and disease profiles. The derived image biomarkers and/or the FCN can also be used to generate integrated risk models.
[0050] According to additional aspects of the present disclosure, the automated segmentation method may also leverage other images or image augmentation strategies to improve the performance of the U-Net segmentation model.
[0051] Volumes calculated for the body composition profile or the FCN will be used in machine learning or statistical models for risk prediction of diseases and the development of personalized risk mitigation recommendations.
[0052] According to an aspect of the present disclosure, a greyscale MRI image is input into a convolutional neural network (e.g., the U-NET) to train the network. After training is completed, the U-NET outputs images that include segmented areas of interest (e.g., fat/muscle composition). The patient’s body composition may be calculated based on the output from the U-NET, which includes segments overlaid onto the original image. A personalized body composition profile is generated using the information gathered in the previous steps.
[0053] 3D volumes for the patient’s body profiles are calculated from the resultant segmented masks by summing the total number of voxels and multiplying them by the volume of a single voxel as indicated in the Digital Imaging and Communications in Medicine (DICOM) metadata. In some cases, images may be missing from a series of MRIs. To work around this issue, the z axis of each succeeding image is inspected to ensure it is the next image in the series. If it is not the next image, then it may be determined how many images are missing. This allows for accurate volume calculations regardless of missing images.
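The volume calculation and missing-image workaround described above can be sketched as follows. The interpolation of voxel counts for missing slices is an assumption for illustration; the disclosure states only that the number of missing images is determined so that volumes remain accurate:

```python
def mask_volume_liters(slices, voxel_volume_mm3, z_spacing_mm):
    """Compute a segmented mask's volume from per-slice voxel counts.

    `slices` is a list of (z_position_mm, voxel_count) pairs for a DICOM
    series; `voxel_volume_mm3` comes from PixelSpacing and SliceThickness
    in the DICOM metadata. Gaps in the z axis reveal missing images, whose
    counts are filled by linear interpolation between neighbors (an
    illustrative assumption).
    """
    slices = sorted(slices)
    total_voxels = 0.0
    for i, (z, count) in enumerate(slices):
        total_voxels += count
        if i + 1 < len(slices):
            gap = slices[i + 1][0] - z
            n_missing = round(gap / z_spacing_mm) - 1
            for k in range(1, n_missing + 1):
                frac = k / (n_missing + 1)
                total_voxels += count + frac * (slices[i + 1][1] - count)
    return total_voxels * voxel_volume_mm3 / 1e6  # mm^3 -> liters

# two slices 3 mm apart in spacing, 6 mm apart in position: one slice missing
print(mask_volume_liters([(0.0, 1000), (6.0, 2000)], 6.0, 3.0))  # → 0.027
```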
[0054] The performance of the U-Net may be assessed using the Dice Coefficient (DSC) to determine how well the model is working, as well as by manually overlaying the predicted mask over the target mask. For example, the DSC measures how much overlap exists between the target and the resultant mask according to the following formula:
DSC = 2|X ∩ Y| / (|X| + |Y|), where X is the set of pixels in the target mask and Y is the set of pixels in the predicted (resultant) mask.
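The DSC can be computed directly from binary masks; a minimal illustrative sketch:

```python
def dice_coefficient(target, predicted):
    """Dice similarity coefficient between two binary masks.

    Masks are flat sequences of 0/1 pixel labels. The DSC is twice the
    intersection over the sum of mask sizes, ranging from 0 (no overlap)
    to 1 (perfect overlap).
    """
    intersection = sum(t and p for t, p in zip(target, predicted))
    total = sum(target) + sum(predicted)
    if total == 0:
        return 1.0  # two empty masks are treated as a perfect match
    return 2.0 * intersection / total

print(dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1/(2+1) ≈ 0.667
```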
[0055] To validate the model, 10% of the samples were withheld from the training set. These were used as a validation set and their predictions were tested via manual inspection as well as the DSC.
[0056] To generate the colored image output, the intersection of the mask and original image is taken. The pixels that overlap in the original image have their color channel values changed to their respective colors based on the type of mask.
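The colored-output step can be sketched as follows, with images represented as nested lists of RGB tuples; the representation and function name are illustrative assumptions:

```python
def overlay_mask(image_rgb, mask, color):
    """Recolor pixels of an image wherever a binary mask intersects it,
    producing the colored segmented output.

    `image_rgb` is a list of rows of (r, g, b) tuples; `mask` is a list
    of rows of 0/1 values with the same shape; `color` is the (r, g, b)
    value assigned to this mask type (e.g., one color for fat, another
    for muscle). The original image is left unmodified.
    """
    return [
        [color if m else px for px, m in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image_rgb, mask)
    ]

img = [[(50, 50, 50), (80, 80, 80)]]
fat_mask = [[0, 1]]
print(overlay_mask(img, fat_mask, (255, 255, 0)))  # second pixel recolored
```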
[0057] To see whether an individual has risk for a specific disease, the mean and standard deviation of the respective age and gender matched fat/muscle volumes of the population of interest (e.g., T2D, healthy, CAD) are taken and compared against the individual's composition.
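One common way to perform such a comparison is a z-score against the matched cohort; this sketch is an illustrative assumption, as the disclosure does not specify the comparison statistic:

```python
from statistics import mean, stdev

def composition_z_score(individual_volume, cohort_volumes):
    """Compare an individual's fat/muscle volume against an age- and
    gender-matched cohort, expressed as standard deviations from the
    cohort mean. A large |z| flags a composition outside population norms.
    """
    mu = mean(cohort_volumes)
    sigma = stdev(cohort_volumes)  # sample standard deviation
    return (individual_volume - mu) / sigma

cohort = [4.0, 4.5, 5.0, 5.5, 6.0]  # hypothetical matched VAT volumes (L)
print(composition_z_score(7.5, cohort))  # ≈ 3.16 SDs above the matched norm
```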
[0058] The fully automated body composition results can be directly used in products that showcase body composition risk factors with MRI without a manual processing step. According to additional aspects of the present disclosure, the described systems and methods may also be used to link diseases to abnormal body compositions. For example, abnormal body compositions may be flagged and monitored to determine whether there are links to certain diseases. In this way, clinicians can improve patient health with better and earlier diagnosis of diseases. Additionally, the whole-body imaging described herein could be implemented in products that detect fat composition of various regions of the body.
[0059] According to aspects, a pair of convolutional neural networks (CNNs) with ResNet style architectures may be utilized for classification of the slices of anatomical locations pertaining to the abdomen and thighs for fat and water images, respectively. Slices containing anatomy of interest within a 3D volume, as classified by the CNNs, may then be passed to a pair of fully convolutional networks (FCNs) with a U-Net style architecture to automatically segment regions related to body composition using fat and water images derived from multi-station whole body Dixon MRI. The segmented data is reconstructed into 3D volumes and then used to derive a quantitative body composition profile.
[0060] As described herein, the fully automated method for body composition profiling with MRI Dixon imaging can be used for radiation-free MRI risk stratification without any manual processing steps, making it more accessible clinically. This would be most likely used for diseases such as type 2 diabetes, cardiovascular disease, and obesity, but could also be used to make novel discoveries possibly linking diseases to abnormal body compositions. Evaluation of the predictive power of the automated body composition profile may also be utilized on large, longitudinal population cohorts and in integrated risk models that leverage genomics and lifestyle factors.
[0061] FIG. 3 illustrates a stitched image of a whole body 300, according to certain aspects of the present disclosure. Dixon images from separate coils are stitched together to form a whole body Dixon image. For example, there may be separate Dixon images for the head from a head coil 302, the chest from a chest coil 304, the abdomen from an abdomen coil 306, and legs from leg coils 308.
[0062] According to an aspect, images using a VIBE sequence are acquired in eight axial stations throughout the body, although more or fewer stations may be acquired in some embodiments. A combination of coils is used that may include, but is not limited to, a head coil, various body array coils, and a scanner-integrated body coil. Station positions are encoded in such a way as to enable stitching together the images into a seamless sequence through the entire body. Images are normalized so as to further facilitate stitching and to minimize the differences in image contrast at station boundaries.
[0063] Processing of images is accomplished as follows. Using the encoded slab positions, images from all stations are compiled together to create one seamless series of axial images through the whole body.
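The compilation step can be sketched as follows, with slices keyed by their encoded z positions; the de-duplication rule at overlapping station boundaries (keep the first station's slice) is an illustrative assumption:

```python
def compile_whole_body_series(stations):
    """Compile axial slices from all stations into one seamless series.

    `stations` maps a station name (e.g., head, chest, abdomen, legs) to
    a list of (z_position_mm, slice_data) pairs from that coil's Dixon
    acquisition. Slices are merged and ordered by their encoded z
    position; duplicate z positions at station overlaps are dropped.
    """
    seen = set()
    merged = []
    for station_slices in stations.values():
        for z, slice_data in station_slices:
            if z not in seen:
                seen.add(z)
                merged.append((z, slice_data))
    return sorted(merged)

stations = {
    "head":  [(0.0, "h0"), (10.0, "h1")],
    "chest": [(10.0, "c0"), (20.0, "c1")],  # overlaps the head station at z=10
}
print(compile_whole_body_series(stations))
```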
[0064] The techniques described herein may be implemented as method(s) that are performed by physical computing device(s); as one or more non-transitory computer-readable storage media storing instructions which, when executed by computing device(s), cause performance of the method(s); or, as physical computing device(s) that are specially configured with a combination of hardware and software that causes performance of the method(s).
[0065] FIG. 4 illustrates an example flow diagram (e.g., process 400) for automatically generating a body composition profile, according to certain aspects of the disclosure. For explanatory purposes, the steps of the example process 400 are described herein as occurring in serial, or linearly. However, multiple instances of the example process 400 may occur in parallel.
[0066] At step 402, an MRI image of a patient is obtained. At step 404, the MRI image is manually segmented to obtain a manually segmented MRI image. According to aspects, the manually segmented MRI image may include delineated segments that correspond to targeted body components of the patient. According to aspects, the targeted body components may include fat and/or muscle.

[0067] At step 406, a series of convolutions, pooling, and upsampling are performed on the MRI image to obtain a segmented output. According to aspects, the segmented output may include masks of the targeted body components.
[0068] At step 408, an amount of overlap between the segmented output and the manually segmented MRI image is determined. At step 410, the segmented output is validated when the overlap is within a defined threshold.
[0069] At step 412, volumes and statistical measurements are calculated from the segmented output to generate quantitative image biomarkers. At step 414, the body composition profile is generated from the generated biomarkers.
[0070] According to aspects, obtaining the MRI image may further include acquiring a first Dixon image of a first portion of the patient. The process 400 may further include acquiring a second Dixon image of a second portion of the patient. The second portion may be adjacent to the first portion. The process 400 may further include stitching the first Dixon image together with the second Dixon image to generate a seamlessly stitched image of the first portion and the second portion.
[0071] According to aspects, the process 400 may further include acquiring a third Dixon image of a third portion of the patient. The third portion may be adjacent to the second portion. The process 400 may further include acquiring a fourth Dixon image of a fourth portion of the patient, the fourth portion adjacent to the third portion.
[0072] According to aspects, the process 400 may further include overlaying the masks of the segmented output over the MRI image. The process 400 may further include determining an intersection of the masks and the MRI image. The process 400 may further include changing pixel colors in the MRI image based on the intersection. The pixel colors may be changed based on a determined type for each mask.
[0073] According to aspects, calculating the volumes may include summing a total number of voxels and multiplying the total by a volume of a single voxel.
[0074] According to aspects, the process 400 may further include comparing the body composition profile with age and gender matched population norms and disease profiles to generate an integrated risk model.
[0075] According to aspects, the segmented image may include segmented regions corresponding to locations in the body of visceral fat, subcutaneous fat, intramuscular fat, and/or lean muscle.
[0076] According to aspects, performing the series of convolutions, pooling, and upsampling may include contracting features of the MRI image to capture context through repeated convolutions. Each convolution may be followed by a rectified linear unit (ReLU) and a max pooling operation for downsampling. The process 400 may further include expanding features of the MRI image to localize the context through upsampling of a feature map followed by convolutions. The expanding may include concatenation with a correspondingly cropped feature map from the contracting.
Computer-Implemented System
[0077] FIG. 5 is a block diagram that illustrates a computer system 500, upon which embodiments of the present teachings may be implemented. In various embodiments of the present teachings, computer system 500 can include a bus 502 or other communication mechanism for communicating information, and a processor 504 coupled with bus 502 for processing information. In various embodiments, computer system 500 can also include a memory, which can be a random access memory (RAM) 506 or other dynamic storage device, coupled to bus 502 for storing instructions to be executed by processor 504. Memory 506 also can be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 504. In various embodiments, computer system 500 can further include a read only memory (ROM) 508 or other static storage device coupled to bus 502 for storing static information and instructions for processor 504. A storage device 510, such as a magnetic disk or optical disk, can be provided and coupled to bus 502 for storing information and instructions.
[0078] In various embodiments, computer system 500 can be coupled via bus 502 to a display 512, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user. An input device 514, including alphanumeric and other keys, can be coupled to bus 502 for communicating information and command selections to processor 504. Another type of user input device is a cursor control 516, such as a mouse, a trackball or cursor direction keys for communicating direction information and command selections to processor 504 and for controlling cursor movement on display 512. This input device 514 typically has two degrees of freedom in two axes, a first axis (i.e., x) and a second axis (i.e., y), that allows the device to specify positions in a plane. However, it should be understood that input devices 514 allowing for 3 dimensional (x, y and z) cursor movement are also contemplated herein.
[0079] Consistent with certain implementations of the present teachings, results can be provided by computer system 500 in response to processor 504 executing one or more sequences of one or more instructions contained in memory 506. Such instructions can be read into memory 506 from another computer-readable medium or computer-readable storage medium, such as storage device 510. Execution of the sequences of instructions contained in memory 506 can cause processor 504 to perform the processes described herein. Alternatively, hard-wired circuitry can be used in place of or in combination with software instructions to implement the present teachings. Thus, implementations of the present teachings are not limited to any specific combination of hardware circuitry and software.
[0080] The term "computer-readable medium" (e.g., data store, data storage, etc.) or "computer-readable storage medium" as used herein refers to any media that participates in providing instructions to processor 504 for execution. Such a medium can take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Examples of non-volatile media can include, but are not limited to, optical, solid state, and magnetic disks, such as storage device 510. Examples of volatile media can include, but are not limited to, dynamic memory, such as memory 506. Examples of transmission media can include, but are not limited to, coaxial cables, copper wire, and fiber optics, including the wires that comprise bus 502.
[0081] Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, PROM, and EPROM, a FLASH-EPROM, any other memory chip or cartridge, or any other tangible medium from which a computer can read.
[0082] In addition to computer readable medium, instructions or data can be provided as signals on transmission media included in a communications apparatus or system to provide sequences of one or more instructions to processor 504 of computer system 500 for execution. For example, a communication apparatus may include a transceiver having signals indicative of instructions and data. The instructions and data are configured to cause one or more processors to implement the functions outlined in the disclosure herein. Representative examples of data communications transmission connections can include, but are not limited to, telephone modem connections, wide area networks (WAN), local area networks (LAN), infrared data connections, NFC connections, etc.
[0083] It should be appreciated that the methodologies described herein, including in flow charts, diagrams, and the accompanying disclosure, can be implemented using computer system 500 as a standalone device or on a distributed network of shared computer processing resources, such as a cloud computing network.
[0084] The methodologies described herein may be implemented by various means depending upon the application. For example, these methodologies may be implemented in hardware, firmware, software, or any combination thereof. For a hardware implementation, the processing unit may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof.
[0085] In various embodiments, the methods of the present teachings may be implemented as firmware and/or a software program and applications written in conventional programming languages such as C, C++, Python, etc. If implemented as firmware and/or software, the embodiments described herein can be implemented on a non-transitory computer-readable medium in which a program is stored for causing a computer to perform the methods described above. It should be understood that the various engines described herein can be provided on a computer system, such as computer system 500 of FIG. 5, whereby processor 504 would execute the analyses and determinations provided by these engines, subject to instructions provided by any one of, or a combination of, memory components 506/508/510 and user input provided via input device 514.
[0086] While the present teachings are described in conjunction with various embodiments, it is not intended that the present teachings be limited to such embodiments. On the contrary, the present teachings encompass various alternatives, modifications, and equivalents, as will be appreciated by those of skill in the art.
[0087] Further, in describing various embodiments, the specification may have presented a method and/or process as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth herein, the method or process should not be limited to the particular sequence of steps described. As one of ordinary skill in the art would appreciate, other sequences of steps may be possible. Therefore, the particular order of the steps set forth in the specification should not be construed as limitations on the claims. In addition, the claims directed to the method and/or process should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the various embodiments.
[0088] The embodiments described herein can be practiced with other computer system configurations, including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. The embodiments can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a network.
[0089] It should also be understood that the embodiments described herein can employ various computer-implemented operations involving data stored in computer systems. These operations are those requiring physical manipulation of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. Further, the manipulations performed are often referred to in terms, such as producing, identifying, determining, or comparing.
[0090] Any of the operations that form part of the embodiments described herein are useful machine operations. The embodiments described herein also relate to a device or an apparatus for performing these operations. The systems and methods described herein can be specially constructed for the required purposes, or they may employ a general purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
[0091] Certain embodiments can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data, which can thereafter be read by a computer system. Examples of the computer readable medium include hard drives, network attached storage (NAS), read-only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, magnetic tapes, and other optical, FLASH memory and non-optical data storage devices. The computer readable medium can also be distributed over a network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.

Claims

1. A method for automatically generating a body composition profile, comprising:
obtaining an MRI image of a patient;
manually segmenting the MRI image to obtain a manually segmented MRI image, the manually segmented MRI image including delineated segments that correspond to targeted body components of the patient, the targeted body components comprising fat and/or muscle;

performing a series of convolutions, pooling, and upsampling on the MRI image to obtain a segmented output, the segmented output comprising masks of the targeted body components;
determining an amount of overlap between the segmented output and the manually segmented MRI image;
validating the segmented output when the overlap is within a defined threshold;
calculating volumes and statistical measurements from the segmented output to generate quantitative image biomarkers; and
generating the body composition profile from the generated biomarkers.
2. The method of claim 1, wherein obtaining the MRI image comprises:
acquiring a first Dixon image of a first portion of the patient;
acquiring a second Dixon image of a second portion of the patient, the second portion adjacent to the first portion; and
stitching the first Dixon image together with the second Dixon image to generate a seamlessly stitched image of the first portion and the second portion.
3. The method of claim 2, further comprising:
acquiring a third Dixon image of a third portion of the patient, the third portion adjacent to the second portion; and
acquiring a fourth Dixon image of a fourth portion of the patient, the fourth portion adjacent to the third portion.
4. The method of claim 1, further comprising:
overlaying the masks of the segmented output over the MRI image;
determining an intersection of the masks and the MRI image; and

changing pixel colors in the MRI image based on the intersection, the pixel colors changed based on a determined type for each mask.
5. The method of claim 1, wherein calculating the volumes comprises summing a total number of voxels and multiplying the total by a volume of a single voxel.
6. The method of claim 1, further comprising comparing the body composition profile with age and gender matched population norms and disease profiles to generate an integrated risk model.
7. The method of claim 1, wherein the segmented image comprises segmented regions corresponding to locations in the body of visceral fat, subcutaneous fat, intramuscular fat, and/or lean muscle.
8. The method of claim 1, wherein performing the series of convolutions, pooling, and upsampling comprises:
contracting features of the MRI image to capture context through repeated convolutions, each convolution followed by a rectified linear unit (ReLU) and a max pooling operation for downsampling; and
expanding features of the MRI image to localize the context through upsampling of a feature map followed by convolutions, the expanding including concatenation with a correspondingly cropped feature map from the contracting.
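The contracting and expanding paths recited in claim 8 follow the U-Net pattern. The sketch below only traces feature-map sizes through that architecture, assuming the original unpadded 3x3 convolutions and 2x2 max pooling; it is a bookkeeping illustration, not the claimed network.

```python
def unet_output_size(input_size: int, depth: int = 4) -> int:
    """Trace feature-map size through a U-Net-style contracting path
    (two unpadded 3x3 convs + ReLU, then 2x2 max pooling per level) and
    the matching expanding path (2x upsampling, concatenation with the
    correspondingly cropped skip feature map, then two convs)."""
    shrink = 4                      # two unpadded 3x3 convs lose 4 pixels
    skips, s = [], input_size
    for _ in range(depth):          # contracting: capture context
        s -= shrink
        skips.append(s)             # feature map saved for the skip connection
        s //= 2                     # max pooling halves the map
    s -= shrink                     # bottleneck convolutions
    for _ in skips:                 # expanding: localize context
        s = 2 * s - shrink          # upsample doubles; convs shrink again
    return s
```

With the classic 572-pixel input of the original U-Net, this trace reproduces the well-known 388-pixel output map.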
9. A system for automatically generating a body composition profile, comprising:
a magnetic resonance imaging device configured to obtain an MRI image of a patient; and
a computing device communicatively connected to the magnetic resonance imaging device, the computing device receiving a manually segmented MRI image, the manually segmented MRI image including delineated segments that correspond to targeted body components of the patient, the targeted body components comprising fat and/or muscle, the computing device comprising:
a mask generator configured to perform a series of convolutions, pooling, and upsampling on the MRI image to obtain a segmented output, the segmented output comprising masks of the targeted body components, the mask generator further configured to determine an amount of overlap between the segmented output and the manually segmented MRI image and validate the segmented output when the overlap is within a defined threshold, and
a profile generator configured to calculate volumes and statistical measurements from the segmented output to generate quantitative image biomarkers, the profile generator further configured to generate the body composition profile from the generated biomarkers.
10. The system of claim 9, wherein the magnetic resonance imaging device is further configured to:
acquire a first Dixon image of a first portion of the patient; and
acquire a second Dixon image of a second portion of the patient, the second portion adjacent to the first portion.
11. The system of claim 10, wherein the computing device further comprises:
an image stitching engine configured to stitch the first Dixon image together with the second Dixon image to generate a seamlessly stitched image of the first portion and the second portion.
12. The system of claim 10, wherein the magnetic resonance imaging device is further configured to:
acquire a third Dixon image of a third portion of the patient, the third portion adjacent to the second portion; and
acquire a fourth Dixon image of a fourth portion of the patient, the fourth portion adjacent to the third portion.
13. The system of claim 12, wherein the image stitching engine is further configured to stitch the second Dixon image together with the third Dixon image and stitch the third Dixon image together with the fourth Dixon image.
14. The system of claim 11, wherein the seamlessly stitched image comprises a full body Dixon image.
15. The system of claim 9, wherein the profile generator is further configured to compare the body composition profile with age and gender matched population norms and disease profiles to generate an integrated risk model.
16. The system of claim 9, wherein the profile generator is further configured to:
overlay the masks of the segmented output over the MRI image;
determine an intersection of the masks and the MRI image; and
change pixel colors in the MRI image based on the intersection, the pixel colors changed based on a determined type for each mask.
PCT/US2019/050898 2018-09-13 2019-09-12 Fully automated personalized body composition profile WO2020056196A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201862730995P 2018-09-13 2018-09-13
US62/730,995 2018-09-13
US201862757102P 2018-11-07 2018-11-07
US62/757,102 2018-11-07

Publications (1)

Publication Number Publication Date
WO2020056196A1 (en) 2020-03-19

Family

ID=68069886

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/050898 WO2020056196A1 (en) 2018-09-13 2019-09-12 Fully automated personalized body composition profile

Country Status (1)

Country Link
WO (1) WO2020056196A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101362A (en) * 2020-08-25 2020-12-18 中国科学院空间应用工程与技术中心 Semantic segmentation method and system for space science experimental data
CN112435266A (en) * 2020-11-10 2021-03-02 中国科学院深圳先进技术研究院 Image segmentation method, terminal equipment and computer readable storage medium
US20220301723A1 (en) * 2021-03-16 2022-09-22 Advanced Human Imaging Limited Assessing disease risks from user captured images

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MOGHBELI, F. et al., "A method for body fat composition analysis in abdominal magnetic resonance images via self-organizing map neural network", Iranian Journal of Medical Physics, vol. 15, no. 2, 1 April 2018, pages 108-116, DOI: 10.22038/IJMP.2017.26347.1265, XP002795550 (Database COMPENDEX, Engineering Information, Inc., New York, NY, US, accession no. E20181304956120) *
YAO, Jianhua et al., "Holistic Segmentation of Intermuscular Adipose Tissues on Thigh MRI", Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 4 September 2017, pages 737-745, XP047429228 *
ZHAO, Liang et al., "Identification of Water and Fat Images in Dixon MRI Using Aggregated Patch-Based Convolutional Neural Networks", Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 22 September 2016, pages 125-132, XP047358638 *


Similar Documents

Publication Publication Date Title
CN111709953B (en) Output method and device in lung lobe segment segmentation of CT (computed tomography) image
Salem et al. Multiple sclerosis lesion synthesis in MRI using an encoder-decoder U-NET
EP3703007B1 (en) Tumor tissue characterization using multi-parametric magnetic resonance imaging
US11443433B2 (en) Quantification and staging of body-wide tissue composition and of abnormal states on medical images via automatic anatomy recognition
Zhang et al. Effective staging of fibrosis by the selected texture features of liver: Which one is better, CT or MR imaging?
CN113711271A (en) Deep convolutional neural network for tumor segmentation by positron emission tomography
Li et al. DenseX-net: an end-to-end model for lymphoma segmentation in whole-body PET/CT images
Kline et al. Semiautomated segmentation of polycystic kidneys in T2-weighted MR images
CN110322444A (en) Medical image processing method, device, storage medium and computer equipment
US8824766B2 (en) Systems and methods for automated magnetic resonance imaging
WO2020056196A1 (en) Fully automated personalized body composition profile
Wang et al. JointVesselNet: Joint volume-projection convolutional embedding networks for 3D cerebrovascular segmentation
EP4092621A1 (en) Technique for assigning a perfusion metric to dce mr images
KR20190137283A (en) Method for producing medical image and device for producing medical image
WO2020033566A1 (en) Neural networks for volumetric segmentation and parcellated surface representations
CN117649400B (en) Image histology analysis method and system under abnormality detection framework
Aja-Fernández et al. Validation of deep learning techniques for quality augmentation in diffusion MRI for clinical studies
Basty et al. Artifact-free fat-water separation in Dixon MRI using deep learning
Amirrajab et al. A framework for simulating cardiac MR images with varying anatomy and contrast
Kulasekara et al. Comparison of two-dimensional and three-dimensional U-Net architectures for segmentation of adipose tissue in cardiac magnetic resonance images
Shen et al. Automated segmentation of biventricular contours in tissue phase mapping using deep learning
Waldkirch Methods for three-dimensional Registration of Multimodal Abdominal Image Data
Kollmann et al. Cardiac function in a large animal model of myocardial infarction at 7 T: deep learning based automatic segmentation increases reproducibility
Lewis et al. Quantifying the importance of spatial anatomical context in cadaveric, non-contrast enhanced organ segmentation
Bologna MRI-based radiomic analysis of rare tumors: optimization of a workflow for retrospective and multicentric studies

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19778734; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 19778734; Country of ref document: EP; Kind code of ref document: A1