GB2621332A - A method and an artificial intelligence system for assessing an MRI image - Google Patents

A method and an artificial intelligence system for assessing an MRI image

Info

Publication number
GB2621332A
Authority
GB
United Kingdom
Prior art keywords
mri image
neural network
segmentation mask
fat
assessing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2211511.7A
Other versions
GB202211511D0 (en)
Inventor
Hassn Alenaini Wareed
McInally Jamie
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Twinn Health Ltd
Original Assignee
Twinn Health Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Twinn Health Ltd filed Critical Twinn Health Ltd
Priority to GB2211511.7A priority Critical patent/GB2621332A/en
Publication of GB202211511D0 publication Critical patent/GB202211511D0/en
Priority to PCT/IB2023/057971 priority patent/WO2024033789A1/en
Publication of GB2621332A publication Critical patent/GB2621332A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation

Abstract

The invention relates to a method and system for training artificial intelligence to assess an MRI image. At least one server receives a training data set including a plurality of labelled data entries, each comprising a reference MRI image with associated metadata and a first segmentation mask. Until a predetermined condition is met, the server performs the following sub-steps: augmenting the labelled data entries by modifying parameters of the reference MRI image to obtain augmented data; obtaining a second segmentation mask by performing inference of the augmented data through a neural network; comparing the second segmentation mask with the first segmentation mask using a loss function to obtain a comparison result; optimising the weights and biases of the neural network based on the comparison result; updating the weights and biases using backpropagation; and calculating the accuracy based on the comparison result. The invention also relates to a method of assessing an MRI image using the trained AI.

Description

A method and an artificial intelligence system for assessing an MRI image
Technical field
[1] Aspects of the invention generally relate to methods and systems for analysing images generated by Magnetic Resonance Imaging (MRI) devices.
Background
[2] Analysing images from Magnetic Resonance Imaging (MRI) devices is labour-intensive, because it is typically performed manually by highly qualified personnel.
[3] For example, it takes several hours for a highly trained radiologist to analyse MRI images of the abdomen to assess the amount of visceral fat and subcutaneous fat. In this regard, it is estimated that approximately 10% of the population have an increased risk of mortality due to obesity and inflammatory-related chronic conditions caused by TOFI (Thin Outside Fat Inside): normal-weight subjects with high visceral fat, putting them at high risk of developing metabolic diseases. This condition is currently expensive and inaccurate to diagnose without MRI image analysis.
[4] Anthropometry measurements (e.g. body mass index, BMI, and waist circumference, WC), while cheap and easily obtainable, do not provide any information about TOFI. Current tools, such as computed tomography (CT) and dual-energy X-ray absorptiometry (DXA), are invasive and involve radiation exposure.
[5] Therefore, MRI remains the only viable method of assessing some of these medical conditions; however, it is a time-consuming process when performed manually, and is often further delayed by the lack of sufficiently qualified personnel.
[6] Recent advances in Artificial Intelligence (AI) systems and Machine Learning (ML) techniques have enabled automated analysis of MRI images. The main industry focus in this domain is the detection and assessment of brain disorders and cancers.
[7] US patent US11263749B1 describes a method and system for training artificial intelligence to perform automated diagnosis of brain disorders. The automated diagnosis is achieved by obtaining an image, for example from MRI, obtaining text input comprising information about a patient, automatically segmenting the image using a neural network, extracting volumes of one or more structures of the region of interest, and determining a feature associated with one or more structures.
[8] The features listed in that patent are specific to various brain disorders and do not include assessing the risk of a patient developing or having other medical conditions, for example those related to adiposity.
[9] It is the aim of the present invention to provide a system and method that is capable of automatically and quickly assessing an MRI image in order to determine the risk of developing or having a medical condition in a patient.
Summary
[0010] According to a first aspect of the present invention there is provided a method of training artificial intelligence in order to assess an MRI image, comprising the following steps: receiving, by at least one server, a training data set including a plurality of labelled data entries, each labelled data entry of the plurality of labelled data entries including a reference MRI image having an associated metadata and a first segmentation mask; performing, by the at least one server, the following sub-steps, until a predetermined condition is met, preferably a number of epochs is reached or a configured threshold of accuracy is reached: augmenting the labelled data entries by modifying the reference MRI image's parameters to obtain augmented data; obtaining a second segmentation mask by performing inference of the augmented data through a neural network; comparing the second segmentation mask with the first segmentation mask using a loss function to obtain a comparison result; optimising weights and biases of the neural network based on the comparison result; updating the weights and biases of the neural network using backpropagation; and calculating the accuracy based on the comparison result; and outputting, by the at least one server, when the predetermined condition is met, the updated neural network weights and biases.
[0011] Preferably, the first or second segmentation mask is an image comprising a plurality of pixels or voxels, each pixel or voxel having a value representing a region of the MRI image out of a plurality of possible regions.
[0012] By "voxel" according to the present invention it is understood a volume pixel, in other words a three-dimensional pixel; a "pixel" denotes the corresponding two-dimensional picture element.
[0013] By the term "epoch" as used in the present invention it is understood a single cycle of training the neural network with all the training data. Reaching a number of epochs is equivalent to performing a number of cycles of training the neural network with all the training data.
[0014] "Reaching a configured threshold of accuracy" may mean that the accuracy calculated based on the comparison result reaches the configured threshold value. It may also mean that the accuracy calculated based on the comparison result does not improve for a configured number of epochs.
[0015] The loss function according to the present invention is meant to represent a function that quantifies the difference between the first segmentation mask and the second segmentation mask. Examples of loss functions for image segmentation are Binary Cross-Entropy (BCE), DICE, Shape-Aware Loss, etc. In a preferred embodiment, freely combinable with the previous ones, the loss function is DICE.
[0016] Weights and biases are both parameters inside the neural network and are well known to the person skilled in the art.
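By way of illustration only (this sketch is not part of the claimed method), the DICE loss named in paragraph [0015] can be expressed as a soft Dice loss; the probabilistic mask representation and the use of PyTorch are assumptions:

```python
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss between a predicted mask (probabilities in [0, 1]) and a
    reference mask of the same shape. Perfect overlap yields a loss of 0."""
    intersection = (pred * target).sum()
    union = pred.sum() + target.sum()
    return 1.0 - (2.0 * intersection + eps) / (union + eps)
```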
[0017] In a preferred embodiment, the region identifies (i.e. relates to) one type of adipose tissue or lack of adipose tissue, and the method is used for assessing an adipose tissue from the MRI image.
[0018] More preferably, the reference MRI image comprises the abdomen part of a human or animal body, and the method is used for assessing visceral adiposity from the MRI image.
[0019] Most preferably, the metadata associated with the reference MRI image comprises gender and/or ethnicity and/or any other relevant information about a human patient whose reference MRI image of the abdomen was received by the at least one server, and the region may identify visceral fat or subcutaneous fat or lack of fat.
[0020] In another preferred embodiment, freely combinable with the previous ones, the image parameters are at least one of brightness, saturation, rotation and orientation. However, other image parameters known in the field may be used, for example contrast, hue, stretching, etc.
[0021] In a further aspect, the invention relates to a method of assessing an MRI image, comprising the following steps: reading input data comprising an input MRI image and an associated metadata; obtaining a third segmentation mask by performing inference of the input data through a neural network trained according to the method of the above embodiments; and calculating a result based on the third segmentation mask.
[0022] Preferably, the third segmentation mask is an image comprising a plurality of pixels or voxels, each pixel or voxel having a value representing a region of the MRI image out of the plurality of possible regions for the first and the second segmentation mask. For example, if the possible regions for the first and the second segmentation mask are visceral fat (having example pixel/voxel value of 1) and subcutaneous fat (having example pixel/voxel value of 2) and lack of fat (having example pixel/voxel value of 0), the pixels/voxels of the third segmentation mask will also have values selected from 0, 1 and 2.
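Purely as an illustration of this value convention, the sketch below (an assumption, using NumPy) counts the pixels/voxels assigned to each region; an amount of tissue could then be derived by multiplying each count by the physical pixel/voxel volume, a conversion the patent does not specify:

```python
import numpy as np

# Hypothetical third segmentation mask using the example values from [0022]:
# 0 = lack of fat, 1 = visceral fat, 2 = subcutaneous fat.
mask = np.array([[0, 1, 1],
                 [2, 2, 0],
                 [1, 2, 0]])

visceral = int((mask == 1).sum())       # -> 3
subcutaneous = int((mask == 2).sum())   # -> 3
```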
[0023] More preferably, the result is an amount of at least one type of adipose tissue, and the method is used for assessing an adipose tissue from the MRI image.
[0024] More preferably, the MRI image comprises the abdomen part of a human or animal body, and the method is used for assessing visceral adiposity from the MRI image.
[0025] More preferably, the result is an amount of visceral fat and subcutaneous fat.
[0026] More preferably, the metadata associated with the input MRI image comprises gender and/or ethnicity and/or any other relevant information about a human patient whose input MRI image of the abdomen was used in the reading step, and preferably the result is a TOFI score based on the amount of visceral fat, subcutaneous fat, and the metadata associated with the input MRI image.
[0027] Most preferably, the TOFI score is calculated as a standard deviation of the ratio of the amount of visceral fat to the amount of subcutaneous fat. Advantageously, the TOFI score is calculated automatically (i.e. instantly), saving radiologists' time.
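One possible reading of this formula, sketched below, computes the visceral-to-subcutaneous ratio per slice of a volumetric segmentation mask and takes the standard deviation across slices; the per-slice axis is an assumption, since the patent does not state over what the deviation is taken:

```python
import numpy as np

def tofi_score(mask_3d: np.ndarray) -> float:
    """Hypothetical TOFI score: standard deviation of the per-slice ratio of
    visceral fat (value 1) to subcutaneous fat (value 2)."""
    ratios = []
    for slice_2d in mask_3d:
        visceral = (slice_2d == 1).sum()
        subcutaneous = (slice_2d == 2).sum()
        if subcutaneous > 0:            # skip slices with no subcutaneous fat
            ratios.append(visceral / subcutaneous)
    return float(np.std(ratios))
```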
[0028] In a preferred embodiment, freely combinable with the previous ones, the neural network is a convolutional neural network, preferably a u-net convolutional neural network. Convolutional neural network has the meaning well known in the art.
[0029] In a further aspect, the invention relates to a system for training artificial intelligence in order to assess an MRI image, comprising at least a processing means, wherein the processing means is configured to: receive a training data set including a plurality of labelled data entries, each labelled data entry of the plurality of labelled data entries including a reference MRI image having an associated metadata and a first segmentation mask; perform the following sub-steps, until a predetermined condition is met, preferably a number of epochs is reached or a configured threshold of accuracy is reached: augment the labelled data entries by modifying the reference MRI image's parameters to obtain augmented data; obtain a second segmentation mask by performing inference of the augmented data through a neural network; compare the second segmentation mask with the first segmentation mask using a loss function to obtain a comparison result; optimise weights and biases of the neural network based on the comparison result; update the weights and biases of the neural network using backpropagation; and calculate the accuracy based on the comparison result; and generate a neural network configuration comprising the updated neural network weights and biases, when the predetermined condition is met.
[0030] Thus, a system according to the invention may comprise one or more processing means, each of the processing means being able to execute one or more of the above steps. Also, the processing means can be run on one or more computers, which could be standalone workstations or virtual servers in a cloud environment.
[0031] All the expressions used in the description of the system for training artificial intelligence have the same meaning as the expressions used in the description of the method for training artificial intelligence.
[0032] In a further aspect, the invention relates to a system for using artificial intelligence to assess an MRI image, comprising at least a processing means, wherein the processing means is configured to: read input data comprising an input MRI image and an associated metadata; obtain a third segmentation mask by performing inference of the input data through a neural network trained by the system according to the above embodiment; and calculate a result based on the third segmentation mask.
[0033] Preferably, the first or second or third segmentation mask is an image comprising a plurality of pixels or voxels, each pixel or voxel having a value representing a region of the MRI image out of a plurality of possible regions.
[0034] More preferably, the region identifies one type of adipose tissue or lack of adipose tissue, and the result is an amount of at least one type of adipose tissue, and the system is used for assessing an adipose tissue from the MRI image.
[0035] More preferably, the reference MRI image comprises the abdomen part of a human or animal body and the system is used for assessing visceral adiposity from the MRI image.
[0036] Even more preferably, the metadata associated with the reference MRI image comprises gender and/or ethnicity and/or other relevant information about a human patient whose reference MRI image of the abdomen was received by the processing means. Also preferably, the region identifies visceral fat or subcutaneous fat or lack of fat. In addition, the result may be an amount of visceral fat and subcutaneous fat and a TOFI score based on the amount of visceral fat, subcutaneous fat, and the metadata associated with the input MRI image.
[0037] Most preferably, the TOFI score is calculated as a standard deviation of the ratio of the amount of visceral fat to the amount of subcutaneous fat.
[0038] In a preferred embodiment, freely combinable with the previous ones, the image parameters are at least one of brightness, saturation, rotation and orientation. However, other image parameters known in the field may be used, for example contrast, hue, stretching, etc.
[0039] In yet another preferred embodiment, freely combinable with the previous ones, the loss function is DICE.
[0040] In yet another preferred embodiment, freely combinable with the previous ones, the neural network is a convolutional neural network, preferably a u-net convolutional neural network.
[0041] Further features of the present invention will be apparent from the following description of exemplary embodiments with reference to the attached drawings.
Brief description of the drawings
[0042] Fig. 1 is a block diagram illustrating the hardware components of a computer being part of the system according to one embodiment of the invention.
[0043] Fig. 2 is a block diagram illustrating the logical components of the system according to one embodiment of the invention.
[0044] Fig. 3 is a flowchart illustrating the method of AI training according to one embodiment of the invention.
[0045] Fig. 4 is a flowchart illustrating the process of data labelling according to one embodiment of the invention.
[0046] Fig. 5 is a flowchart illustrating the method of assessing the MRI image according to one embodiment of the invention.
Detailed description
[0047] A system according to the invention comprises a frontend component and a backend component. Each of the above components can be run on one or more computers.
[0048] As presented in Fig. 1, a computer 101 mentioned in the above embodiments, in the most general form, includes a processing means 102, which is typically a CPU and/or a GPU, a memory 103, which is typically RAM, and a storage means 104, which is typically a hard disk. The computer may also include a network interface 105 to receive data from other computers and transmit data to other computers in the network. A computer can be a standalone workstation or a virtual server instance in a cloud computing environment.
[0049] The logical components of the system are shown in more detail in Fig. 2. The system 201 comprises a frontend component 210 and a backend component 220. The frontend component 210 includes modules which allow the user to interact with the system, such as a user input module 211 and a display module 212. The backend component 220 includes AI modules, such as an AI training module 221 and an AI inference module 222. The frontend and backend components communicate with each other via an Application Programming Interface (API).
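As a sketch only of how such an API boundary might look (the patent specifies neither a framework nor endpoint names; FastAPI and the /assess route below are assumptions):

```python
from fastapi import FastAPI, UploadFile

app = FastAPI()

@app.post("/assess")  # hypothetical route name
async def assess_endpoint(image: UploadFile, gender: str = "", ethnicity: str = ""):
    """Accepts an MRI image plus metadata from the frontend (user input
    module 211) and would hand them to the AI inference module 222."""
    data = await image.read()
    # result = ai_inference_module.infer(data, {"gender": gender, "ethnicity": ethnicity})
    return {"received_bytes": len(data)}  # placeholder response
```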
[0050] Fig. 3 illustrates the method of AI training according to the invention. In step 301, one or more servers, on which the AI training module 221 is running, receive a training data set including a plurality of labelled data entries. The process of data labelling is described in more detail further below. Each labelled data entry includes a reference MRI image having an associated metadata and a first segmentation mask. The MRI image may comprise a human or animal body or a part of the body, for example the abdomen. Metadata are data associated with an MRI image, and may comprise information about a human patient, for example the patient's gender and/or ethnicity. A segmentation mask is an image comprising a plurality of pixels or voxels, each pixel or voxel having a value representing a region of the MRI image out of a plurality of possible regions. The region may for example identify a specific type of adipose tissue, such as visceral fat or subcutaneous fat. The region may also identify a lack of adipose tissue.
[0051] In step 302, the labelled data are augmented to increase the variation of the data used for AI training. Data augmentation is performed by modifying the reference MRI image's parameters, for example brightness, saturation, rotation and/or orientation. Then, in step 303, inference is performed on the augmented data through a neural network to obtain the second segmentation mask. The preferred neural network is a convolutional neural network, most preferably a u-net. In step 304, the second segmentation mask is compared with the first segmentation mask using a loss function, which is typically a pixel-wise or voxel-wise comparison method. The preferred loss function is DICE. Then, based on the comparison result, in step 305 the neural network weights and biases are optimised and the accuracy is calculated. Subsequently, in step 306, the weights and biases are updated in the neural network using backpropagation.
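For example, the augmentations of step 302 could be expressed with torchvision transforms; the parameter ranges below are illustrative assumptions, not values from the patent:

```python
from torchvision import transforms

# Brightness, saturation, rotation and orientation changes for step 302.
# Saturation jitter presumes a 3-channel rendering of the MRI slice, and
# geometric transforms (rotation, flip) must be applied identically to the
# first segmentation mask so that image and mask stay aligned.
augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.2, saturation=0.2),
    transforms.RandomRotation(degrees=10),
    transforms.RandomHorizontalFlip(p=0.5),  # orientation change
])
```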
[0052] Steps 302-306 are repeated until a predetermined condition in step 307 is met. The condition may be reaching a number of epochs, i.e. a number of repetitions of steps 302-306. Other possible conditions are reaching a threshold of accuracy, or the accuracy not improving for a number of epochs (the so-called "early stopping" condition). Finally, when the condition in step 307 is met, the updated neural network weights and biases are output in step 308 and may be stored using the storage means 104.
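A minimal sketch of the loop of steps 302-308, assuming PyTorch, a u-net-style model, and the dice_loss function sketched earlier; the optimiser, epoch count, patience and the proxy accuracy measure are assumptions:

```python
import torch

def train(model, loader, epochs: int = 100, patience: int = 10):
    # dice_loss is the soft Dice loss sketched after paragraph [0015]
    optimiser = torch.optim.Adam(model.parameters())   # optimiser choice is an assumption
    best_accuracy, stale_epochs = 0.0, 0
    for epoch in range(epochs):                        # condition: number of epochs (step 307)
        epoch_loss = 0.0
        for image, first_mask in loader:               # augmented entries (step 302, applied in the dataset)
            second_mask = torch.sigmoid(model(image))  # inference (step 303)
            loss = dice_loss(second_mask, first_mask)  # comparison via loss function (step 304)
            optimiser.zero_grad()
            loss.backward()                            # backpropagation (step 306)
            optimiser.step()                           # optimise/update weights and biases (steps 305-306)
            epoch_loss += loss.item()
        accuracy = 1.0 - epoch_loss / len(loader)      # proxy accuracy from the comparison result (step 305)
        if accuracy > best_accuracy:
            best_accuracy, stale_epochs = accuracy, 0
        else:
            stale_epochs += 1
            if stale_epochs >= patience:               # "early stopping" condition (step 307)
                break
    return model.state_dict()                          # output weights and biases (step 308)
```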
[0053] Fig. 4 illustrates the process of data labelling. In step 401, a server receives a raw data set. Each raw data entry includes a reference MRI image having an associated metadata. In step 402, the server distributes each raw data entry to one or more designated human labellers for providing a segmentation mask. For example, the human labellers, using a computer program, mark the regions which identify specific types of adipose tissue, or regions which identify a lack of adipose tissue. When all designated human labellers have provided their segmentation masks for a raw data entry, the server, in step 403, consolidates the segmentation masks and outputs a labelled data entry which includes a reference MRI image having an associated metadata and a consolidated segmentation mask.
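The patent leaves the consolidation method of step 403 unspecified; one common approach, shown here purely as an assumption, is a per-pixel majority vote across the labellers' masks:

```python
import numpy as np

def consolidate(masks: list) -> np.ndarray:
    """Per-pixel majority vote across labeller masks of identical shape.
    Majority voting is an assumed strategy, not stated in the patent."""
    stacked = np.stack(masks)                       # shape: (labellers, H, W)
    n_labels = int(stacked.max()) + 1
    counts = np.zeros((n_labels,) + stacked.shape[1:], dtype=int)
    for label in range(n_labels):
        counts[label] = (stacked == label).sum(axis=0)  # votes per pixel for this label
    return counts.argmax(axis=0)                    # most frequent label per pixel
```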
[0054] Fig. 5 illustrates the method of assessing an MRI image according to the invention. Preferably, the method can be used for assessing an adipose tissue, for example visceral adiposity.
[0055] In step 501, the system receives input data comprising an input MRI image and an associated metadata. The input data may be provided to the system by the user input module 211 of the frontend component 210, which in turn uses an API to transmit the input data to the AI inference module 222 of the backend component. The input data may alternatively be provided by any other means and then transmitted to the AI inference module 222 of the backend component using the same API. The MRI image may for example comprise the abdomen part of a human or animal body. Metadata may comprise information about a human patient, for example the patient's gender and/or ethnicity.
[0056] Then, in step 502, a third segmentation mask is obtained by performing inference of the input data through a neural network which has been trained according to the method described above and illustrated in Fig. 3.
[0057] Finally, in step 503, a result is calculated based on the third segmentation mask. The result may for example include an amount of visceral fat and subcutaneous fat. In a further example, the result may also include a TOFI score calculated based on the amount of visceral fat, subcutaneous fat, and the metadata associated with the input MRI image.
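Under the same assumptions as the earlier sketches (PyTorch and the 0/1/2 value convention of paragraph [0022]), steps 502-503 could look as follows:

```python
import torch

def assess(model, image: torch.Tensor) -> dict:
    """Obtain the third segmentation mask by inference (step 502), then
    calculate the result from it (step 503)."""
    model.eval()
    with torch.no_grad():
        logits = model(image)              # inference through the trained network
    third_mask = logits.argmax(dim=1)      # per-pixel/voxel region label
    return {
        "visceral_fat_voxels": int((third_mask == 1).sum()),
        "subcutaneous_fat_voxels": int((third_mask == 2).sum()),
        # a TOFI score could then be derived from these amounts and the
        # metadata, e.g. with the hypothetical tofi_score sketch above
    }
```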
[0058] The calculated result may be transmitted by the AI inference module 222 via the API to the display module 212 of the frontend component, or to any other device or system.
[0059] Additionally or alternatively, any of the described methods may be embodied as instructions on a non-transitory computer-readable medium such that the instructions, when executed by a suitable module within the system (such as a processor), cause the module to perform a described method.
[0060] The invention has been described in terms of various specific embodiments. However, it will be appreciated that these are only examples which are used to illustrate the invention without limitation to those specific embodiments. Consequently, modifications can be made to the described embodiments without departing from the scope of the invention.

Claims (25)

  1. A method of training artificial intelligence in order to assess an MRI image, comprising the following steps: receiving, by at least one server, a training data set including a plurality of labelled data entries, each labelled data entry of the plurality of labelled data entries including a reference MRI image having an associated metadata and a first segmentation mask; performing, by the at least one server, the following sub-steps, until a predetermined condition is met, preferably a number of epochs is reached or a configured threshold of accuracy is reached: augmenting the labelled data entries by modifying the reference MRI image's parameters to obtain augmented data; obtaining a second segmentation mask by performing inference of the augmented data through a neural network; comparing the second segmentation mask with the first segmentation mask using a loss function to obtain a comparison result; optimising weights and biases of the neural network based on the comparison result; updating the weights and biases of the neural network using backpropagation; and calculating the accuracy based on the comparison result; and outputting, by the at least one server, when the predetermined condition is met, the updated neural network weights and biases.
  2. The method according to claim 1, wherein the first or second segmentation mask is an image comprising a plurality of pixels or voxels, each pixel or voxel having a value representing a region of the MRI image out of a plurality of possible regions.
  3. The method according to claim 2 for assessing an adipose tissue from the MRI image, wherein the region identifies one type of adipose tissue or lack of adipose tissue.
  4. The method according to claim 3 for assessing visceral adiposity from the MRI image, wherein the reference MRI image comprises the abdomen part of a human or animal body.
  5. The method according to claim 4, wherein the metadata associated with the reference MRI image comprises gender and ethnicity information about a human patient whose reference MRI image of the abdomen was received by the at least one server, and the region identifies visceral fat or subcutaneous fat or lack of fat.
  6. The method according to any of the claims 1-5, wherein the image parameters are at least one of brightness, saturation, rotation, orientation.
  7. The method according to any of the claims 1-6, wherein the loss function is DICE.
  8. A method of assessing an MRI image, comprising the following steps: reading input data comprising an input MRI image and an associated metadata; obtaining a third segmentation mask by performing inference of the input data through a neural network trained according to any of claims 1 to 7; and calculating a result based on the third segmentation mask.
  9. The method according to claim 8, wherein the third segmentation mask is an image comprising a plurality of pixels or voxels, each pixel or voxel having a value representing a region of the MRI image out of the plurality of possible regions for the first and the second segmentation mask.
  10. The method according to claim 8 or 9 for assessing an adipose tissue from the MRI image, wherein the result is an amount of at least one type of adipose tissue.
  11. The method according to any of the claims 8 to 10 for assessing visceral adiposity from the MRI image, wherein the MRI image comprises the abdomen part of a human or animal body.
  12. The method according to claim 11, wherein the result is an amount of visceral fat and subcutaneous fat.
  13. The method according to claim 12, wherein the metadata associated with the input MRI image comprises gender and ethnicity information about a human patient whose input MRI image of the abdomen was used in the reading step, and the result is a TOFI score based on the amount of visceral fat, subcutaneous fat, and the metadata associated with the input MRI image.
  14. The method according to claim 13, wherein the TOFI score is calculated as a standard deviation of the ratio of the amount of visceral fat to the amount of subcutaneous fat.
  15. The method according to any of the preceding claims, wherein the neural network is a convolutional neural network, preferably a u-net convolutional neural network.
  16. A system for training artificial intelligence in order to assess an MRI image, comprising at least a processing means, wherein the processing means is configured to: receive a training data set including a plurality of labelled data entries, each labelled data entry of the plurality of labelled data entries including a reference MRI image having an associated metadata and a first segmentation mask; perform the following sub-steps, until a predetermined condition is met, preferably a number of epochs is reached or a configured threshold of accuracy is reached: augment the labelled data entries by modifying the reference MRI image's parameters to obtain augmented data; obtain a second segmentation mask by performing inference of the augmented data through a neural network; compare the second segmentation mask with the first segmentation mask using a loss function to obtain a comparison result; optimise weights and biases of the neural network based on the comparison result; update the weights and biases of the neural network using backpropagation; and calculate the accuracy based on the comparison result; and generate a neural network configuration comprising the updated neural network weights and biases, when the predetermined condition is met.
  17. A system for using artificial intelligence to assess an MRI image, comprising at least a processing means, wherein the processing means is configured to: read input data comprising an input MRI image and an associated metadata; obtain a third segmentation mask by performing inference of the input data through a neural network trained by the system according to claim 16; and calculate a result based on the third segmentation mask.
  18. The system according to claim 17, wherein the first or second or third segmentation mask is an image comprising a plurality of pixels or voxels, each pixel or voxel having a value representing a region of the MRI image out of a plurality of possible regions.
  19. The system according to claim 18 for assessing an adipose tissue from the MRI image, wherein the region identifies one type of adipose tissue or lack of adipose tissue, and the result is an amount of at least one type of adipose tissue.
  20. The system according to claim 19 for assessing visceral adiposity from the MRI image, wherein the reference MRI image comprises the abdomen part of a human or animal body.
  21. The system according to claim 20, wherein the metadata associated with the reference MRI image comprises gender and ethnicity information about a human patient whose reference MRI image of the abdomen was received by the processing means, and the region identifies visceral fat or subcutaneous fat or lack of fat, and the result is an amount of visceral fat and subcutaneous fat and a TOFI score based on the amount of visceral fat, subcutaneous fat, and the metadata associated with the input MRI image.
  22. The system according to claim 21, wherein the TOFI score is calculated as a standard deviation of the ratio of the amount of visceral fat to the amount of subcutaneous fat.
  23. The system according to any of the claims 16-22, wherein the image parameters are at least one of brightness, saturation, rotation, orientation.
  24. The system according to any of the claims 16-23, wherein the loss function is DICE.
  25. The system according to any of the claims 16-24, wherein the neural network is a convolutional neural network, preferably a u-net convolutional neural network.
GB2211511.7A 2022-08-08 2022-08-08 A method and an artificial intelligence system for assessing an MRI image Pending GB2621332A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB2211511.7A GB2621332A (en) 2022-08-08 2022-08-08 A method and an artificial intelligence system for assessing an MRI image
PCT/IB2023/057971 WO2024033789A1 (en) 2022-08-08 2023-08-07 A method and an artificial intelligence system for assessing adiposity using abdomen mri image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2211511.7A GB2621332A (en) 2022-08-08 2022-08-08 A method and an artificial intelligence system for assessing an MRI image

Publications (2)

Publication Number Publication Date
GB202211511D0 GB202211511D0 (en) 2022-09-21
GB2621332A 2024-02-14

Family

ID=84546258

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2211511.7A Pending GB2621332A (en) 2022-08-08 2022-08-08 A method and an artificial intelligence system for assessing an MRI image

Country Status (2)

Country Link
GB (1) GB2621332A (en)
WO (1) WO2024033789A1 (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180165808A1 (en) * 2016-06-27 2018-06-14 University Of Central Florida Research Foundation, Inc. System and method for image-based quantification of white and brown adipose tissue at the whole-body, organ and body-region levels
WO2019182520A1 (en) * 2018-03-22 2019-09-26 Agency For Science, Technology And Research Method and system of segmenting image of abdomen of human into image segments corresponding to fat compartments
CN110517241A (en) * 2019-08-23 2019-11-29 吉林大学第一医院 Method based on the full-automatic stomach fat quantitative analysis of NMR imaging IDEAL-IQ sequence
WO2020034469A1 (en) * 2018-08-13 2020-02-20 Beijing Ande Yizhi Technology Co., Ltd. Method and apparatus for classifying a brain anomaly based on a 3d mri image
US20200058126A1 (en) * 2018-08-17 2020-02-20 12 Sigma Technologies Image segmentation and object detection using fully convolutional neural network
WO2020190821A1 (en) * 2019-03-15 2020-09-24 Genentech, Inc. Deep convolutional neural networks for tumor segmentation with positron emission tomography
CN111709952A (en) * 2020-05-21 2020-09-25 无锡太湖学院 MRI brain tumor automatic segmentation method based on edge feature optimization and double-flow decoding convolutional neural network
CN111862087A (en) * 2020-08-03 2020-10-30 张政 Liver and pancreas steatosis distinguishing method based on deep learning
CN113205566A (en) * 2021-04-23 2021-08-03 复旦大学 Abdomen three-dimensional medical image conversion generation method based on deep learning
US20210383537A1 (en) * 2020-06-09 2021-12-09 Siemens Healthcare Gmbh Synthesis of contrast enhanced medical images
WO2022051290A1 (en) * 2020-09-02 2022-03-10 Genentech, Inc. Connected machine-learning models with joint training for lesion detection
CN114202545A (en) * 2020-08-27 2022-03-18 东北大学秦皇岛分校 UNet + + based low-grade glioma image segmentation method
CN114549417A (en) * 2022-01-20 2022-05-27 高欣 Abdominal fat quantification method based on deep learning and nuclear magnetic resonance Dixon

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018222755A1 (en) * 2017-05-30 2018-12-06 Arterys Inc. Automated lesion detection, segmentation, and longitudinal identification
US11263749B1 (en) 2021-06-04 2022-03-01 In-Med Prognostics Inc. Predictive prognosis based on multimodal analysis

Also Published As

Publication number Publication date
GB202211511D0 (en) 2022-09-21
WO2024033789A1 (en) 2024-02-15
