WO2024033789A1 - Artificial intelligence method and system for assessing adiposity using an MRI image of the abdomen - Google Patents

Artificial intelligence method and system for assessing adiposity using an MRI image of the abdomen

Info

Publication number
WO2024033789A1
WO2024033789A1 (PCT/IB2023/057971)
Authority
WO
WIPO (PCT)
Prior art keywords
mri image
neural network
segmentation mask
assessing
fat
Prior art date
Application number
PCT/IB2023/057971
Other languages
English (en)
Inventor
Wareed Hassn ALENAINI
Original Assignee
Twinn Health Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Twinn Health Limited filed Critical Twinn Health Limited
Publication of WO2024033789A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • G06T2207/20084Artificial neural networks [ANN]
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Definitions

  • aspects of the invention generally relate to methods and systems for analysing images generated by Magnetic Resonance Imaging (MRI) devices.
  • Anthropometry measurements, e.g. body mass index (BMI) and waist circumference, are commonly used measures of adiposity.
  • Current tools such as computed tomography (CT) and dual-energy X-ray absorptiometry (DXA) are invasive and involve radiation exposure.
  • MRI remains the only viable method of assessing some of these medical conditions; however, it is a time-consuming process when performed manually, and is often further delayed by a lack of sufficiently qualified personnel.
  • US patent US11263749B1 describes a method and system for training artificial intelligence in order to perform automated diagnosis of brain disorders.
  • the automated diagnosis is achieved by obtaining an image, for example from MRI, and obtaining text input comprising information about a patient, automatically segmenting the image using a neural network, extracting volumes of one or more structures of the region of interest, and determining a feature associated with one or more structures.
  • a method of training artificial intelligence in order to assess an MRI image comprising the following steps: receiving, by at least one server, a training data set including a plurality of labelled data entries, each labelled data entry of the plurality of labelled data entries including a reference MRI image having an associated metadata and a first segmentation mask; performing, by the at least one server, the following sub-steps, until a predetermined condition is met, preferably a number of epochs is reached or a configured threshold of accuracy is reached: augmenting the labelled data entries by modifying the reference MRI image’s parameters to obtain augmented data; obtaining a second segmentation mask by performing inference of the augmented data through a neural network; comparing the second segmentation mask with the first segmentation mask using a loss function to obtain a comparison result; optimising weights and biases of the neural network based on the comparison result; updating the weights and biases of the neural network using backpropagation; and calculating the accuracy based on the comparison result.
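The claimed training loop can be sketched in Python as follows. This is an illustrative skeleton under assumed interfaces: the names `augment`, `infer`, `backpropagate` and `loss_fn` are placeholders, not the patent's actual code, and deriving accuracy directly from the loss is a simplification.

```python
def train(entries, model, loss_fn, augment, max_epochs=100, threshold=0.95):
    """Repeat the claimed sub-steps until a predetermined condition is met:
    a number of epochs is reached or a configured accuracy threshold is hit."""
    for epoch in range(max_epochs):
        losses = []
        for reference_image, first_mask in entries:
            augmented = augment(reference_image)      # augment the labelled data
            second_mask = model.infer(augmented)      # inference -> second mask
            loss = loss_fn(second_mask, first_mask)   # compare masks (e.g. DICE)
            model.backpropagate(loss)                 # optimise weights and biases
            losses.append(loss)
        accuracy = 1.0 - sum(losses) / len(losses)    # accuracy from comparison result
        if accuracy >= threshold:                     # configured threshold reached
            return model, epoch + 1
    return model, max_epochs
```

With any model object exposing `infer` and `backpropagate`, the loop stops either at the epoch limit or as soon as the accuracy condition is satisfied.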
  • the first or second segmentation mask is an image comprising a plurality of pixels or voxels, each pixel or voxel having a value representing a region of the MRI image out of a plurality of possible regions.
  • By “voxel” according to the present invention is understood a volume pixel, in other words a three-dimensional pixel.
  • By “epoch” as used in the present invention is understood a single cycle of training the neural network with all the training data. Reaching a number of epochs is equivalent to performing that number of cycles of training the neural network with all the training data.
  • “Reaching a configured threshold of accuracy” may mean that the accuracy calculated based on the comparison result reaches the configured threshold value. It may also mean that the accuracy calculated based on the comparison result does not improve for a configured number of epochs.
  • the loss function according to the present invention is meant to represent a function that quantifies the difference between the first segmentation mask and the second segmentation mask.
  • examples of loss functions for image segmentation are Binary Cross-Entropy (BCE), DICE, Shape-Aware Loss, etc.
  • the loss function is DICE.
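A minimal soft-DICE loss of the kind referred to above might look like the following NumPy sketch; the patent itself gives no implementation, so this is only one common formulation.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """1 - DICE coefficient between two same-shaped masks with values in [0, 1].
    Returns 0 for perfect overlap with the reference mask and ~1 for no overlap."""
    intersection = np.sum(pred * target)
    total = np.sum(pred) + np.sum(target)
    return 1.0 - (2.0 * intersection + eps) / (total + eps)
```

The `eps` term only prevents division by zero when both masks are empty.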
  • Weights and biases are both parameters inside the neural network and are well known to the person skilled in the art.
  • the region identifies (i.e. relates to) one type of adipose tissue or lack of adipose tissue, and the method is used for assessing an adipose tissue from the MRI image.
  • the reference MRI image comprises the abdomen part of a human or animal body, and the method is used for assessing visceral adiposity from the MRI image.
  • the metadata associated with the reference MRI image comprises gender and/or ethnicity and/or any other relevant information about a human patient whose reference MRI image of the abdomen was received by the at least one server, and the region may identify visceral fat or subcutaneous fat or lack of fat.
  • the image parameters are at least one of brightness, saturation, rotation, and orientation.
  • other image parameters known in the field may be used, for example contrast, hue, stretching, etc.
  • the invention relates to a method of assessing an MRI image, comprising the following steps: reading input data comprising an input MRI image and an associated metadata; obtaining a third segmentation mask by performing inference of the input data through a neural network trained according to the method of the above embodiments; calculating a result based on the third segmentation mask.
  • the third segmentation mask is an image comprising a plurality of pixels or voxels, each pixel or voxel having a value representing a region of the MRI image out of the plurality of possible regions for the first and the second segmentation mask.
  • the possible regions for the first and the second segmentation mask are visceral fat (having an example pixel/voxel value of 1), subcutaneous fat (having an example pixel/voxel value of 2), and lack of fat (having an example pixel/voxel value of 0).
  • the pixels/voxels of the third segmentation mask will also have values selected from 0, 1 and 2.
  • the result is an amount of at least one type of adipose tissue, and the method is used for assessing an adipose tissue from the MRI image.
  • the MRI image comprises the abdomen part of a human or animal body, and the method is used for assessing visceral adiposity from the MRI image.
  • the result is an amount of visceral fat and subcutaneous fat.
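With the example label values given above (0 = lack of fat, 1 = visceral fat, 2 = subcutaneous fat), the amounts can be obtained by counting labelled voxels. The `voxel_volume_ml` parameter is an assumption for illustration; the patent does not specify units.

```python
import numpy as np

def fat_amounts(mask, voxel_volume_ml=1.0):
    """Return (visceral, subcutaneous) fat amounts from a labelled mask,
    using the example label values 1 (visceral) and 2 (subcutaneous)."""
    visceral = np.count_nonzero(mask == 1) * voxel_volume_ml
    subcutaneous = np.count_nonzero(mask == 2) * voxel_volume_ml
    return visceral, subcutaneous
```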
  • the metadata associated with the input MRI image comprises gender and/or ethnicity and/or any other relevant information about a human patient whose input MRI image of the abdomen was used in the reading step and preferably the result is a TOFI score based on the amount of visceral fat, subcutaneous fat, and the metadata associated with the input MRI image.
  • the TOFI score is calculated as a standard deviation of the ratio of the amount of visceral fat to the amount of subcutaneous fat.
  • the TOFI score is automatically (i.e. instantly) calculated, saving radiologists’ time.
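The phrase "standard deviation of the ratio" implies several ratio samples; one plausible reading, assumed here and not stated in the patent, is one visceral/subcutaneous ratio per axial slice of the 3-D mask.

```python
import statistics

def tofi_score(visceral_per_slice, subcutaneous_per_slice):
    """Assumed reading of the TOFI score: one visceral/subcutaneous ratio
    per slice, score = population standard deviation of those ratios."""
    ratios = [v / s
              for v, s in zip(visceral_per_slice, subcutaneous_per_slice)
              if s > 0]  # skip slices without subcutaneous fat
    return statistics.pstdev(ratios)
```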
  • the neural network is a convolutional neural network, preferably a u-net convolutional neural network.
  • Convolutional neural network has the meaning well known in the art.
  • the invention relates to a system for training artificial intelligence in order to assess an MRI image, comprising at least a processing means, wherein the processing means is configured to: receive a training data set including a plurality of labelled data entries, each labelled data entry of the plurality of labelled data entries including a reference MRI image having an associated metadata and a first segmentation mask; perform the following sub-steps, until a predetermined condition is met, preferably a number of epochs is reached or a configured threshold of accuracy is reached: augment the labelled data entries by modifying the reference MRI image’s parameters to obtain augmented data; obtain a second segmentation mask by performing inference of the augmented data through a neural network; compare the second segmentation mask with the first segmentation mask using a loss function to obtain a comparison result; optimise weights and biases of the neural network based on the comparison result; update the weights and biases of the neural network using backpropagation; and calculate the accuracy based on the comparison result;
  • a system according to the invention may comprise one or more processing means, each of the processing means being able to execute one or more of the above steps.
  • the processing means can be run on one or more computers, which could be standalone workstations or virtual servers in a cloud environment.
  • the invention relates to a system for using artificial intelligence to assess an MRI image, comprising at least a processing means, wherein the processing means is configured to: read input data comprising an input MRI image and an associated metadata; obtain a third segmentation mask by performing inference of the input data through a neural network trained by the system according to the above embodiment; calculate a result based on the third segmentation mask.
  • the first or second or third segmentation mask is an image comprising a plurality of pixels or voxels, each pixel or voxel having a value representing a region of the MRI image out of a plurality of possible regions.
  • the region identifies one type of adipose tissue or lack of adipose tissue, and the result is an amount of at least one type of adipose tissue, and the system is used for assessing an adipose tissue from the MRI image.
  • the reference MRI image comprises the abdomen part of a human or animal body and the system is used for assessing visceral adiposity from the MRI image.
  • the metadata associated with the reference MRI image comprises gender and/or ethnicity and/or other relevant information about a human patient whose reference MRI image of the abdomen was received by the processing means.
  • the region identifies visceral fat or subcutaneous fat or lack of fat.
  • the result may be an amount of visceral fat and subcutaneous fat and a TOFI score based on the amount of visceral fat, subcutaneous fat, and the metadata associated with the input MRI image.
  • the TOFI score is calculated as a standard deviation of the ratio of the amount of visceral fat to the amount of subcutaneous fat.
  • the image parameters are at least one of brightness, saturation, rotation, and orientation.
  • other image parameters known in the field may be used, for example contrast, hue, stretching, etc.
  • the loss function is DICE.
  • the neural network is a convolutional neural network, preferably a u-net convolutional neural network.
  • Fig. 1 is a block diagram illustrating the hardware components of a computer being part of the system according to one embodiment of the invention.
  • Fig. 2 is a block diagram illustrating the logical components of the system according to one embodiment of the invention.
  • Fig. 3 is a flowchart illustrating the method of Al training according to one embodiment of the invention.
  • Fig. 4 is a flowchart illustrating the process of data labelling according to one embodiment of the invention.
  • Fig. 5 is a flowchart illustrating the method of assessing the MRI image according to one embodiment of the invention.
  • a system according to the invention comprises a frontend component and a backend component. Each of the above components can be run on one or more computers.
  • a computer 101 mentioned in the above embodiments, in its most general form, includes a processing means 102, which is typically a CPU and/or GPU, a memory 103, which is typically RAM, and a storage means 104, which is typically a hard disk.
  • the computer may also include a network interface 105 to receive the data from other computers and transmit the data to other computers in the network.
  • a computer can be a standalone workstation or a virtual server instance in a cloud computing environment.
  • the logical components of the system are shown in more detail in Fig. 2.
  • the system 201 comprises frontend component 210 and backend component 220.
  • Frontend component 210 includes modules which allow the user to interact with the system, such as a user input module 211 and a display module 212.
  • Backend component 220 includes Al modules, such as Al training module 221 and Al inference module 222. Frontend and backend components communicate with each other via Application Programming Interface (API).
  • Fig. 3 illustrates the method of Al training according to the invention.
  • one or more servers on which the Al training module 221 is running receive a training data set including a plurality of labelled data entries.
  • Each labelled data entry includes a reference MRI image having an associated metadata and a first segmentation mask.
  • an MRI image may comprise a human or animal body or a part of the body, for example the abdomen.
  • Metadata are data associated with an MRI image, and may comprise information about a human patient, for example patient’s gender and/or ethnicity.
  • a segmentation mask is an image comprising a plurality of pixels or voxels, each pixel or voxel having a value representing a region of the MRI image out of a plurality of possible regions.
  • the region may for example identify a specific type of adipose tissue, such as visceral fat or subcutaneous fat.
  • the region may also identify a lack of adipose tissue.
  • the labelled data are augmented to increase the variation of data used for Al training.
  • Data augmentation is performed by modifying the reference MRI image’s parameters, for example brightness, saturation, rotation and/or orientation.
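One way to realise the augmentation step for the listed parameters is sketched below; the specific ranges and probabilities are assumptions for illustration, not values from the patent.

```python
import random
import numpy as np

def augment(image):
    """Randomly vary brightness, rotation and orientation of a 2-D image."""
    image = image * random.uniform(0.8, 1.2)          # brightness scaling
    image = np.rot90(image, k=random.randint(0, 3))   # rotation (multiples of 90 deg)
    if random.random() < 0.5:                         # orientation (mirror flip)
        image = np.fliplr(image)
    return image
```

Applying `augment` repeatedly to each reference image increases the variation of the training data without collecting new scans.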
  • the inference is performed on the augmented data through a neural network, to obtain the second segmentation mask.
  • the preferred neural network is a convolutional neural network, and the most preferred is u-net.
  • the second segmentation mask is compared with the first segmentation mask using a loss function, which typically is a pixel-wise or voxel-wise comparison method.
  • the preferred loss function is DICE.
  • the neural network weights and biases are optimised and the accuracy is calculated.
  • the weights and biases are updated in the neural network using backpropagation.
  • Steps 302-306 are repeated until a predetermined condition in step 307 is met.
  • the condition may be reaching a number of epochs, which is the number of repetitions of steps 302-306.
  • Other possible conditions are reaching the threshold of accuracy, or not improving the accuracy for a number of epochs (so called “early stopping” condition).
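The "early stopping" condition can be expressed as a check over the per-epoch accuracy history; the `patience` value here is an assumed configuration parameter.

```python
def should_stop(accuracy_history, patience=5):
    """True when accuracy has not improved for `patience` consecutive epochs."""
    if len(accuracy_history) <= patience:
        return False  # not enough history to judge yet
    best_before = max(accuracy_history[:-patience])
    # Stop if none of the last `patience` epochs beat the earlier best.
    return max(accuracy_history[-patience:]) <= best_before
```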
  • Once the condition in step 307 is met, the updated neural network weights and biases are outputted in step 308 and may be stored using the storage means 104.
  • Fig. 4 illustrates the process of data labelling.
  • a server receives a raw data set. Each raw data entry includes a reference MRI image having an associated metadata.
  • the server distributes each raw data entry to one or more designated human labellers for providing a segmentation mask.
  • the human labellers, using a computer program, mark the regions which identify specific types of adipose tissue, or regions which identify lack of adipose tissue.
  • the server in step 403 consolidates the segmentation masks and outputs labelled data entry which includes a reference MRI image having an associated metadata and a consolidated segmentation mask.
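The patent does not specify how the masks from several labellers are consolidated; a per-voxel majority vote is one plausible sketch of step 403.

```python
import numpy as np

def consolidate(masks):
    """Combine several labellers' masks into one by per-voxel majority vote."""
    stacked = np.stack(masks)                 # shape: (n_labellers, H, W)
    labels = np.arange(stacked.max() + 1)     # all label values present
    # Count, for every voxel, how many labellers assigned each label.
    counts = (stacked[None, ...] == labels[:, None, None, None]).sum(axis=1)
    return counts.argmax(axis=0)              # most frequent label wins
```

Ties are broken in favour of the lower label value, which is an arbitrary choice of this sketch.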
  • Fig. 5 illustrates the method of assessing an MRI image according to the invention.
  • the method can be used for assessing an adipose tissue, for example visceral adiposity.
  • the system receives input data comprising an input MRI image and an associated metadata.
  • the input data may be provided to the system by user input module 211 of the frontend component 210, which in turn uses an API to transmit the input data to the Al inference module 222 of the backend component.
  • the input data may be alternatively provided by any other means and then transmitted to the Al inference module 222 of the backend component using the same API.
  • the MRI image may for example comprise the abdomen part of a human or animal body.
  • Metadata may comprise information about a human patient, for example patient’s gender and/or ethnicity.
  • a third segmentation mask is obtained by performing inference of the input data through a neural network which has been earlier trained according to the method described above and illustrated in Fig. 3.
  • a result is calculated based on the third segmentation mask.
  • the result may for example include an amount of visceral fat and subcutaneous fat.
  • the result may also include a TOFI score calculated based on the amount of visceral fat, subcutaneous fat, and the metadata associated with the input MRI image.
  • the calculated result may be transmitted by Al inference module 222 via API to the display module 212 of the frontend component, or to any other device or system.
  • any of the described methods may be embodied as instructions on a non-transitory computer-readable medium such that, when executed by a suitable module within the system (such as a processor), they cause the module to perform a described method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention concerns a system and a method of training artificial intelligence to assess an MRI image, comprising the steps of: receiving, by at least one server, a training data set including a plurality of labelled data entries, each labelled data entry of the plurality of labelled data entries including a reference MRI image having associated metadata and a first segmentation mask; performing, by the server or servers, until a predetermined condition is met, the following sub-steps: augmenting the labelled data entries by modifying the reference MRI image's parameters to obtain augmented data; obtaining a second segmentation mask by performing inference of the augmented data through a neural network; comparing the second segmentation mask with the first segmentation mask using a loss function to obtain a comparison result; optimising the weights and biases of the neural network based on the comparison result; updating the weights and biases of the neural network using backpropagation; and calculating the accuracy based on the comparison result; and a method of assessing an MRI image using said trained AI.
PCT/IB2023/057971 2022-08-08 2023-08-07 Artificial intelligence method and system for assessing adiposity using an MRI image of the abdomen WO2024033789A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB2211511.7 2022-08-08
GB2211511.7A GB2621332A (en) 2022-08-08 2022-08-08 A method and an artificial intelligence system for assessing an MRI image

Publications (1)

Publication Number Publication Date
WO2024033789A1 (fr) 2024-02-15

Family

ID=84546258

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2023/057971 WO2024033789A1 (fr) Artificial intelligence method and system for assessing adiposity using an MRI image of the abdomen

Country Status (2)

Country Link
GB (1) GB2621332A (fr)
WO (1) WO2024033789A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200085382A1 (en) * 2017-05-30 2020-03-19 Arterys Inc. Automated lesion detection, segmentation, and longitudinal identification
US11263749B1 (en) 2021-06-04 2022-03-01 In-Med Prognostics Inc. Predictive prognosis based on multimodal analysis

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
US10157462B2 (en) * 2016-06-27 2018-12-18 University Of Central Florida Research Foundation, Inc. System and method for image-based quantification of white and brown adipose tissue at the whole-body, organ and body-region levels
WO2019182520A1 (fr) * 2018-03-22 2019-09-26 Agency For Science, Technology And Research Procédé et système de segmentation d'image d'abdomen humain en segments d'image correspondant à des compartiments de graisse
CN109190682B (zh) * 2018-08-13 2020-12-18 北京安德医智科技有限公司 A method and device for classifying brain abnormalities based on 3D MRI images
US10304193B1 (en) * 2018-08-17 2019-05-28 12 Sigma Technologies Image segmentation and object detection using fully convolutional neural network
EP3939002A1 (fr) * 2019-03-15 2022-01-19 Genentech, Inc. Réseaux neuronaux à convolution profonde pour segmentation de tumeur à tomographie par émission de positrons
CN110517241A (zh) * 2019-08-23 2019-11-29 吉林大学第一医院 Method for fully automatic quantitative analysis of abdominal fat based on the MRI IDEAL-IQ sequence
CN111709952B (zh) * 2020-05-21 2023-04-18 无锡太湖学院 Automatic MRI brain tumour segmentation method using a dual-stream decoding convolutional neural network with edge-feature optimisation
US11308613B2 (en) * 2020-06-09 2022-04-19 Siemens Healthcare Gmbh Synthesis of contrast enhanced medical images
CN111862087A (zh) * 2020-08-03 2020-10-30 张政 A deep-learning-based method for discriminating hepatic and pancreatic steatosis
CN114202545A (zh) * 2020-08-27 2022-03-18 东北大学秦皇岛分校 A UNet++-based image segmentation method for low-grade gliomas
WO2022051290A1 (fr) * 2020-09-02 2022-03-10 Genentech, Inc. Modèles d'apprentissage automatique connectés avec entraînement conjoint pour la détection de lésion
CN113205566A (zh) * 2021-04-23 2021-08-03 复旦大学 Deep-learning-based method for converting and generating three-dimensional abdominal medical images
CN114549417A (zh) * 2022-01-20 2022-05-27 高欣 A method for abdominal fat quantification based on deep learning and Dixon MRI

Non-Patent Citations (4)

Title
BHANU PRAKASH KN ET AL: "CAFT: a deep learning-based comprehensive abdominal fat analysis tool for large cohort studies", MAGNETIC RESONANCE MATERIALS IN PHYSICS, BIOLOGY AND MEDICINE, SPRINGER INTERNATIONAL PUBLISHING, CHAM, vol. 35, no. 2, 2 August 2021 (2021-08-02), pages 205 - 220, XP037809328, DOI: 10.1007/S10334-021-00946-9 *
MONNEREAU C ET AL: "Associations of adult genetic risk scores for adiposity with childhood abdominal, liver and pericardial fat assessed by magnetic resonance imaging", INTERNATIONAL JOURNAL OF OBESITY, vol. 42, no. 4, 7 December 2017 (2017-12-07), London, pages 897 - 904, XP093118151, ISSN: 0307-0565, Retrieved from the Internet <URL:http://www.nature.com/articles/ijo2017302> DOI: 10.1038/ijo.2017.302 *
SANTIAGO ESTRADA ET AL: "FatSegNet : A Fully Automated Deep Learning Pipeline for Adipose Tissue Segmentation on Abdominal Dixon MRI", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 3 April 2019 (2019-04-03), XP081526554, DOI: 10.1002/MRM.28022 *
WU ZHANXUAN E ET AL: "Untargeted metabolomics reveals plasma metabolites predictive of ectopic fat in pancreas and liver as assessed by magnetic resonance imaging: the TOFI_Asia study", INTERNATIONAL JOURNAL OF OBESITY, NATURE PUBLISHING GROUP UK, LONDON, vol. 45, no. 8, 16 May 2021 (2021-05-16), pages 1844 - 1854, XP037519053, ISSN: 0307-0565, [retrieved on 20210516], DOI: 10.1038/S41366-021-00854-X *

Also Published As

Publication number Publication date
GB2621332A (en) 2024-02-14
GB202211511D0 (en) 2022-09-21

Similar Documents

Publication Publication Date Title
US11645748B2 (en) Three-dimensional automatic location system for epileptogenic focus based on deep learning
Zhao et al. A novel U-Net approach to segment the cardiac chamber in magnetic resonance images with ghost artifacts
RU2677764C2 Coordinate registration of medical images
WO2021128825A1 Three-dimensional target detection method, method and device for training a three-dimensional target detection model, apparatus, and storage medium
Liu et al. Automatic whole heart segmentation using a two-stage u-net framework and an adaptive threshold window
CN110910342B (zh) 通过使用深度学习来分析骨骼创伤
US11615508B2 (en) Systems and methods for consistent presentation of medical images using deep neural networks
US11062443B2 (en) Similarity determination apparatus, similarity determination method, and program
Wehbe et al. Deep learning for cardiovascular imaging: A review
da Silva et al. A cascade approach for automatic segmentation of cardiac structures in short-axis cine-MR images using deep neural networks
Davamani et al. Biomedical image segmentation by deep learning methods
Faisal et al. Computer assisted diagnostic system in tumor radiography
Benčević et al. Epicardial adipose tissue segmentation from CT images with a semi-3D neural network
Pollack et al. Deep learning prediction of voxel-level liver stiffness in patients with nonalcoholic fatty liver disease
CN114612484B (zh) 基于无监督学习的视网膜oct图像分割方法
CN115861172A (zh) 基于自适应正则化光流模型的室壁运动估计方法及装置
US20240005484A1 (en) Detecting anatomical abnormalities by segmentation results with and without shape priors
WO2024033789A1 (fr) Artificial intelligence method and system for assessing adiposity using an MRI image of the abdomen
US11893735B2 (en) Similarity determination apparatus, similarity determination method, and similarity determination program
EP4064179A1 (fr) Masquage de pixels indésirables dans une image
EP4312229A1 Information processing apparatus, information processing method, program, trained model, and learning model generation method
US20240144469A1 (en) Systems and methods for automatic cardiac image analysis
US20230169653A1 (en) Medical image processing apparatus, medical image processing method, and storage medium
Sharif et al. 3D oncological PET volume analysis using CNN and LVQNN
TG Femur bone volumetric estimation for osteoporosis classification based on deep learning with tuna jellyfish optimization using X-ray images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23790065

Country of ref document: EP

Kind code of ref document: A1