CN117455931A - Segmentation and quantitative measurement method based on deep learning MR body adipose tissue

Segmentation and quantitative measurement method based on deep learning MR body adipose tissue

Info

Publication number
CN117455931A
CN117455931A (application CN202311323381.2A)
Authority
CN
China
Prior art keywords
fat
segmentation
model
image
adipose tissue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311323381.2A
Other languages
Chinese (zh)
Inventor
谭友果
陈小燕
蔡端芳
詹孔才
黄莎
王丙龙
廖云鑫
陈潇霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zigong Mental Health Center
Original Assignee
Zigong Mental Health Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zigong Mental Health Center
Priority to CN202311323381.2A
Publication of CN117455931A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a deep-learning-based method for segmenting and quantitatively measuring body adipose tissue in MR images, comprising the following specific steps: S1, establishing a fat segmentation model and defining the use case: a user sample for researching and developing the body fat MR segmentation model is defined, including the body fat MR segmentation AI model ID, the clinical problem, the scene description, the model calling flow in actual work, and the model input and output data structures; the AI model return results are defined as the fat regions of different body parts. The method can completely calculate various quantitative parameters of adipose tissue, and both subjective and objective evaluation results are good and similar to previous research results. The measurements generated by the model are automatically filled into the structured report and, together with key images, can be intuitively displayed to the patient and the clinician. MR images from different body parts and different devices can be measured, and based on the current small-sample results the model shows high adaptability.

Description

Segmentation and quantitative measurement method based on deep learning MR body adipose tissue
Technical Field
The invention relates to the technical field of segmentation and quantitative measurement of body adipose tissue, and in particular to a deep-learning-based method for segmenting and quantitatively measuring body adipose tissue on MR images.
Background
Adipose tissue is one of the largest compartments of the human body. As the incidence of obesity and related metabolic abnormalities increases, clinical evaluation of hepatic metabolic abnormalities and diabetic conditions requires quantitative information on systemic fat. Beyond fat quantity, fat distribution is also associated with disease: there is clear evidence that excessive visceral fat deposition markedly increases a patient's risk of developing cardiovascular and metabolic diseases. The need to noninvasively segment and accurately measure adipose tissue volume is therefore becoming increasingly apparent. Many methods exist for measuring total and regional body fat, such as simple anthropometric measurement, bioelectrical impedance analysis, dual-energy X-ray absorptiometry, and ultrasound. MR offers a variety of scanning sequences that can selectively image adipose tissue and can noninvasively and accurately display regional fat distribution, giving it a significant advantage over other imaging examinations.
Although MR images display adipose tissue well, in clinical work adipose tissue volume cannot be accurately and quantitatively measured for every case, and the inability to generate fat volumes for each body part and region of a patient in image reports has limited the practical clinical application of MR fat quantification. In recent years, deep learning has advanced medical image processing greatly: various deep learning tools can be used for medical image segmentation, classification, object detection, and the like, and measurement results can be automatically transferred into a structured report to achieve report quality similar to a physician's, greatly improving the working efficiency of imaging physicians. This work therefore trains a 3D U-Net network model to segment body adipose tissue in MRI images and generate quantitative measurements, preliminarily exploring the feasibility of this technique in clinical application.
Disclosure of Invention
The invention aims to provide a deep-learning-based method for segmenting and quantitatively measuring body adipose tissue on MR images, in order to solve the problem that, although MR images display adipose tissue well, adipose tissue volume cannot be accurately and quantitatively measured for every case in clinical work, and fat volumes for each body part and region cannot be generated in image reports, which limits the practical clinical application of MR fat quantification.
In order to achieve the above purpose, the present invention provides the following technical solutions: the segmentation and quantitative measurement method based on the deep learning MR body adipose tissue comprises the following specific steps:
S1, establishing a fat segmentation model, and defining the use case: a user sample for researching and developing the body fat MR segmentation model is defined, including the body fat MR segmentation AI model ID, the clinical problem, the scene description, the model calling flow in actual work, and the model input and output data structures; the AI model return results are defined as the fat regions of different body parts, the total fat volume, the average fat volume, the subcutaneous-to-visceral fat ratio, and body diameter lines; data collection: consecutive case data are collected for segmentation model establishment and evaluation;
data labeling: the axial-plane reconstructed fat images are selected, the GRE DIXON format images are converted into NIfTI format, and the images are binarized by threshold segmentation to divide the MR image into fat and non-fat components, with the threshold segmentation parameter set to 0.2-0.4;
training the segmentation model: the hardware is an NVIDIA Tesla P100 GPU (16 GB); the software is Python 3.6, PyTorch 0.4.1, OpenCV, NumPy, and SimpleITK, with Adam as the training optimizer; the 67 cases of data are randomly divided into a training set, a tuning set, and a test set; the image preprocessing parameters are size = 96×256×256 (z, y, x) with automatic window width and window level;
S2, model evaluation: the objective evaluation index is the Dice similarity coefficient (DSC) value; subjective evaluation is performed separately on the manually labeled and model-predicted outputs of visceral fat and subcutaneous fat, with the aid of manual-label and model-prediction fill maps, manual-label/model-prediction overlap maps, and manual-label/model-prediction difference maps;
S3, quantitative information: the fat segmentation result is output automatically, and quantitative measurements of subcutaneous fat and visceral fat of the body are calculated through image processing, including the fat volume of each region, the average fat volume, the subcutaneous-to-visceral fat ratio, body diameter lines, and the like, with the results returned to a structured report;
S4, statistical methods: statistical analysis is performed with SPSS 20.0 and Prism 8 software; the subjective evaluation results of manually labeled and model-predicted output images are compared using the Wilcoxon signed-rank test; the quantitative measurements of the manually labeled and model-predicted segmentation results are compared using Pearson correlation analysis, Bland-Altman analysis, and intraclass correlation analysis; P < 0.05 is considered statistically significant.
Further, in step S1, the inclusion criteria are: chest, abdomen, and pelvic examinations completed in our hospital; axial scan images with reconstructed fat images. The exclusion criteria are: obvious structural damage visible on the image; obvious metal artifacts; obvious postoperative structural changes; poor image quality.
Further, in step S1, data labeling is performed on the axial MR images using ITK-SNAP software; the window width and level are manually adjusted to the optimal display level, adipose tissue on the image is divided into three regions (subcutaneous, musculoskeletal, and visceral), and subcutaneous fat and visceral fat are manually labeled to obtain the labels.
Further, in step S1, the image augmentation parameters include horizontal flipping, translation, and random noise.
Further, the DSC is a set-similarity metric used to calculate the similarity of two samples; it can be understood as the degree of overlap between the physician-labeled region and the model-predicted region.
Further, subjective evaluation means that an imaging physician compares the model segmentation results with the MR images and, with the aid of the manual-label and model-prediction fill maps, the manual-label/model-prediction overlap maps, and the manual-label/model-prediction difference maps, subjectively evaluates the manually labeled and model-predicted output maps of visceral fat and subcutaneous fat separately.
Further, subjective evaluation was performed on the fat segmentation results by the imaging physician, and the median of subjective scores of subcutaneous fat and visceral fat was 10.00 for model prediction and manual labeling.
Further, the body part fat regions are classified as subcutaneous fat and visceral fat.
Further, the DSC value ranges from 0 to 1, where 1 is the best segmentation result and 0 the worst; the specific calculation formula is DSC = 2|I1 ∩ I2| / (|I1| + |I2|).
The invention provides a deep-learning-based method for segmenting and quantitatively measuring body adipose tissue on MR images, with the following beneficial effects: various quantitative parameters of adipose tissue can be calculated completely, and both subjective and objective evaluation results are good and similar to previous research results. The measurements generated by the model are automatically filled into the structured report and, together with key images, can be intuitively displayed to the patient and the clinician. MR images from different body parts and different devices can be measured; based on the current small-sample results the model shows high adaptability, and the stability of the measurements is good. In the future, combining the fat segmentation AI with a body-part localization AI could easily enable individual calculation of the fat content of each body part in the images, providing a convenient and effective way for fat quantification and clinical disease typing. The application scenario of a future model can be not only whole-body MR imaging but also local images, and after training with more data the model is expected to be usable on other sequences with fat-image-like properties, such as T1WI. The fat quantification report can be applied both to dedicated fat-quantification MR examinations and to other MR examinations; since the model's processing of the images and automatic generation of the report require almost no extra effort from the physician, this additional information can easily be generated in all applicable examination items, providing a large amount of basic data for clinical evaluation and scientific research;
the threshold segmentation is used as a preliminary labeling tool, firstly, the image is binarized by using the threshold segmentation, and then, subcutaneous fat and visceral fat areas are labeled in a binary image by a doctor. And then, further dividing the fat of different parts by using a deep learning method, and dividing the fat region into different regions such as subcutaneous fat, visceral fat and the like. The function of marking by using threshold segmentation is to improve manual marking efficiency, and the function of deep learning is to classify different fat intervals. The image segmentation technology currently mainstream is mainly based on threshold values, regions, edges, segmentation methods of specific theory and the like. The image is divided into areas with different characteristics through an image segmentation technology to extract the region of interest, and clinical tasks can be directly completed in some cases. Previous researchers have tried to measure fat volume in CT images with a threshold segmentation method to achieve a certain effect, but the method has defects. MR images are rich in contrast, single tissue segmentation cannot be directly completed through a simple threshold segmentation method, but labeling can be performed on the basis of threshold segmentation, so that labeling efficiency can be improved, and model training is beneficial.
Drawings
FIG. 1 is a subjective evaluation chart of an MR body fat segmentation model based on a deep learning MR body fat tissue segmentation and quantitative measurement method of the invention;
FIG. 2 is a DSC diagram of an MR body fat segmentation model based on a deep learning MR body fat tissue segmentation and quantitative measurement method of the present invention;
FIG. 3 is a graph of the quantitative measurements output by MR body fat segmentation based on the segmentation and quantitative measurement method of deep learning MR body adipose tissue of the present invention.
Detailed Description
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples. The following examples are illustrative of the invention but are not intended to limit the scope of the invention.
As shown in fig. 1-3, the segmentation and quantitative measurement method based on deep learning MR body adipose tissue comprises the following specific steps:
S1, establishing a fat segmentation model, and defining the use case: a user sample for researching and developing the body fat MR segmentation model is defined, including the body fat MR segmentation AI model ID, the clinical problem, the scene description, the model calling flow in actual work, and the model input and output data structures; the AI model return results are defined as the fat regions of different body parts, divided into subcutaneous fat and visceral fat, plus the total fat volume, the average fat volume, the subcutaneous-to-visceral fat ratio, and body diameter lines; data collection: consecutive case data are collected for segmentation model establishment and evaluation, with the inclusion criteria: chest, abdomen, and pelvic examinations completed in our hospital; axial scan images with reconstructed fat images; and the exclusion criteria: obvious structural damage visible on the image; obvious metal artifacts; obvious postoperative structural changes; poor image quality;
data labeling: the axial-plane reconstructed fat images are selected and the GRE DIXON format images are converted into NIfTI format; the images are first binarized by threshold segmentation (threshold parameter 0.2-0.4), dividing the MR image into fat and non-fat components; data labeling is then performed on the axial MR images using ITK-SNAP software, the window width and level are manually adjusted to the optimal display level, adipose tissue on the image is divided into three regions (subcutaneous, musculoskeletal, and visceral), and subcutaneous fat and visceral fat are manually labeled to obtain the labels (see the code sketch of this step below);
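The binarization step can be sketched as follows with the SimpleITK package named in the training environment. This is a minimal sketch, not the patented implementation: the file names, the rescaling to [0, 1], and the mid-range threshold of 0.3 are illustrative assumptions.

    import SimpleITK as sitk

    # Load the reconstructed fat image (GRE DIXON fat map after NIfTI
    # conversion). The file name is illustrative, not from the patent.
    img = sitk.ReadImage("fat_axial.nii.gz", sitk.sitkFloat32)

    # Rescale intensities to [0, 1] so the 0.2-0.4 threshold range applies.
    norm = sitk.RescaleIntensity(img, outputMinimum=0.0, outputMaximum=1.0)

    # Binarize: voxels above the chosen threshold are fat, the rest non-fat.
    threshold = 0.3  # assumed mid-range value within the stated 0.2-0.4
    fat_mask = sitk.BinaryThreshold(norm, lowerThreshold=threshold,
                                    upperThreshold=1.0, insideValue=1,
                                    outsideValue=0)
    sitk.WriteImage(fat_mask, "fat_mask.nii.gz")

The resulting binary mask is what the physician then partitions into subcutaneous and visceral regions in ITK-SNAP, as described above.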
training the segmentation model: the hardware is an NVIDIA Tesla P100 GPU (16 GB); the software is Python 3.6, PyTorch 0.4.1, OpenCV, NumPy, and SimpleITK, with Adam as the training optimizer; the 67 cases of data are randomly divided into a training set, a tuning set, and a test set; the image preprocessing parameters include size = 96×256×256 (z, y, x) with automatic window width and window level, and the image augmentation parameters include horizontal flipping, translation, and random noise (a training sketch follows below);
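A minimal sketch of the training setup on the PyTorch stack listed above. The patent names a 3D U-Net and the Adam optimizer but does not give the architecture, loss function, or learning rate, so the tiny stand-in network, the soft Dice loss, the learning rate, and the small illustrative patch size are all assumptions.

    import torch
    import torch.nn as nn

    # Tiny stand-in for the 3D U-Net named in the method; the real
    # network's depth and channel counts are not given in the patent.
    class TinyUNet3D(nn.Module):
        def __init__(self, in_ch=1, out_ch=3):  # classes: background,
            super().__init__()                  # subcutaneous, visceral fat
            self.enc = nn.Sequential(
                nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(inplace=True))
            self.head = nn.Conv3d(32, out_ch, 1)

        def forward(self, x):
            return self.head(self.enc(x))

    def soft_dice_loss(logits, target, eps=1e-6):
        # Soft Dice over one-hot targets, mirroring the DSC used in S2.
        probs = torch.softmax(logits, dim=1)
        inter = (probs * target).sum(dim=(2, 3, 4))
        denom = probs.sum(dim=(2, 3, 4)) + target.sum(dim=(2, 3, 4))
        return 1.0 - (2.0 * inter / (denom + eps)).mean()

    model = TinyUNet3D()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr assumed

    # One training step on a small illustrative patch; the method itself
    # preprocesses whole volumes to 96x256x256 (z, y, x).
    x = torch.randn(1, 1, 32, 64, 64)  # stand-in input volume
    y = torch.zeros(1, 3, 32, 64, 64)  # stand-in one-hot label map
    y[:, 0] = 1.0                      # all-background dummy label
    loss = soft_dice_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()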
S2, model evaluation: the objective evaluation index is the Dice similarity coefficient (DSC) value; subjective evaluation means that an imaging physician compares the model segmentation results with the MR images and, with the aid of the manual-label and model-prediction fill maps, the manual-label/model-prediction overlap maps, and the manual-label/model-prediction difference maps, subjectively evaluates the manually labeled and model-predicted output maps of visceral fat and subcutaneous fat separately; in the subjective evaluation of the fat segmentation results by the imaging physician, the median subjective score for both subcutaneous fat and visceral fat was 10.00 for both model prediction and manual labeling; the DSC is a set-similarity metric used to calculate the similarity of two samples, understood as the degree of overlap between the physician-labeled region and the model-predicted region; its value ranges from 0 to 1, with 1 the best segmentation result and 0 the worst, and the specific calculation formula is DSC = 2|I1 ∩ I2| / (|I1| + |I2|) (a code sketch of this computation follows below);
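A minimal sketch of the DSC computation defined above, as a plain NumPy function over two binary masks; the convention of returning 1.0 when both masks are empty is an assumption.

    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        # DSC = 2|I1 n I2| / (|I1| + |I2|) for two binary masks.
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        denom = a.sum() + b.sum()
        if denom == 0:
            return 1.0  # both masks empty: treated as perfect agreement
        return 2.0 * np.logical_and(a, b).sum() / denom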
S3, quantitative information: the fat segmentation result is output automatically, and quantitative measurements of subcutaneous fat and visceral fat of the body are calculated through image processing, including the fat volume of each region, the average fat volume, the subcutaneous-to-visceral fat ratio, body diameter lines, and the like, with the results returned to a structured report (a sketch of the volume calculation follows below);
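A minimal sketch of deriving the volume-based measurements in step S3 from a predicted label map. The label coding (1 = subcutaneous, 2 = visceral) and the reading of "average fat volume" as mean fat volume per slice are assumptions; body diameter lines are omitted.

    import numpy as np
    import SimpleITK as sitk

    def fat_measurements(label_path):
        # Label coding assumed: 1 = subcutaneous fat, 2 = visceral fat.
        label = sitk.ReadImage(label_path)
        arr = sitk.GetArrayFromImage(label)  # (z, y, x)
        voxel_ml = float(np.prod(label.GetSpacing())) / 1000.0  # mm^3 -> mL
        sat_ml = float((arr == 1).sum()) * voxel_ml
        vat_ml = float((arr == 2).sum()) * voxel_ml
        total = sat_ml + vat_ml
        return {
            "subcutaneous_fat_ml": sat_ml,
            "visceral_fat_ml": vat_ml,
            "total_fat_ml": total,
            "mean_fat_per_slice_ml": total / arr.shape[0],
            "sat_to_vat_ratio": sat_ml / vat_ml if vat_ml else float("nan"),
        }

These values are the kind of measurements that would then be filled into the structured report.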
S4, statistical methods: statistical analysis is performed with SPSS 20.0 and Prism 8 software; the subjective evaluation results of manually labeled and model-predicted output images are compared using the Wilcoxon signed-rank test; the quantitative measurements of the manually labeled and model-predicted segmentation results are compared using Pearson correlation analysis, Bland-Altman analysis, and intraclass correlation analysis; P < 0.05 is considered statistically significant (a sketch of these comparisons follows below).
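A minimal sketch of the paired comparisons named in step S4, using SciPy in place of SPSS/Prism; the numeric arrays are illustrative stand-ins rather than study data, and the intraclass correlation (typically computed with a dedicated statistics package) is omitted.

    import numpy as np
    from scipy import stats

    # Paired measurements from manual labeling and model prediction;
    # these numbers are illustrative stand-ins, not study data.
    manual = np.array([1520.0, 980.5, 2010.2, 1340.8, 1766.4])
    model = np.array([1498.3, 995.1, 1987.6, 1355.0, 1790.2])

    # Wilcoxon signed-rank test for paired results.
    w_stat, w_p = stats.wilcoxon(manual, model)

    # Pearson correlation between manual and model-predicted measurements.
    r, r_p = stats.pearsonr(manual, model)

    # Bland-Altman statistics: bias and 95% limits of agreement.
    diff = model - manual
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)

    print(f"Wilcoxon p={w_p:.3f}; Pearson r={r:.3f} (p={r_p:.3f})")
    print(f"Bland-Altman bias={bias:.2f} mL, LoA {loa[0]:.2f} to {loa[1]:.2f} mL")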
Examples: patient basic information: of the 53 patients in the group, 33 were men and 20 were women, with an average age of (64.01 ± 13.98) years. The scanning devices were a Siemens 1.5T MR and a Philips 3.0T MR, generating fat images with the GRE DIXON sequence. The chest : abdomen : pelvis image ratios in the training set, the tuning set, and the test set were 14:22:16, 1:1:4, and 2:3:4, respectively;
as shown in fig. 2, the average DSC values of the chest, abdomen and pelvis were all high.
The fat segmentation results were subjectively evaluated by the imaging physician. The median subjective score for both subcutaneous fat and visceral fat was 10.00, with no statistical difference between the subjective evaluations of the two methods' fat segmentation results (P > 0.05). The proportion of full scores (10 points) for both the subcutaneous fat and visceral fat scores exceeded 77.6% for both model prediction and manual labeling;
the 94%/85.1% image in the model predicted subcutaneous fat/visceral fat segmentation result was rated as coverage 3 minutes (covering almost the whole area), the 100%/100% image was rated as out-of-rate 3 minutes (almost not out of range), the 92.5%/91% image was rated as edge fit 3 minutes (very good), the 7.5%/1.5% subcutaneous fat/visceral fat segmentation image appeared to contain other interstitial adipose tissue, only 1 visceral fat appeared to contain non-adipose structure (artifact), and the manually noted segmentation result was similar to the model predicted result (P > 0.05).
In summary, applying deep-learning-based methods to automatically segment and quantitatively measure body adipose tissue on MR images is technically feasible. The model processes data with high repeatability and stability, is expected to be used for automatically generating fat quantification reports in clinical work, and can reduce physicians' measurement time.
The embodiments of the invention have been presented for purposes of illustration and description, and are not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (9)

1. The segmentation and quantitative measurement method based on the deep learning MR body adipose tissue is characterized by comprising the following specific steps:
S1, establishing a fat segmentation model, and defining the use case: a user sample for researching and developing the body fat MR segmentation model is defined, including the body fat MR segmentation AI model ID, the clinical problem, the scene description, the model calling flow in actual work, and the model input and output data structures; the AI model return results are defined as the fat regions of different body parts, the total fat volume, the average fat volume, the subcutaneous-to-visceral fat ratio, and body diameter lines; data collection: consecutive case data are collected for segmentation model establishment and evaluation;
data labeling: the axial-plane reconstructed fat images are selected, the GRE DIXON format images are converted into NIfTI format, and the images are binarized by threshold segmentation to divide the MR image into fat and non-fat components, with the threshold segmentation parameter set to 0.2-0.4;
training the segmentation model: the hardware is an NVIDIA Tesla P100 GPU (16 GB); the software is Python 3.6, PyTorch 0.4.1, OpenCV, NumPy, and SimpleITK, with Adam as the training optimizer; the 67 cases of data are randomly divided into a training set, a tuning set, and a test set; the image preprocessing parameters are size = 96×256×256 (z, y, x) with automatic window width and window level;
S2, model evaluation: the objective evaluation index is the Dice similarity coefficient (DSC) value; subjective evaluation is performed separately on the manually labeled and model-predicted outputs of visceral fat and subcutaneous fat, with the aid of manual-label and model-prediction fill maps, manual-label/model-prediction overlap maps, and manual-label/model-prediction difference maps;
S3, quantitative information: the fat segmentation result is output automatically, and quantitative measurements of subcutaneous fat and visceral fat of the body are calculated through image processing, including the fat volume of each region, the average fat volume, the subcutaneous-to-visceral fat ratio, body diameter lines, and the like, with the results returned to a structured report;
S4, statistical methods: statistical analysis is performed with SPSS 20.0 and Prism 8 software; the subjective evaluation results of manually labeled and model-predicted output images are compared using the Wilcoxon signed-rank test; the quantitative measurements of the manually labeled and model-predicted segmentation results are compared using Pearson correlation analysis, Bland-Altman analysis, and intraclass correlation analysis; P < 0.05 is considered statistically significant.
2. The method for segmentation and quantitative measurement of adipose tissue based on deep learning MR body according to claim 1, wherein in step S1, the inclusion criteria are: completing chest, abdomen and pelvic cavity examination in the hospital; an axial scan image and reconstructing a fat image; the exclusion criteria were: obvious structural damage is seen on the image; has obvious metal artifact; obvious structural changes are caused after operation; the image quality is poor.
3. The method for segmenting and quantitatively measuring adipose tissue based on deep learning MR body according to claim 1, wherein in step S1, the data labeling is performed on the axial MR image by using ITK-SNAP software, the window width is manually adjusted to the optimal display level, the adipose tissue on the image is segmented into 3 areas of subcutaneous, musculoskeletal and visceral, and the subcutaneous fat and visceral fat are manually labeled to obtain the label.
4. The method for segmentation and quantitative measurement of adipose tissue based on deep learning MR body according to claim 1, wherein the image amplification parameters in step S1 include horizontal flip, translation, random noise.
5. The method for segmentation and quantitative measurement of adipose tissue based on deep learning MR body according to claim 1, wherein DSC is a set similarity measure for calculating the similarity of two samples, which is understood as the overlap ratio of the physician labeling region and the model prediction region.
6. The method for segmenting and quantitatively measuring adipose tissue based on deep learning MR body according to claim 1, wherein the subjective evaluation is performed by comparing the model segmentation result with the MR image by an imaging physician, and performing subjective evaluation on the manual labeling of visceral fat and subcutaneous fat and the model prediction output result map respectively by combining the manual labeling and model prediction filling map, the manual labeling-model prediction overlap map and the manual labeling-model prediction difference map.
7. The method for segmenting and quantitatively measuring adipose tissue based on deep learning MR body according to claim 1, wherein the subjective evaluation of the fat segmentation result is performed by imaging physician, and the median of subjective scores of subcutaneous fat and visceral fat is 10.00 for model prediction and manual labeling.
8. The method for segmentation and quantitative measurement of adipose tissue based on deep learning MR body according to claim 1, wherein the body part fat region is divided into subcutaneous fat, visceral fat.
9. The method for segmenting and quantitatively measuring adipose tissue based on deep learning MR body according to claim 5, wherein the DSC value ranges from 0 to 1, with 1 being the best segmentation result and 0 the worst; the specific calculation formula is DSC = 2|I1 ∩ I2| / (|I1| + |I2|).
CN202311323381.2A 2023-10-13 2023-10-13 Segmentation and quantitative measurement method based on deep learning MR body adipose tissue Pending CN117455931A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311323381.2A CN117455931A (en) 2023-10-13 2023-10-13 Segmentation and quantitative measurement method based on deep learning MR body adipose tissue

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311323381.2A CN117455931A (en) 2023-10-13 2023-10-13 Segmentation and quantitative measurement method based on deep learning MR body adipose tissue

Publications (1)

Publication Number Publication Date
CN117455931A 2024-01-26

Family

ID=89579002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311323381.2A Pending CN117455931A (en) 2023-10-13 2023-10-13 Segmentation and quantitative measurement method based on deep learning MR body adipose tissue

Country Status (1)

Country Link
CN (1) CN117455931A (en)

Similar Documents

Publication Publication Date Title
US10028700B2 (en) Method and system for non-invasive determination of human body fat
WO2020164468A1 (en) Medical image segmentation method, image segmentation method, related device and system
CN101626726B (en) Identification and analysis of lesions in medical imaging
CN113781439B (en) Ultrasonic video focus segmentation method and device
CN110292396B (en) Predictive use of quantitative imaging
US8073214B2 (en) Computer aided lesion assessment in dynamic contrast enhanced breast MRI images
CN114782307A (en) Enhanced CT image colorectal cancer staging auxiliary diagnosis system based on deep learning
Balasooriya et al. Intelligent brain hemorrhage diagnosis using artificial neural networks
Kim et al. Computerized scheme for assessing ultrasonographic features of breast masses1
WO2009048536A1 (en) Method and system for automatic classification of lesions in breast mri
US7353117B2 (en) Computation of wall thickness
Sayed et al. Automatic classification of breast tumors using features extracted from magnetic resonance images
Molinari et al. Accurate and automatic carotid plaque characterization in contrast enhanced 2-D ultrasound images
CN114943688A (en) Method for extracting interest region in mammary gland image based on palpation and ultrasonic data
Zhou et al. Sonomyography
WO2011139232A1 (en) Automated identification of adipose tissue, and segmentation of subcutaneous and visceral abdominal adipose tissue
CN117455931A (en) Segmentation and quantitative measurement method based on deep learning MR body adipose tissue
CN114708283A (en) Image object segmentation method and device, electronic equipment and storage medium
Wald et al. Automated quantification of adipose and skeletal muscle tissue in whole-body MRI data for epidemiological studies
CN114782375B (en) Bone density measuring method, device and equipment
US20090069669A1 (en) Efficient Features for Detection of Motion Artifacts in Breast MRI
US20170178338A1 (en) Identification and analysis of lesions in medical imaging
Simion et al. A Non-invasive Diagnosis Tool Based on Hepatorenal Index For Hepatic Steatosis
Inoue et al. Automated Discrimination of Tissue Boundaries using Ultrasound Images of" Ubiquitous Echo"
CN117876407A (en) Multi-parameter ultrasonic accurate prediction system and method for malignant risk of breast small-volume tumor

Legal Events

Date Code Title Description
PB01 Publication