CN114141336A - Method, system, device and storage medium for marking human body components based on MRI

Method, system, device and storage medium for marking human body components based on MRI

Info

Publication number
CN114141336A
CN114141336A (application CN202111455156.5A)
Authority
CN
China
Prior art keywords
image
mri
target tissue
tissue organ
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111455156.5A
Other languages
Chinese (zh)
Inventor
张福生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202111455156.5A priority Critical patent/CN114141336A/en
Publication of CN114141336A publication Critical patent/CN114141336A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/20: ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0033: Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/05: Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B 5/055: Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/48: Other medical applications
    • A61B 5/4869: Determining body composition
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/48: Other medical applications
    • A61B 5/4869: Determining body composition
    • A61B 5/4872: Body fat
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10088: Magnetic resonance imaging [MRI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Theoretical Computer Science (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Quality & Reliability (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses an MRI-based method, system, device and storage medium for marking human body composition, wherein the method comprises the following steps: acquiring a magnetic resonance imaging (MRI) image of a designated position; preprocessing the MRI image to obtain a preprocessed target tissue organ image sequence; performing localization and identification on the target tissue organ image sequence in the MRI sagittal position to obtain a sagittal positioning image of the target tissue organ; and segmenting the sagittal positioning image of the target tissue organ to obtain an MRI image segmentation result of the target tissue organ, then marking on the basis of that segmentation result to derive the various body composition parameters of the target tissue organ. The invention can quickly and accurately measure the human body composition parameter information of medical images, thereby improving the processing efficiency of medical images for body composition and reducing labour cost.

Description

Method, system, device and storage medium for marking human body components based on MRI
Technical Field
The invention relates to the technical field of medical image processing, in particular to a human body composition marking method, system, equipment and storage medium based on MRI.
Background
Two health challenges facing today's society are obesity and ageing-related diseases, the prevalence of both of which increases year by year. Obesity is closely related to type 2 diabetes, cardiovascular disease, neurovascular disease and certain cancers, and leads to increased mortality and decreased quality of life. Likewise, sarcopenia, i.e. the loss of muscle mass with age or with conditions such as trauma and osteoarthritis, is closely associated with reduced quality of life and an increased rate of disability, as are diseases involving a local or total loss of muscle mass, such as muscular dystrophies, spinal cord injuries and sports injuries. Accurate measurement and quantitative and qualitative analysis of fat and muscle in vivo are therefore very important.
Magnetic resonance imaging (MRI) is an emerging reference standard for human body composition analysis: its high soft-tissue contrast allows the content and distribution of human fat and muscle to be quantified accurately and enables precise segmentation of both. Manual or semi-automatic segmentation is time-consuming and labour-intensive and its standards are poorly unified, which limits the clinical practicality of MRI-based composition measurement, whereas automatic segmentation based on deep learning is standardised and fast. Existing methods for automatically identifying and quantifying body composition either segment tissues and organs over the whole body, are limited to a specific body region, or quantify fat or muscle tissue separately; none provides comprehensive, detailed, region-by-region segmentation of both fat and muscle.
The main evaluation parameters of body composition segmentation include: trunk and limb muscles or muscle groups; visceral adipose tissue (VAT); abdominal subcutaneous adipose tissue (ASAT); inter-muscular adipose tissue (IMAT); subcutaneous adipose tissue (SAT), including the subcutaneous fat of the limbs (SFAT); the cross-sectional areas of these tissues; the area index (MI); the ratio of visceral to subcutaneous fat area (V/S); the proton density fat fraction (PDFF) of the liver and muscle; the muscle subcutaneous-fat signal ratio; and the muscle fat content (FC).
The muscle tissue is divided, according to the scanned region, into the thoracic-abdominal, hip, thigh, calf, upper-arm and forearm muscles and muscle groups.
1. The thoracic and abdominal muscles and muscle groups are as follows:
Anterior muscle group: pectoralis major, pectoralis minor and serratus anterior.
Superficial layer of the back muscles: sternocleidomastoid, splenius, scapular post, deltoid, external oblique, latissimus dorsi, teres major, teres minor and trapezius;
Middle layer of the back muscles: rhomboid minor, splenius, levator scapulae, rhomboid major, teres minor, infraspinatus, teres major, serratus posterior superior, serratus posterior inferior, latissimus dorsi and supraspinatus;
Dorsal spinal muscle group: serratus posterior superior, splenius, rectus capitis posterior major, semispinalis capitis, spinalis thoracis, longissimus thoracis, iliocostalis lumborum, transversus abdominis, the erector spinae group, external oblique, internal oblique, serratus posterior inferior, the intercostal muscles and the cervical spinal muscles;
Deep back muscles: semispinalis capitis, rectus capitis posterior major, rectus capitis posterior minor, obliquus capitis superior, obliquus capitis inferior, pectoralis major, levatores costarum, interspinales, intertransversarii, quadratus lumborum, iliocostalis, multifidus, semispinalis thoracis and the intercostal muscles;
Abdominal cross-section muscles or muscle groups: psoas major, quadratus lumborum, erector spinae, multifidus, the dorsal paraspinal muscles, rectus abdominis, the lateral abdominal muscles and skeletal muscle.
2. The hip muscles and muscle groups are as follows:
Gluteus maximus (at the level of the greater trochanter of the femur), gluteus medius and gluteus minimus (at the level of the third sacral vertebra, S3).
3. The thigh muscles and muscle groups are as follows:
Anterior muscle group: quadriceps femoris (rectus femoris, vastus lateralis, vastus intermedius and vastus medialis), sartorius and tensor fasciae latae;
Medial muscle group: the adductors (pectineus, adductor magnus, adductor longus, adductor brevis and gracilis);
posterior muscle group: biceps femoris, semitendinosus, semimembranosus.
4. The calf muscles and muscle groups are as follows:
Anterior muscle group: tibialis anterior, extensor hallucis longus and extensor digitorum longus;
Lateral muscle group: peroneus longus and peroneus brevis;
Posterior muscle group: gastrocnemius, soleus, popliteus, flexor digitorum longus, tibialis posterior and flexor hallucis longus.
5. The upper arm muscles and muscle groups are as follows:
Anterior group of the upper arm: biceps brachii, brachialis and coracobrachialis;
Posterior group of the upper arm: triceps brachii.
6. The forearm muscles and muscle groups are as follows:
Anterior group of the forearm: brachioradialis, pronator teres, flexor carpi radialis, palmaris longus, flexor carpi ulnaris, flexor digitorum superficialis, flexor digitorum profundus, flexor pollicis longus and pronator quadratus;
Posterior group of the forearm: extensor carpi radialis longus, extensor carpi radialis brevis, extensor digitorum, extensor carpi ulnaris, supinator, extensor pollicis longus, extensor pollicis brevis and extensor indicis.
VAT is divided into intraperitoneal adipose tissue (IPAT) and retroperitoneal adipose tissue (RPAT); ASAT can be divided into deep adipose tissue (Deep-ASAT, D-ASAT) and superficial adipose tissue (Superficial-ASAT, S-ASAT).
Research on the body composition of large populations in China is still at an early stage and has not been carried out comprehensively; analysis of large-scale, multi-region, multi-centre populations is necessary, so the data need to be collected and analysed automatically by artificial-intelligence means. At present, human body composition marking with MRI examination is performed mainly by manual or semi-manual methods, which are prone to fatigue, show poor consistency of standards across regions, and involve heavy data-quantification work; as a result the marking work progresses slowly, is error-prone and inefficient, which hinders the development of marking and measurement work. A fully automated tool requires no human intervention, improving efficiency and eliminating the variability of subjective human measurement.
Disclosure of Invention
The invention aims to provide an MRI-based human body composition marking method, system, device and storage medium that solve the problems in the prior art: they can quickly and accurately measure and store the body-composition-related parameters of medical images, improve the processing efficiency of medical images for body composition, and reduce labour cost.
In order to achieve the purpose, the invention provides the following scheme: the invention provides a human body component marking method based on MRI, which comprises the following steps:
acquiring a Magnetic Resonance Imaging (MRI) image at a specified position;
preprocessing the MRI image to obtain a preprocessed target tissue organ image sequence;
positioning and identifying the image sequence of the target tissue organ in the MRI sagittal position to obtain a positioning image of the sagittal position of the target tissue organ;
and segmenting the sagittal positioning image of the target tissue organ to obtain an MRI image segmentation result of the target tissue organ, and marking based on the MRI image segmentation result, thereby obtaining the various body composition parameters of the target tissue organ.
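The four claimed steps can be sketched as a simple processing chain. This is an illustrative outline only; the function names are placeholders, not identifiers from the patent:

```python
def mark_body_composition(mri_image):
    """Sketch of the claimed flow; names are illustrative placeholders."""
    seq = preprocess(mri_image)          # preprocessing (resample, crop, normalise)
    sag = localize_sagittal(seq)         # sagittal localization and identification
    seg = segment(sag)                   # segmentation of the sagittal positioning image
    return compute_parameters(seg)       # marking: derive body composition parameters

# Stub stages standing in for the real processing; each tags its output.
def preprocess(img):        return {"data": img, "stage": "preprocessed"}
def localize_sagittal(x):   return {**x, "stage": "localized"}
def segment(x):             return {**x, "stage": "segmented"}
def compute_parameters(x):  return {**x, "stage": "marked"}
```

Each stub would be replaced by the corresponding model or routine described below.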
Optionally, preprocessing the MRI image to obtain the preprocessed target tissue organ image sequence comprises:
resampling the MRI image to obtain a resampled MRI image;
extracting a target region of the resampled MRI image to obtain a target tissue organ region image;
and carrying out normalization processing on the target tissue organ region image to obtain a target tissue organ image sequence.
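As an illustration only (the patent does not prescribe a particular implementation), the three optional preprocessing steps can be sketched with numpy as nearest-neighbour resampling, region extraction and zero-mean, unit-variance normalisation:

```python
import numpy as np

def resample_nn(slice_2d, factor):
    """Nearest-neighbour resampling at a fixed interval (illustrative choice)."""
    h, w = slice_2d.shape
    ys = (np.arange(int(h * factor)) / factor).astype(int)
    xs = (np.arange(int(w * factor)) / factor).astype(int)
    return slice_2d[np.ix_(ys, xs)]

def crop_roi(slice_2d, y0, y1, x0, x1):
    """Extract the target tissue/organ region from the resampled slice."""
    return slice_2d[y0:y1, x0:x1]

def normalize(slice_2d, eps=1e-8):
    """Zero-mean, unit-variance intensity normalisation."""
    return (slice_2d - slice_2d.mean()) / (slice_2d.std() + eps)

def preprocess_series(slices, factor, roi):
    """Apply resample -> crop -> normalise to each slice in the sequence."""
    return [normalize(crop_roi(resample_nn(s, factor), *roi)) for s in slices]
```

The resampling interval, region bounds and normalisation scheme here are assumptions; the patent only names the three operations.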
Optionally, performing localization and identification on the target tissue organ image sequence in the MRI sagittal position to obtain the sagittal positioning image of the target tissue organ comprises:
constructing and training an image positioning model, wherein the image positioning model is used for carrying out MRI sagittal position positioning identification on the target tissue organ image sequence, and comprises an input layer, a first convolution layer, a first maximum value pooling layer, a second convolution layer, a second maximum value pooling layer, a third convolution layer, a third maximum value pooling layer, a fourth convolution layer, a fourth maximum value pooling layer, a full-connection layer and an output layer which are sequentially connected;
and based on the trained image positioning model, positioning and identifying the image sequence of the target tissue organ in the MRI sagittal position to obtain a positioning image of the sagittal position of the target tissue organ.
Optionally, the number of the first convolutional layers is 2, the number of the second convolutional layers is 2, the number of the third convolutional layers is 2, the number of the fourth convolutional layers is 1, and the number of the fully-connected layers is 2.
Optionally, segmenting the sagittal positioning image of the target tissue organ to obtain the MRI image segmentation result of the target tissue organ comprises:
constructing and training an image segmentation model, wherein the image segmentation model is used for segmenting a sagittal positioning image of the target tissue organ, and a neural network is adopted in the image segmentation model;
and inputting the sagittal positioning image of the target tissue organ into the trained image segmentation model, and performing segmentation processing on the sagittal positioning image of the target tissue organ at the axial level to obtain an MRI image segmentation result of the target tissue organ, wherein the MRI image segmentation result comprises a plurality of MRI parameters.
Optionally, the image segmentation model comprises an input layer, a forward segmentation sub-network, a backward segmentation sub-network, a convolutional layer and a softmax layer which are connected in sequence; wherein the forward partitioning sub-network comprises a convolution residual module-pooling layer pair; the inverse partitioning sub-network includes a convolution residual module-inverse pooling layer pair.
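The softmax layer at the end of the segmentation model assigns each pixel a class probability, from which a label map follows by taking the most probable class. A minimal numpy sketch of just that final step (illustrative; it does not reproduce the patent's network):

```python
import numpy as np

def softmax(logits, axis=0):
    """Numerically stable softmax over the class axis."""
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def label_map(logits):
    """Per-pixel class assignment from the softmax output.

    logits: (classes, H, W) array, e.g. background / muscle / fat
    (the class set here is an assumed example)."""
    return softmax(logits, axis=0).argmax(axis=0)
```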
Optionally, the plurality of constitutional parameters of the target tissue organ include muscle or muscle group at a designated position, area of fat tissue in different regions, area index, ratio of visceral fat to subcutaneous fat area, liver and muscle proton density fat fraction, muscle subcutaneous fat signal ratio, and muscle fat content.
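From a labelled segmentation mask, several of the listed parameters follow from simple pixel arithmetic. The formulas below use common conventions (area = pixel count × pixel size; area index = area divided by height squared) and are an assumption, since the patent names the parameters but does not spell out their formulas:

```python
import numpy as np

def region_area_cm2(labels, label, pixel_mm=(1.0, 1.0)):
    """Cross-sectional area of one labelled tissue, in cm^2."""
    n = int((labels == label).sum())
    return n * pixel_mm[0] * pixel_mm[1] / 100.0   # mm^2 -> cm^2

def area_index(area_cm2, height_m):
    """Area index: area normalised by height squared (assumed convention)."""
    return area_cm2 / height_m ** 2

def vs_ratio(vat_cm2, sat_cm2):
    """Ratio of visceral to subcutaneous fat area (V/S)."""
    return vat_cm2 / sat_cm2
```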
There is also provided an MRI-based body composition labeling system, the system comprising:
the preprocessing module is used for preprocessing the acquired Magnetic Resonance Imaging (MRI) image at the designated position to obtain a preprocessed target tissue organ image sequence;
the positioning identification module is used for positioning and identifying the target tissue organ image sequence in the MRI sagittal position to obtain a target tissue organ sagittal position positioning image;
and the image segmentation module is used for segmenting the sagittal positioning image of the target tissue organ to obtain an MRI image segmentation result of the target tissue organ, and marking the MRI image segmentation result so as to obtain the various body composition parameters of the target tissue organ.
There is also provided an MRI-based body composition tagging device comprising a processor and a memory, said memory storing machine executable instructions executable by said processor, said processor executing said machine executable instructions to implement an MRI-based body composition tagging method.
An MRI-based body component tagging storage medium is also provided, the storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement an MRI-based body component tagging method.
The invention discloses the following technical effects:
the invention preprocesses the obtained target MRI image to obtain a preprocessed constitution component image sequence, then carries out positioning identification on the constitution component image sequence in the MRI sagittal position to obtain a sagittal position positioning image of the target constitution component (including muscles or muscle groups at a designated position, adipose tissues in different regions, liver and other tissue organs), and carries out segmentation processing on the sagittal position positioning image of the target tissue in the axial level to obtain a marking result of the MRI image of the tissue organs such as the muscles or muscle groups at the designated position, adipose tissues in different regions, liver and the like, and the result comprises a plurality of target parameters such as cross sectional area, area index, ratio of visceral fat to subcutaneous fat area (V/S), proton density fat fraction of liver and muscle, signal ratio of muscle subcutaneous fat, muscle fat content and other parameters of different constitution components, the human body composition parameter information identification measurement of the MRI medical image can be quickly and accurately realized, so that the processing efficiency of the human body composition medical image (image) is improved, and the labor cost is reduced.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of a human body composition marking method based on MRI provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a detailed structure of an image localization model network according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a specific image segmentation model according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an MRI-based body composition marker system according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a body composition marking apparatus based on MRI according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Considering that the existing methods for marking human body composition with MRI examination rely mainly on manual or semi-manual work, which is prone to fatigue, shows poor consistency of standards across regions, involves heavy data-quantification work, and is therefore unsuited to large-scale, multi-centre marking and measurement, embodiments of the present invention provide a body composition image marking method, system, electronic device and machine storage medium that can quickly and accurately measure the body composition segmentation parameters of MRI medical images, thereby improving the processing efficiency of medical images of tissues and organs such as muscles or muscle groups, adipose tissues in different regions and the liver, and reducing labour cost.
The embodiment of the invention provides a human body component marking method based on MRI, which comprises the following steps as shown in figure 1:
s100, acquiring a Magnetic Resonance Imaging (MRI) image of the designated position.
The target tissue organ at the designated position is the object on which human body composition MRI image marking is performed.
The body composition MRI images are those acquired by magnetic resonance imaging examination, and the MRI images of the target tissue organ include, but are not limited to, originally acquired medical images of muscles or muscle groups, adipose tissues in different regions, and organs such as the liver.
S200, preprocessing the MRI image of the target tissue organ to obtain a preprocessed target tissue organ image sequence, namely an image sequence of tissues and organs such as muscles or muscle groups, fat tissues in different regions, livers and the like.
In order to ensure the accuracy of the subsequent localization, identification and segmentation of the body composition MRI image, the preprocessing operation may comprise data sampling of the acquired image and extraction of the region containing the area to be processed, so that the body composition image sequence obtained after preprocessing supports accurate localization and segmentation.
The human body composition image sequence is a tomographic image sequence, namely the tomographic image after the preprocessing operation. The image sequence may be a slice sequence in an MRI tomographic image, and the slice sequence in the MRI tomographic image may include a sequence of medical images with a plurality of slice intervals, a plurality of slice numbers, and a plurality of MRI resolutions.
In order to ensure that the quality of the input to the localization neural network meets the requirements of localization and segmentation, the acquired MRI image of the target tissue organ is first preprocessed to obtain sequences of images of muscles or muscle groups, adipose tissues in different regions, the liver and so on. In a specific implementation, steps S201-S203 are used:
s201, resampling the human body component MRI image to obtain a resampled MRI image; the MRI images include a plurality of image images at the vertebral body level and at the limb level.
During resampling, a preselected sampling interval may be used, so as to ensure the data-sampling quality of the MRI images of muscles or muscle groups, adipose tissues in different areas, the liver and so on.
The MRI examination may be thoracic, lumbar, abdominal, pelvic or limb MRI, involving a plurality of vertebral bodies and different limb levels, and may cover T1-L1 (thoracolumbar), T12-L2 (thoracolumbar), L3-L5 (lumbar), S1 (sacral) to the third coccygeal vertebra, the hip from S1 to 5 cm below the femoral tuberosity, and the target levels of the thigh and calf.
S202, extracting the region of interest of the resampled MRI image to obtain a body composition region image; the body composition region image comprises the muscles or muscle groups to be segmented, adipose tissues in different regions, the liver and other regions;
the region of interest extraction may be a way to select a fixed slice region in different types of MRI images, and by performing the extraction of the slice region, it may be ensured that the human composition region image includes a human composition region to be segmented. The human body component region to be segmented needs to contain an accurate segmentation area for segmentation so that a complete and accurate segmentation result can be obtained during subsequent segmentation.
S203, normalizing the body composition region images to obtain the image sequences of muscles or muscle groups, adipose tissues in different regions, the liver and so on, i.e. the target tissue organ image sequences.
In addition, this embodiment provides another specific example of body composition MRI image preprocessing. Taking a lumbar MRI image as an example, the acquired image is first processed and the reference axis is determined. Then, starting from the normalized DICOM image, the spine is detected and segmented, the vertebral bodies are segmented into independent units, the longitudinal cranio-caudal axis is taken as the direction, and a scaled reference axis is formed that incorporates a threshold range, morphological characteristics and so on. The reference axis serves to locate the correct vertebral level for segmenting tissues and organs such as muscles or muscle groups, adipose tissues in different regions and the liver.
Furthermore, the uncompressed DICOM data are preprocessed by the medical image processing platform and converted into data that can be input directly into the localization neural network of this embodiment. The collected data are processed in advance and the target tissue is marked manually to obtain a gold-standard database, which is randomly divided into 5 subsets, 4 used as training sets and 1 as a test set; the deep learning system consisting of the localization network and the segmentation network is trained with 5-fold cross-validation so as to segment the body composition. In this way the muscles or muscle groups, adipose tissues in different regions, the liver and other tissues and organs are segmented at the levels of a plurality of vertebral bodies and limb layers, the segmentation accuracy is analysed, and the spatial overlap between manual and automatic segmentation is measured.
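The evaluation protocol just described (random division into 5 subsets, 4 for training and 1 for testing, plus a spatial-overlap score between manual and automatic masks) might look as follows; Dice is assumed as the overlap measure, which the text does not name explicitly:

```python
import random
import numpy as np

def five_fold_splits(case_ids, seed=0):
    """Randomly partition case IDs into 5 subsets; each fold trains on 4
    subsets and tests on the remaining one (5-fold cross-validation)."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)
    folds = [ids[i::5] for i in range(5)]
    for k in range(5):
        test = folds[k]
        train = [c for j, f in enumerate(folds) if j != k for c in f]
        yield train, test

def dice_overlap(mask_a, mask_b):
    """Spatial overlap between a manual and an automatic binary mask
    (Dice coefficient, an assumed choice of metric)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())
```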
S300, positioning and identifying the image sequence of the target tissue organ in the MRI sagittal position to obtain a positioning image of the sagittal position of the target tissue organ.
The target body compositions comprise the muscles or muscle groups at designated positions, adipose tissues in different regions, the liver and other tissues and organs. In this embodiment the localization and identification are realised by an image localization model constructed from a neural network: the body composition image sequence is input into a pre-trained image localization model. The pre-trained localization network adopts convolution, pooling, full connection and encoding-decoding, with the specific network structure determined from experiments and the characteristics of the medical image sequences.
In this embodiment, the image localization model includes an input layer, a first convolution layer, a first maximum value pooling layer, a second convolution layer, a second maximum value pooling layer, a third convolution layer, a third maximum value pooling layer, a fourth convolution layer, a fourth maximum value pooling layer, a full connection layer, and an output layer, which are connected in sequence. As shown in fig. 2, the number of first convolution layers in the image localization model is 2, the number of second convolution layers is 2, the number of third convolution layers is 2, the number of fourth convolution layers is 1, and the number of fully-connected layers is 2.
The specific parameters of each layer in the image localization model are as follows:
The first layer is the input layer, whose input is a sequence of single-channel two-dimensional MRI slices.
The second layer is convolutional layer Conv1 with convolution kernel 3 x 3, number of input channels 1, number of output channels 6, and shift step s of 1.
The third layer is convolutional layer Conv2, with convolution kernel 3 × 3, number of input channels 6, number of output channels 6, shift step s of 1, followed by ReLU function active layer.
The fourth layer is the max pooling layer MaxP3, using a filter of 2 x 2, with a moving step s of 2.
The fifth layer is convolutional layer Conv4, with convolution kernel 3 × 3, number of input channels 6, number of output channels 16, and shift step s of 1.
The sixth layer is convolutional layer Conv5, with convolution kernel 3 x 3, number of input channels 16, number of output channels 16, and shift step s of 1, followed by a ReLU activation layer.
The seventh layer is a maximum pooling layer MaxP6, using a filter of 2 x 2, with a moving step s of 2.
The eighth layer is convolutional layer Conv7 with convolution kernel 3 x 3, number of input channels 16, number of output channels 16, and shift step s of 1.
The ninth layer is convolutional layer Conv8, with convolution kernel 3 x 3, number of input channels 16, number of output channels 32, and shift step s of 1, followed by a ReLU activation layer.
The tenth layer is the max pooling layer MaxP9, with 2 x 2 filters and a moving step s of 2.
The eleventh layer is convolutional layer Conv10, with convolution kernel 3 x 3, number of input channels 32, number of output channels 64, and shift step s of 1, followed by a ReLU activation layer.
The twelfth layer is a maximum pooling layer MaxP11, using a filter of 2 x 2, with a moving step s of 2.
The thirteenth layer is a full connection layer, the number of input channels is 64, and the number of output channels is 240.
The fourteenth layer is a fully connected layer, the number of input channels is 240, and the number of output channels is 84.
And the fifteenth layer is an output layer and adopts a sigmoid activation function to carry out positioning output.
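As a cross-check of the fifteen layers above, the shape of a feature map can be traced through the network. A minimal sketch in plain Python, assuming same-padding 3 x 3 convolutions and a 16 x 16 input slice (the patent states neither the padding nor the input size), shows that the four 2 x 2 poolings reduce the map to 1 x 1 with 64 channels, matching the 64 input channels of the first fully connected layer:

```python
def conv2d(shape, out_ch, k=3, s=1, pad=1):
    """Output shape (channels, height, width) of a conv layer."""
    c, h, w = shape
    return (out_ch, (h + 2 * pad - k) // s + 1, (w + 2 * pad - k) // s + 1)

def maxpool(shape, k=2, s=2):
    """Output shape of a 2 x 2 max pooling with moving step 2."""
    c, h, w = shape
    return (c, h // s, w // s)

x = (1, 16, 16)       # single-channel MRI slice (illustrative size)
x = conv2d(x, 6)      # Conv1: 1 -> 6
x = conv2d(x, 6)      # Conv2: 6 -> 6, + ReLU
x = maxpool(x)        # MaxP3
x = conv2d(x, 16)     # Conv4: 6 -> 16
x = conv2d(x, 16)     # Conv5: 16 -> 16, + ReLU
x = maxpool(x)        # MaxP6
x = conv2d(x, 16)     # Conv7: 16 -> 16
x = conv2d(x, 32)     # Conv8: 16 -> 32, + ReLU
x = maxpool(x)        # MaxP9
x = conv2d(x, 64)     # Conv10: 32 -> 64, + ReLU
x = maxpool(x)        # MaxP11: feature map is now 64 x 1 x 1
fc_in = x[0] * x[1] * x[2]
fc_layers = [(fc_in, 240), (240, 84)]   # FC13, FC14; sigmoid output follows
```

With other input sizes the flattened feature count would differ, so the 64-channel fully connected input implicitly fixes the spatial size reaching MaxP11 at 2 x 2.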
The obtained body composition positioning image is a two-dimensional medical image, i.e., an MRI slice: the neural network in the image positioning model determines which MRI slice to use for subsequent analysis and processing.
S400, segmenting the sagittal positioning image of the target tissue organ to obtain an MRI image segmentation result of the target tissue organ, and marking based on the MRI image segmentation result to obtain various body composition parameters of the target tissue organ.
The image segmentation model can be built with an improved U-net neural network or an improved fully convolutional network (FCN).
In this embodiment, the image segmentation model is built with an improved U-net neural network and comprises an input layer, a forward segmentation sub-network, a convolution residual module, a backward segmentation sub-network, a convolutional layer, and a softmax layer connected in sequence. The forward segmentation sub-network comprises several convolution-residual-module/pooling-layer pairs; the backward segmentation sub-network comprises several convolution-residual-module/unpooling-layer pairs. A convolution-residual-module/pooling-layer pair is a convolution residual module followed by a pooling layer, and a convolution-residual-module/unpooling-layer pair is an unpooling layer followed by a convolution residual module. As shown in fig. 3, the number of convolution-residual-module/pooling-layer pairs is 4, and the number of convolution-residual-module/unpooling-layer pairs is 4.
Specifically, the layers of the image segmentation model are, in sequence:
the first layer is an input layer, and the input is a single-channel MRI two-dimensional slice image, namely a positioning image of a target tissue organ output by a positioning neural network.
The second layer is the convolution residual block ResBlock1, with 1 input channel and 16 output channels, followed by a PReLU activation layer (not shown). ResBlock1 comprises a convolution path with a 5 x 5 convolution kernel and a shift step s of 1, and a residual path whose output is added element-wise to that of the convolution path.

The third layer is the pooling layer MaxP2, using a 2 x 2 filter with a moving step s of 2.

The fourth layer is the convolution residual block ResBlock3, with 16 input channels and 32 output channels, followed by a PReLU activation layer (not shown). ResBlock3 comprises a convolution path with a 5 x 5 convolution kernel and a shift step s of 1, and a residual path whose output is added element-wise to that of the convolution path.

The fifth layer is the pooling layer MaxP4, using a 2 x 2 filter with a moving step s of 2.

The sixth layer is the convolution residual block ResBlock5, with 32 input channels and 64 output channels, followed by a PReLU activation layer (not shown). ResBlock5 comprises a convolution path with a 5 x 5 convolution kernel and a shift step s of 1, using a 2-layer convolutional neural network, and a residual path whose output is added element-wise to that of the convolution path.

The seventh layer is the pooling layer MaxP6, using a 2 x 2 filter with a moving step s of 2.

The eighth layer is the convolution residual block ResBlock7, with 64 input channels and 128 output channels, followed by a PReLU activation layer (not shown). ResBlock7 comprises a convolution path with a 5 x 5 convolution kernel and a shift step s of 1, using a 3-layer convolutional neural network, and a residual path whose output is added element-wise to that of the convolution path.

The ninth layer is the pooling layer MaxP8, using a 2 x 2 filter with a moving step s of 2.

The tenth layer is the convolution residual block ResBlock9, with 128 input channels and 256 output channels, followed by a PReLU activation layer (not shown). ResBlock9 comprises a convolution path with a 5 x 5 convolution kernel and a shift step s of 1, using a 3-layer convolutional neural network, and a residual path whose output is added element-wise to that of the convolution path.

The eleventh layer is the unpooling layer MaxP10, using a 2 x 2 filter with a moving step s of 2.

The twelfth layer is the convolution residual block ResBlock11, with 256 input channels and 256 output channels, followed by a PReLU activation layer (not shown). ResBlock11 comprises a convolution path with a 5 x 5 convolution kernel and a shift step s of 1, using a 3-layer convolutional neural network, and a residual path whose output is added element-wise to that of the convolution path.

The thirteenth layer is the unpooling layer MaxP12, using a 2 x 2 filter with a moving step s of 2.

The fourteenth layer is the convolution residual block ResBlock13, with 256 input channels and 128 output channels, followed by a PReLU activation layer (not shown). ResBlock13 comprises a convolution path with a 5 x 5 convolution kernel and a shift step s of 1, using a 2-layer convolutional neural network, and a residual path whose output is added element-wise to that of the convolution path.

The fifteenth layer is the unpooling layer MaxP14, using a 2 x 2 filter with a moving step s of 2.

The sixteenth layer is the convolution residual block ResBlock15, with 128 input channels and 64 output channels, followed by a PReLU activation layer (not shown). ResBlock15 comprises a convolution path with a 5 x 5 convolution kernel and a shift step s of 1, using a 1-layer convolutional neural network, and a residual path whose output is added element-wise to that of the convolution path.

The seventeenth layer is the unpooling layer MaxP16, using a 2 x 2 filter with a moving step s of 2.

The eighteenth layer is the convolution residual block ResBlock17, with 64 input channels and 32 output channels, followed by a PReLU activation layer (not shown). ResBlock17 comprises a convolution path with a 5 x 5 convolution kernel and a shift step s of 1, using a 1-layer convolutional neural network, and a residual path whose output is added element-wise to that of the convolution path.
The nineteenth layer is convolutional layer Conv18, the number of input channels is 32, the number of output channels is 7, the size of the convolutional kernel is 5 × 5, and the shift step s is 1.
The twentieth layer is a SoftMax layer, which normalizes the channel outputs of convolutional layer Conv18 to produce the segmentation result described above.
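The encoder-decoder shape flow of the twenty layers above can be traced the same way. A sketch in plain Python, assuming same-padding 5 x 5 convolutions and a 256 x 256 input (both assumptions; the patent gives neither), confirms that four pooling and four unpooling stages restore the input resolution and that Conv18 emits a 7-channel class map for the SoftMax layer:

```python
def res_block(shape, out_ch):
    """A 5x5 same-padding convolution path keeps H and W; the residual
    path is added element-wise, so only the channel count changes."""
    c, h, w = shape
    return (out_ch, h, w)

def pool(shape):
    c, h, w = shape
    return (c, h // 2, w // 2)     # 2 x 2 max pooling, step 2

def unpool(shape):
    c, h, w = shape
    return (c, h * 2, w * 2)       # 2 x 2 unpooling, step 2

x = (1, 256, 256)                  # single-channel positioning slice
for ch in (16, 32, 64, 128):       # ResBlock1/3/5/7, each followed by pooling
    x = pool(res_block(x, ch))
x = res_block(x, 256)              # ResBlock9 (bottleneck)
for ch in (256, 128, 64, 32):      # unpooling, then ResBlock11/13/15/17
    x = res_block(unpool(x), ch)
x = (7, x[1], x[2])                # Conv18: 32 -> 7 channel class map
# the SoftMax layer then normalises the 7 channels at each pixel
```

The symmetric pool/unpool counts are what let the network emit a per-pixel label map at the same resolution as the input positioning image.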
The obtained body composition positioning image is input into the image segmentation model to obtain an accurate segmentation result. The segmentation result comprises a plurality of target parameters, chiefly the cross-sectional areas of muscles or muscle groups and of adipose tissue in different regions, the visceral-to-subcutaneous fat area ratio (V/S), the area index, the proton density fat fraction of the liver and muscles, the muscle-to-subcutaneous-fat signal ratio, the muscle fat content, and the like. Preferably, to capture changes in each parameter accurately, the rate of change and the dynamic time curve of the parameter may also be included.
The segmentation result obtained by the image segmentation model can replace manual marking of body composition. Because slice positioning and body composition segmentation are performed by the combination of two neural networks, the positioning neural network and the image segmentation neural network, the efficiency and accuracy of intelligent identification and segmentation of body composition images are improved.
The ratio of the average signal intensity of the target muscle to the average signal intensity of subcutaneous fat in the region of interest is evaluated to obtain the muscle-to-subcutaneous-fat signal ratio; finally, the ratio of the fat-replaced high-signal area within the target muscle's region-of-interest contour to the total area of that region of interest is calculated to obtain the muscle fat content (FC). These target parameters are used to evaluate body composition accurately.
For quantitative evaluation of body composition, the cross-sectional area of the target region is corrected by height to obtain, for tissues and organs such as muscles, muscle groups, and adipose tissue in different regions, the cross-sectional area, the area index (AI), and the visceral-to-subcutaneous fat area ratio (V/S). The cross-sectional area is the area of the designated muscle or muscle group and of adipose tissue in different regions obtained by automatic software segmentation; the area index is then calculated from the height as area index = area / height².
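The two formulas above can be written directly; a small sketch, where the units (cm² for areas, m for height) and the example values are assumptions for illustration:

```python
def area_index(cross_section_area, height):
    """Area index (AI) = cross-sectional area / height^2."""
    return cross_section_area / height ** 2

def vs_ratio(vat_area, sat_area):
    """Visceral-to-subcutaneous fat area ratio (V/S)."""
    return vat_area / sat_area

ai = area_index(45.0, 1.70)   # e.g. 45 cm^2 muscle area, 1.70 m height
vs = vs_ratio(120.0, 160.0)   # e.g. VAT 120 cm^2, SAT 160 cm^2
```

Height correction makes areas comparable across subjects of different stature, which is why AI rather than raw cross-sectional area is used when building population reference values.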
For quantitative evaluation of muscle tissue in body composition, the ratio of the average signal intensity of the target muscle to the average signal intensity of subcutaneous fat in the region of interest is evaluated to obtain the muscle-to-subcutaneous-fat signal ratio; finally, the ratio of the fat-replaced high-signal area within the target muscle's region-of-interest contour to the total area of that region of interest is calculated to obtain the muscle fat content (FC).
The body composition images applicable to this embodiment may include MRI images of a designated number of different adipose tissues, muscles and muscle groups, and the liver at designated positions. Taking abdominal MRI and lumbar MRI as examples, these may include the following: skeletal muscle; the bilateral psoas major (left and right); the bilateral posterior spinal muscle groups (left and right); the bilateral erector spinae (left and right); the bilateral multifidus (left and right); the bilateral quadratus lumborum (left and right); the bilateral paraspinal muscle group areas (left and right); the bilateral rectus abdominis (left and right); the bilateral abdominal side-wall muscle groups (left and right); the liver; visceral adipose tissue (VAT); abdominal subcutaneous adipose tissue (SAT); intermuscular adipose tissue (IMAT); limb subcutaneous adipose tissue (SAT); and subfascial adipose tissue (SFAT).
Among them, VAT is divided into intraperitoneal adipose tissue (IPAT) and retroperitoneal adipose tissue (RPAT), and SAT is divided into deep subcutaneous adipose tissue (DSAT) and superficial subcutaneous adipose tissue (SSAT).
After the continuous region is segmented, the ratio of the average signal intensity of the target muscle to the average signal intensity of subcutaneous fat in the region of interest is evaluated to obtain the muscle-to-subcutaneous-fat signal ratio; finally, the ratio of the fat-replaced high-signal area within the target muscle's region-of-interest contour to the total area of that region of interest is calculated to obtain the muscle fat content (FC).
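The computation above can be sketched on two intensity arrays; the fat threshold marking fat-replaced pixels is an assumption, as the patent does not specify how high-signal pixels are identified:

```python
import numpy as np

def muscle_fat_metrics(muscle_roi, fat_roi, fat_threshold):
    """Muscle-to-subcutaneous-fat signal ratio, and muscle fat content
    (FC) = fat-replaced high-signal area / total ROI area.
    muscle_roi and fat_roi are 2-D intensity arrays; fat_threshold
    (assumed, not from the patent) marks fat-replaced pixels."""
    signal_ratio = muscle_roi.mean() / fat_roi.mean()
    fat_area = np.count_nonzero(muscle_roi >= fat_threshold)
    fc = fat_area / muscle_roi.size
    return signal_ratio, fc

muscle = np.array([[100.0, 300.0], [100.0, 100.0]])  # toy 2x2 muscle ROI
fat = np.array([[200.0, 200.0]])                     # toy subcutaneous-fat ROI
ratio, fc = muscle_fat_metrics(muscle, fat, fat_threshold=250.0)
# mean muscle signal 150 vs fat 200; one of four pixels exceeds threshold
```

In practice the ROI masks would come from the segmentation model's output channels rather than hand-built arrays.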
In conclusion, the method can effectively establish a related database based on deep-learning body composition marking and feature analysis, obtain cutoff values of body composition for normal healthy people, and provide a data basis for quantitative evaluation of body composition parameters. Further, an imaging and clinical evaluation system can be established; for serious diseases such as cardiovascular disease, metabolic disorders, and tumors, the system formed by the method can be used for monitoring and evaluation, so that a patient's prognosis can be estimated more accurately and limited manual intervention applied, improving patient survival and quality of life. Compared with manual or semi-manual operation, the method achieves consistent marking standards, high speed, no fatigue, and easy data collection, so that a large amount of research data can be obtained quickly, yielding population body composition parameters and ultimately population standard parameters.
For the MRI-based human body composition marking method, the present embodiment further provides an MRI-based human body composition marking system, as shown in fig. 4, the system includes the following parts:
the preprocessing module is used for preprocessing the acquired human body component MRI images at the designated positions to obtain preprocessed image sequences of muscles or muscle groups, adipose tissues in different regions, livers and the like;
the positioning identification module is used for positioning and identifying the human body composition image sequence in the MRI sagittal position to obtain a sagittal position positioning image of the target tissue; wherein, the target tissue comprises muscles or muscle groups at specified positions, fat tissues in different areas, liver and other tissue organs;
the image segmentation module is used for performing segmentation processing on the sagittal positioning image of the target tissue at the axial level to obtain the segmentation result of the MRI images of muscles or muscle groups at the specified position, adipose tissues in different regions, liver and the like; the segmentation result includes a plurality of body composition parameters.
The MRI-based human body composition marking system provided by this embodiment of the invention obtains, by positioning, identifying, and segmenting the processed body composition images, parameters including the cross-sectional areas of different body components, the area index, the visceral-to-subcutaneous fat area ratio (V/S), the liver and muscle proton density fat fraction, the muscle-to-subcutaneous-fat signal ratio, and the muscle fat content. It can quickly and accurately measure tissue segmentation parameters from MRI medical images, thereby improving the processing efficiency of medical images of tissues and organs such as muscles or muscle groups, adipose tissue in different regions, and the liver, and reducing labor cost.
In some embodiments, the preprocessing module is configured to resample an MRI image of human tissue to obtain a resampled MRI image, where the MRI image comprises images at the levels of several vertebral bodies and at different target layers of the limbs; to extract the region of interest from the resampled MRI image to obtain region images of muscles or muscle groups, adipose tissue in different regions, the liver, and the like, where the target region image comprises the target region to be segmented; and to normalize the human tissue region image to obtain a human tissue image sequence.
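The normalization step above can be sketched as follows; min-max scaling to [0, 1] is an assumption, since the text only says "normalization processing" without naming a scheme:

```python
import numpy as np

def normalize_slice(img):
    """Min-max normalise one MRI slice to [0, 1] so it can be fed to
    the positioning neural network. The scaling scheme is assumed."""
    img = np.asarray(img, dtype=np.float32)
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:                       # flat image: avoid divide-by-zero
        return np.zeros_like(img)
    return (img - lo) / (hi - lo)

slice_ = normalize_slice([[0, 5], [10, 10]])   # toy 2x2 intensity patch
```

Applying the same normalization to every slice keeps input statistics consistent between training and inference, regardless of scanner-specific intensity ranges.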
In some embodiments, the positioning and identifying module is further configured to perform positioning and identifying on the MRI sagittal position of the human tissue image sequence based on a pre-trained image positioning model to obtain a human tissue positioning image; the pre-trained image positioning model comprises an input layer, a first convolution layer, a first maximum value pooling layer, a second convolution layer, a second maximum value pooling layer, a third convolution layer, a third maximum value pooling layer, a fourth convolution layer, a fourth maximum value pooling layer, a full-connection layer and an output layer which are connected in sequence. The number of the first convolution layers in the image positioning model is 2, the number of the second convolution layers is 2, the number of the third convolution layers is 2, the number of the fourth convolution layers is 1, and the number of the full-connection layers is 2.
In some embodiments, the image segmentation module is further configured to input the sagittal positioning image of the target tissue into a pre-trained image segmentation model to obtain the segmentation result of MRI images of muscles or muscle groups at specified positions, adipose tissue in different regions, the liver, and the like. The pre-trained image segmentation model comprises an input layer, a forward segmentation sub-network, a backward segmentation sub-network, a convolutional layer, and a softmax layer connected in sequence; the forward segmentation sub-network comprises a fourth preset number of convolution-residual-module/pooling-layer pairs, and the backward segmentation sub-network comprises a fifth preset number of convolution-residual-module/unpooling-layer pairs.
In some embodiments, the parameters include cross-sectional area of various body components, area index, visceral fat to subcutaneous fat area ratio (V/S), liver and muscle proton density fat fraction, muscle subcutaneous fat signal ratio, muscle fat content, and the like.
In some embodiments, the ratio of the target muscle's average signal intensity to the subcutaneous fat average signal intensity in the region of interest is the muscle-to-subcutaneous-fat signal ratio, and the ratio of the fat-replaced high-signal area within the target muscle's region-of-interest contour to the total area of that region of interest is the muscle fat content (FC).
The system provided by the embodiment of the present invention has the same implementation principle and technical effect as the foregoing method embodiment, and for the sake of brief description, no mention is made in the system embodiment, and reference may be made to the corresponding contents in the foregoing method embodiment.
The embodiment of the invention provides equipment, which particularly comprises a processor and a storage device; the storage means has stored thereon a computer program which, when executed by the processor, performs the method of any of the above described embodiments.
As shown in fig. 5, the apparatus includes: the system comprises a processor, a memory, a bus and a communication interface, wherein the processor, the communication interface and the memory are connected through the bus; the processor is used to execute executable modules, such as computer programs, stored in the memory.
The memory may comprise high-speed random access memory and may also include non-volatile memory, such as at least one disk memory. The communication connection between this system's network element and at least one other network element is implemented through at least one communication interface (wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, and the like.
The bus may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc.
The memory is used to store a program, and the processor executes the program after receiving an execution instruction. The method executed by the system defined by the process flow disclosed in any embodiment of the present invention may be applied to, or implemented by, the processor.
The processor may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit, a network processor, and the like; it may also be a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, which may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the method in combination with its hardware.
The computer program product of the MRI-based human body composition labeling method, system, device and storage medium provided in the embodiments of the present invention includes a computer-readable storage medium storing a nonvolatile program code executable by a processor, and a computer program is stored on the computer-readable storage medium, and when the computer program is executed by the processor, the method described in the foregoing method embodiments is executed.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing embodiments, and is not described herein again.
The computer program product of the readable storage medium provided in the embodiment of the present invention includes a computer readable storage medium storing a program code, where instructions included in the program code may be used to execute the method described in the foregoing method embodiment, and specific implementation may refer to the method embodiment, which is not described herein again.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate its technical solutions rather than limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some technical features within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments and are intended to be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. The human body component marking method based on MRI is characterized by comprising the following steps:
acquiring a Magnetic Resonance Imaging (MRI) image at a specified position;
preprocessing the MRI image to obtain a preprocessed target tissue organ image sequence;
positioning and identifying the image sequence of the target tissue organ in the MRI sagittal position to obtain a positioning image of the sagittal position of the target tissue organ;
and carrying out segmentation processing on the sagittal positioning image of the target tissue organ to obtain an MRI image segmentation result of the target tissue organ, and marking based on the MRI image segmentation result to further obtain various body composition parameters of the target tissue organ.
2. The MRI-based body composition labeling method according to claim 1, wherein the preprocessing the MRI image to obtain the preprocessed target tissue organ image sequence comprises:
resampling the MRI image to obtain a resampled MRI image;
extracting a target region of the resampled MRI image to obtain a target tissue organ region image;
and carrying out normalization processing on the target tissue organ region image to obtain a target tissue organ image sequence.
3. The MRI-based human body composition marking method according to claim 1, wherein the step of performing positioning identification on the target tissue organ image sequence in the MRI sagittal region to obtain a target tissue organ sagittal region positioning image comprises the steps of:
constructing and training an image positioning model, wherein the image positioning model is used for carrying out MRI sagittal position positioning identification on the target tissue organ image sequence, and comprises an input layer, a first convolution layer, a first maximum value pooling layer, a second convolution layer, a second maximum value pooling layer, a third convolution layer, a third maximum value pooling layer, a fourth convolution layer, a fourth maximum value pooling layer, a full-connection layer and an output layer which are sequentially connected;
and based on the trained image positioning model, positioning and identifying the image sequence of the target tissue organ in the MRI sagittal position to obtain a positioning image of the sagittal position of the target tissue organ.
4. The MRI-based human composition labeling method of claim 3, wherein the number of said first convolutional layers is 2, the number of said second convolutional layers is 2, the number of said third convolutional layers is 2, the number of said fourth convolutional layers is 1, and the number of fully-connected layers is 2.
5. The MRI-based body composition labeling method of claim 1, wherein the step of segmenting the target tissue organ sagittal localization image to obtain the MRI image segmentation result of the target tissue organ comprises:
constructing and training an image segmentation model, wherein the image segmentation model is used for segmenting a sagittal positioning image of the target tissue organ, and a neural network is adopted in the image segmentation model;
and inputting the sagittal positioning image of the target tissue organ into the trained image segmentation model, and performing segmentation processing on the sagittal positioning image of the target tissue organ at the axial level to obtain an MRI image segmentation result of the target tissue organ, wherein the MRI image segmentation result comprises a plurality of MRI parameters.
6. The MRI-based human body composition labeling method according to claim 5, wherein the image segmentation model comprises an input layer, a forward segmentation sub-network, a backward segmentation sub-network, a convolutional layer and a softmax layer which are connected in sequence; wherein the forward segmentation sub-network comprises a convolution-residual-module/pooling-layer pair; the backward segmentation sub-network comprises a convolution-residual-module/unpooling-layer pair.
7. The MRI-based body composition labeling method of claim 1, wherein the plurality of composition parameters of the target tissue organ include muscles or muscle groups at designated positions, areas of adipose tissue in different regions, area index, the ratio of visceral fat area to subcutaneous fat area, the proton density fat fraction of the liver and muscle, the muscle-to-subcutaneous-fat signal ratio, and muscle fat content.
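Several of the claim-7 parameters follow directly from a labeled segmentation mask and the fat/water signal images. The sketch below is a toy illustration only: the label encoding (1 = muscle, 2 = visceral fat, 3 = subcutaneous fat), the pixel spacing, and the miniature mask are all assumptions, not values from the patent.

```python
# Illustrative computation of a few claim-7 composition parameters:
# region areas (pixel count x pixel area), the visceral-to-subcutaneous
# fat area ratio, and the proton density fat fraction (PDFF) from
# fat/water separated signal values.

MUSCLE, VISCERAL_FAT, SUBCUT_FAT = 1, 2, 3  # assumed label encoding

def region_areas_cm2(mask, pixel_spacing_mm=(1.5, 1.5)):
    """Area per labeled region: pixel count x pixel area, in cm^2."""
    pixel_area_cm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1] / 100.0
    counts = {}
    for row in mask:
        for label in row:
            counts[label] = counts.get(label, 0) + 1
    return {label: n * pixel_area_cm2 for label, n in counts.items()}

def proton_density_fat_fraction(fat_signal, water_signal):
    """PDFF: fat signal as a fraction of the total (fat + water) signal."""
    return fat_signal / (fat_signal + water_signal)

mask = [  # toy 3x4 mask; 0 = background
    [0, 1, 1, 2],
    [1, 1, 2, 2],
    [3, 3, 3, 0],
]
areas = region_areas_cm2(mask)
vat_sat_ratio = areas[VISCERAL_FAT] / areas[SUBCUT_FAT]
pdff = proton_density_fat_fraction(fat_signal=30.0, water_signal=70.0)
```

In a real pipeline the mask would come from the claim-5 segmentation model and the pixel spacing from the DICOM header; the formulas themselves are unchanged.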
8. An MRI-based body composition labeling system, the system comprising:
the preprocessing module is used for preprocessing the acquired Magnetic Resonance Imaging (MRI) image at the designated position to obtain a preprocessed target tissue organ image sequence;
the positioning identification module is used for positioning and identifying the target tissue organ image sequence in the MRI sagittal position to obtain a target tissue organ sagittal position positioning image;
and the image segmentation module is used for segmenting the sagittal positioning image of the target tissue organ to obtain an MRI image segmentation result of the target tissue organ, and marking the MRI image segmentation result to obtain a plurality of body composition parameters of the target tissue organ.
9. An MRI-based body composition labeling device, comprising a processor and a memory, the memory storing machine-executable instructions executable by the processor, and the processor executing the machine-executable instructions to implement the MRI-based body composition labeling method of any one of claims 1 to 7.
10. A storage medium for MRI-based body composition labeling, characterized in that the storage medium stores machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the MRI-based body composition labeling method of any one of claims 1 to 7.
CN202111455156.5A 2021-12-01 2021-12-01 Method, system, device and storage medium for marking human body components based on MRI Pending CN114141336A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111455156.5A CN114141336A (en) 2021-12-01 2021-12-01 Method, system, device and storage medium for marking human body components based on MRI

Publications (1)

Publication Number Publication Date
CN114141336A true CN114141336A (en) 2022-03-04

Family

ID=80386735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111455156.5A Pending CN114141336A (en) 2021-12-01 2021-12-01 Method, system, device and storage medium for marking human body components based on MRI

Country Status (1)

Country Link
CN (1) CN114141336A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110310287A (en) * 2018-03-22 2019-10-08 北京连心医疗科技有限公司 It is neural network based to jeopardize the automatic delineation method of organ, equipment and storage medium
CN111008984A (en) * 2019-12-10 2020-04-14 广州柏视医疗科技有限公司 Method and system for automatically drawing contour line of normal organ in medical image
CN111681251A (en) * 2020-05-29 2020-09-18 上海联影智能医疗科技有限公司 Tissue and organ parameter determination method and device and computer equipment
CN113409309A (en) * 2021-07-16 2021-09-17 北京积水潭医院 Muscle CT image delineation method, system, electronic equipment and machine storage medium
CN113538496A (en) * 2020-04-17 2021-10-22 成都连心医疗科技有限责任公司 Automatic brain tissue delineation method, delineation system, computing equipment and storage medium for MRI head image

Similar Documents

Publication Publication Date Title
CN113409309B (en) Muscle CT image sketching method, system, electronic equipment and machine storage medium
CN109528197B (en) Individual prediction method and system for mental diseases based on brain function map
EP2741664B1 (en) Image-based identification of muscle abnormalities
CN110610497B (en) Method for determining content of living pig carcass tissue based on CT image processing
CN107368671A (en) System and method are supported in benign gastritis pathological diagnosis based on big data deep learning
CN1502310A (en) Method and system for measuring disease relevant tissue changes
CN105579847B (en) Diseases analysis device, control method and program
CN103996196A (en) DTI image analytical method based on multiple variables
Belavy et al. Beneficial intervertebral disc and muscle adaptations in high-volume road cyclists
CN114119584A (en) Human body composition CT image marking method, system, electronic device and storage medium
CN107967686A (en) A kind of epilepsy identification device for combining dynamic brain network and long memory network in short-term
CN109472798A (en) Live pig fat content detection model training method and live pig fat content detection method
Balasooriya et al. Intelligent brain hemorrhage diagnosis using artificial neural networks
CN112164073A (en) Image three-dimensional tissue segmentation and determination method based on deep neural network
Alsahaf et al. Estimation of muscle scores of live pigs using a kinect camera
CN114305473A (en) Body composition automatic measuring system based on abdomen CT image and deep learning
CN114141336A (en) Method, system, device and storage medium for marking human body components based on MRI
CN109767448A (en) Parted pattern training method and device
CN111784652B (en) MRI (magnetic resonance imaging) segmentation method based on reinforcement learning multi-scale neural network
CN112784924A (en) Rib fracture CT image classification method based on grouping aggregation deep learning model
US20180192944A1 (en) Methods for monitoring compositional changes in a body
CN115175619A (en) Method and device for analyzing human body components by using medical image
Hernandez et al. Image analysis tool with laws' masks to bone texture
Antony et al. Fat quantification in MRI-defined lumbar muscles
US20180192945A1 (en) Methods for predicting compositional body changes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination