CN114119584A - Human body composition CT image marking method, system, electronic device and storage medium - Google Patents

Human body composition CT image marking method, system, electronic device and storage medium

Info

Publication number
CN114119584A
CN114119584A
Authority
CN
China
Prior art keywords
image
muscle
target
layer
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111457128.7A
Other languages
Chinese (zh)
Inventor
张福生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202111457128.7A priority Critical patent/CN114119584A/en
Publication of CN114119584A publication Critical patent/CN114119584A/en
Priority to CN202211108433.XA priority patent/CN116228624A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/02 Devices for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B 6/03 Computerised tomographs
    • A61B 6/032 Transmission computed tomography [CT]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B 6/5217 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30204 Marker

Abstract

The invention discloses a human body composition CT image marking method, system, electronic device and storage medium. The method comprises the following steps: preprocessing an acquired human body composition CT image to obtain a preprocessed body composition image sequence; positioning and identifying the target tissues and organs in the image sequence on the CT sagittal view to obtain a sagittal positioning image of the target tissues and organs; and segmenting the sagittal positioning image at the axial level to obtain segmentation results of the human body composition CT image at the designated positions. The invention can quickly and accurately measure the body composition segmentation parameters of a three-dimensional medical image, thereby improving the processing efficiency of body composition medical images and reducing labor cost.

Description

Human body composition CT image marking method, system, electronic device and storage medium
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a human body composition CT image marking method, system, electronic device and storage medium.
Background
Human body composition analysis mainly measures the quantity and quality of skeletal muscle, abdominal adipose tissue and limb adipose tissue. As a predictive index of human health, body composition analysis has a clear bearing on the prognosis of many diseases, in particular the prediction of metabolic diseases, cardiovascular diseases, tumors, osteoporosis, trauma and postoperative survival, as well as disease prevention. In addition, body composition analysis helps screen patients at increased risk of adverse disease outcomes and supports treatment, follow-up, monitoring and efficacy evaluation.
Accurate and reliable adipose tissue measurement is essential for studying human metabolic and cardio-cerebrovascular diseases. Studies have shown that deep and superficial subcutaneous adipose tissue are related to insulin resistance and triglyceride saturation both structurally and functionally, yet differ from each other, and thigh muscle and adipose tissue have also been found to be very important in metabolic studies. Commonly used anthropometric indicators such as the body mass index (BMI), when used alone, cannot fully explain the specific link between abdominal (central) obesity and the consequences of metabolic syndrome. Waist circumference measurement reflects central obesity better than BMI, but this approach has its own limitations, including poor inter- and intra-observer reproducibility and an inability to distinguish intra-abdominal from subcutaneous fat.
In addition to fat composition analysis, muscle status is also a focus of attention. With population aging, the number of people with sarcopenia is increasing and is expected to rise year by year, peaking around the 2040s.
The acquisition and recording of body composition parameters has become an essential link in body composition research, and computed tomography (CT) can be used to study human body composition. CT scanning is often used to segment tissues and organs, quantify liver fat content, calculate muscle area and analyze adipose tissue distribution. Adipose tissue and muscle components are easily identified on CT images, and liver fat accumulation, regarded as a key factor in obesity-related dyslipidemia, can be evaluated by measuring liver CT values. In CT image examination, body composition segmentation at the CT axial level is the gold standard for measuring tissue quantity and quality. The main evaluation parameters include abdominal (visceral) adipose tissue (VAT), abdominal subcutaneous adipose tissue (SAT), sub-fascial adipose tissue (SFAT) and inter-muscular adipose tissue (IMAT), together with, for a muscle or muscle group, its area, area index (AI), CT value (radiodensity, RA) and muscle fat infiltration degree (MFI, as a percentage), as well as the liver CT value and the liver relative CT ratio.
Among them, VAT is divided into intraperitoneal adipose tissue (IPAT) and retroperitoneal adipose tissue (RPAT), and SAT is divided into deep subcutaneous adipose tissue (DSAT) and superficial subcutaneous adipose tissue (SSAT).
At present, research on the body composition of large populations in China is at a preliminary stage and has not yet been pursued comprehensively and in depth. Imaging examination is indispensable nationwide and has produced a large amount of image data, so automated labeling and analysis of these data is essential. Body composition marking with CT examination is currently performed mainly by manual or semi-manual methods, which consume a large amount of manpower; the marking results vary greatly between individuals, operators fatigue easily, and the heavy, monotonous work progresses slowly, hindering the measurement effort. Researching and designing a fully automatic segmentation tool reduces manual participation, improves marking efficiency and eliminates the variability of subjective manual measurement.
Disclosure of Invention
The invention aims to provide a human body composition CT image marking method, system, electronic device and storage medium that can quickly and accurately measure the body composition segmentation parameters of a three-dimensional medical image, thereby improving the processing efficiency of body composition medical images and reducing labor cost.
To realize the above aim, the invention adopts the following technical scheme:
a method for labeling a body composition CT image, the method comprising:
preprocessing the acquired human body composition CT image to obtain a preprocessed target tissue and organ image sequence;
positioning and identifying the target tissue and organ image sequence on the CT sagittal view to obtain a sagittal positioning image of the target tissues and organs, wherein the targets comprise muscles or muscle groups at designated positions, the liver, and adipose tissues of different target regions;
and segmenting the sagittal positioning image of the target tissues and organs at the axial level to obtain a segmentation result of the tissue and organ CT image at the designated positions.
Preferably, the step of preprocessing the acquired tissue and organ CT image to obtain an image sequence of the muscles or muscle groups, the liver and the adipose tissues of different target regions at the designated positions includes:
resampling the target tissue and organ CT image to obtain a resampled tissue and organ CT image, wherein the CT image comprises images at a plurality of vertebral body levels and designated limb levels;
extracting a region of interest from the resampled human body composition CT image to obtain a target tissue and organ region image, wherein the target tissue and organ region image comprises the target region to be segmented;
and normalizing the target tissue and organ region image to obtain the image sequence of the muscles or muscle groups, the liver and the adipose tissues of different target regions at the designated positions.
Preferably, the step of positioning and identifying the target tissue and organ image sequence on the CT sagittal view to obtain a sagittal positioning image of the target region includes:
positioning and identifying the image sequence of the target tissue organ in the CT sagittal position based on a pre-trained positioning neural network to obtain a sagittal position positioning image of the target tissue organ;
the pre-trained positioning neural network comprises an input layer, a first specified number of first convolutional layers, a first maximum value pooling layer, a first specified number of second convolutional layers, a second maximum value pooling layer, a second specified number of third convolutional layers, a third maximum value pooling layer, a second specified number of fourth convolutional layers, a fourth maximum value pooling layer, a second specified number of fifth convolutional layers, a fifth maximum value pooling layer, a first specified number of sixth convolutional layers, a sixth maximum value pooling layer, a third specified number of fully-connected layers and an output layer which are connected in sequence.
Preferably, the first specified number is 2; the second specified number is 3; the third specified number is 2.
Preferably, the step of segmenting the sagittal positioning image of the target region at the axial level to obtain a segmentation result of the target region CT image at the designated positions includes:
inputting the sagittal positioning image of the target tissues and organs into a pre-trained segmentation neural network to obtain segmentation results of the CT images of the muscles or muscle groups, the liver and the adipose tissues of different target regions at the designated positions;
the pre-trained segmented neural network comprises an input layer, a forward segmented sub-network, a reverse segmented sub-network, a convolutional layer and a sigmoid layer which are connected in sequence; wherein the forward segmentation subnetwork comprises a fourth preset number of convolution residual modules-pooling layer pairs; the inverse partitioning sub-network includes a fifth preset number of convolution residual module-inverse pooling layer pairs.
Preferably, the segmentation result includes a plurality of target parameters, the target parameters including: abdominal adipose tissue, abdominal subcutaneous adipose tissue, limb subcutaneous adipose tissue, limb sub-fascial adipose tissue, limb inter-muscular adipose tissue, the area and area index of a muscle or muscle group at a target site, the CT value of a muscle or muscle group, the muscle fat infiltration percentage, the liver CT value and the liver relative CT ratio.
Preferably, the muscle fat infiltration degree is characterized by an inter-muscular fat CT value range and a muscle CT value range. Morphological erosion is first performed on the muscle region using a structuring element with a radius of 3 pixels to remove artifacts at the segmentation edges; fat pixels are then extracted by thresholding the eroded muscle region in the range of -190 to -30 Hu. Inter-muscular fat CT value range: -190 to -30 Hu; muscle CT value range: -29 to 150 Hu. The specific method is as follows: after the target muscle or muscle group is segmented as a continuous region, the area of the region is calculated; the scattered adipose tissue contained in the region is then segmented and extracted by thresholding (-190 to -30 Hu), the small area of each fat region is calculated, and these areas are summed to obtain the total scattered adipose tissue area. Finally, the ratio of the total inter-muscular scattered adipose tissue area to the area of the target muscle or muscle group is the muscle fat infiltration degree, expressed as a percentage.
The invention also provides a human body composition CT image marking system, which comprises:
the preprocessing module is used for preprocessing the acquired target tissue and organ CT images to obtain preprocessed image sequences of the muscles or muscle groups, the liver and the adipose tissues of different target regions at the designated positions;
the positioning identification module is used for positioning and identifying the target tissue and organ image sequence on the CT sagittal view to obtain sagittal positioning images of the muscles or muscle groups, the liver and the adipose tissues of different target regions at the designated positions, wherein the target tissues and organs comprise muscles or muscle groups at designated positions, the liver, and adipose tissues of different target regions;
the segmentation processing module is used for segmenting the sagittal positioning image of the target tissues and organs at the axial level to obtain the segmentation result of the CT images of the muscles or muscle groups, the liver and the adipose tissues of different target regions at the designated positions; the segmentation result includes a plurality of CT parameters.
The invention further provides an electronic device comprising a processor and a memory, the memory storing machine-executable instructions that can be executed by the processor; the processor executes the machine-executable instructions to implement the human body composition CT image marking method of any one of the above embodiments.
The present invention also provides a storage medium storing machine executable instructions, which when invoked and executed by a processor, cause the processor to implement the method for labeling a body composition CT image according to any one of the foregoing embodiments.
The method comprises preprocessing an acquired body composition CT image to obtain a preprocessed body composition image sequence, positioning and identifying the body composition image sequence on the CT sagittal view to obtain a sagittal positioning image of the target body composition (comprising muscles or muscle groups at designated positions, the liver and adipose tissues of different regions), and segmenting the sagittal positioning image of the target tissue at the axial level to obtain segmentation results of the CT images of the muscles or muscle groups at the designated positions, the liver and the adipose tissues of different regions, the segmentation results comprising various target parameters. By positioning, identifying and segmenting the processed body composition images, various body composition parameters are obtained, such as the cross-sectional areas, area indexes, average CT values and muscle fat infiltration degrees of different body components, so that the body composition segmentation parameters of a three-dimensional medical image can be measured quickly and accurately, improving the processing efficiency of body composition medical images and reducing labor cost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic flow chart of a method for marking a body composition CT image according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a positioning neural network according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a specific segmented neural network according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a body composition CT image labeling system according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Considering that body composition marking with existing CT examination techniques is performed mainly by manual or semi-manual methods, which consume a large amount of manpower, suffer from large inter-operator differences and easy fatigue, show poor consistency of standards across regions and involve heavy data quantification work that hinders body composition measurement, embodiments of the present invention provide a body composition image marking method, system, electronic device and storage medium that can quickly and accurately measure the body composition segmentation parameters of a three-dimensional medical image, thereby improving the processing efficiency of medical images of muscles or muscle groups, the liver, adipose tissues of different regions and the like, and reducing labor cost.
For convenience of understanding, firstly, a detailed description is given to a method for marking a body composition CT image according to an embodiment of the present invention, referring to a flow chart of the method for marking a body composition CT image shown in fig. 1, which includes the following steps:
step S101, preprocessing the acquired human body component CT image to obtain preprocessed image sequences of muscles or muscle groups, livers, adipose tissues in different regions and the like.
The acquired CT image of the human body composition is a CT image acquired by utilizing an electronic computed tomography image examination technology, and the CT images of muscles or muscle groups, livers, fat tissues in different areas and the like are originally acquired three-dimensional medical images.
In order to ensure the accuracy of the positioning identification and the segmentation processing of the subsequent human body composition CT image, the preprocessing operation can comprise the steps of carrying out data sampling processing on the obtained human body composition CT image and extracting the region containing the region to be processed in the human body composition CT image, so that the accuracy of the subsequent positioning segmentation can be ensured by the human body composition image sequence obtained after preprocessing.
The human body composition image sequence is a three-dimensional medical image sequence, namely the three-dimensional medical image after preprocessing operation. The human body composition image sequence can be a slice sequence in a CT three-dimensional image, and the slice sequence in the CT three-dimensional image can comprise a sequence formed by medical images with various slice intervals, different slice numbers and various CT resolutions.
Step S102, positioning and identifying the body composition image sequence on the CT sagittal view to obtain a sagittal positioning image of the target body composition.
The target body composition includes muscles or muscle groups at designated positions, the liver, adipose tissues of different regions and the like. The positioning identification is performed by inputting the body composition image sequence into a pre-trained positioning neural network. The pre-trained positioning neural network adopts convolution, pooling, full connection and encoding-decoding ideas; the specific network structure is determined according to experiments and the characteristics of the medical image sequences.
The resulting body composition positioning image is a two-dimensional medical image, i.e., a CT slice: the positioning neural network determines the CT slice on which subsequent analysis is performed.
Step S103, segmenting the body composition positioning image to obtain segmentation results for images of muscles or muscle groups, the liver, adipose tissues of different regions and the like.
The body composition positioning image obtained in this way is input to the image segmentation neural network to obtain an accurate segmentation result. The segmentation neural network may be a modified U-net neural network or a modified fully convolutional network (FCN). The segmentation result comprises a plurality of target parameters, the main ones being: abdominal adipose tissue, abdominal subcutaneous adipose tissue, limb subcutaneous adipose tissue, limb sub-fascial adipose tissue, limb inter-muscular adipose tissue, and, for the muscle or muscle group at the designated position, its area, area index and CT value, the muscle fat infiltration percentage, the liver CT value and the liver relative CT ratio. It can be understood that the segmentation result can also be automatically labeled according to the different CT parameters of the body composition image to obtain the parameters of each segmented region.
The segmentation result obtained by the image segmentation neural network can replace manual and semi-manual body composition marking. Because slice positioning and body composition segmentation are performed by combining the positioning neural network with the image segmentation neural network, the efficiency of intelligent identification and the accuracy of segmentation are improved, the need for end-to-end identification of a three-dimensional medical image with a three-dimensional neural network is avoided, and the segmentation accuracy of body composition images is improved.
According to the body composition CT image marking method provided by the embodiment of the invention, positioning identification and segmentation of the processed body composition image yield relevant parameters, including the areas of intra-abdominal adipose tissue, abdominal subcutaneous adipose tissue, limb sub-fascial adipose tissue, limb inter-muscular adipose tissue and muscles or muscle groups at designated positions, together with area indexes, CT values, muscle fat infiltration percentages, liver CT values and the liver relative CT ratio. The body composition segmentation parameters of a three-dimensional medical image can thus be measured quickly and accurately, improving the processing efficiency of medical images of muscles or muscle groups, the liver, adipose tissues of different regions and the like, and reducing labor cost.
In one embodiment, to ensure that the quality of the input to the positioning neural network meets the requirement of positioning segmentation, the acquired CT images of the body composition are first preprocessed to obtain image sequences of muscles or muscle groups, liver, adipose tissues in different regions, and the like. In specific implementation, the following steps 2.1) to 2.3) may be employed:
step 2.1), resampling the human body component CT image to obtain a resampled human body component CT image; the human body composition CT image comprises a plurality of image images of vertebral body level, appointed limb level and limb specific level;
step 2.2), extracting the region of interest of the resampled body composition CT image to obtain a body composition region image; the human body composition region image comprises regions such as muscles or muscle groups to be segmented, livers, adipose tissues of different regions and the like;
and 2.3) carrying out normalization processing on the human body composition region images to obtain image sequences of muscles or muscle groups, livers, adipose tissues of different regions and the like.
For step 2.1), resampling can be performed at a preselected sampling interval to ensure the data sampling quality of the CT images of muscles or muscle groups, the liver, adipose tissues of different regions and the like.
In addition, the CT modes used can be chest CT, abdominal CT, lumbar CT, pelvic CT and limb CT. The vertebral bodies for the images at the plurality of vertebral body levels and designated limb levels can be T1-T12 (thoracic vertebrae), L1-L5 (lumbar vertebrae) and S1 (sacral vertebra), and for the hip, from S1 to 5 cm below the femoral tuberosity; for the limbs, the muscle cross-section at the designated part of the limb is calculated (e.g. one axial level at one half of the thigh, three axial levels at one quarter).
For step 2.2), the region-of-interest extraction may select a fixed slice region in different types of CT images; extracting the slice region ensures that the body composition region image includes the body composition region to be segmented. The region to be segmented must contain the exact segmentation area so that a complete and accurate segmentation result can be obtained in the subsequent segmentation. A minimal code sketch of steps 2.1) to 2.3) follows.
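As an illustration of steps 2.1) to 2.3), the following is a minimal Python sketch of the preprocessing chain (resampling, region-of-interest extraction, normalization). The target spacing, the ROI bounds and the HU normalization window are assumptions chosen for demonstration; the description does not fix these values.

```python
import numpy as np
from scipy import ndimage

def preprocess_ct(volume: np.ndarray, spacing, target_spacing=(1.0, 1.0, 1.0),
                  roi=None, hu_window=(-190.0, 150.0)):
    """Resample -> crop ROI -> normalize, following steps 2.1)-2.3).

    volume:  CT volume in Hounsfield units, shape (z, y, x).
    spacing: original voxel spacing in mm, (z, y, x).
    roi:     optional ((z0, z1), (y0, y1), (x0, x1)) slice bounds
             containing the target region to be segmented.
    """
    # Step 2.1) resample to the preselected sampling interval.
    zoom = [s / t for s, t in zip(spacing, target_spacing)]
    resampled = ndimage.zoom(volume, zoom, order=1)

    # Step 2.2) extract the region of interest (fixed slice region).
    if roi is not None:
        (z0, z1), (y0, y1), (x0, x1) = roi
        resampled = resampled[z0:z1, y0:y1, x0:x1]

    # Step 2.3) normalize intensities to [0, 1]; the HU window spanning
    # fat (-190 Hu) through muscle (150 Hu) is an assumption.
    lo, hi = hu_window
    clipped = np.clip(resampled, lo, hi)
    return (clipped - lo) / (hi - lo)
```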
In addition, this embodiment provides another specific example of preprocessing a body composition CT image, which first processes the acquired body composition image to determine a reference axis. Taking a CT image as an example, starting from normalized DICOM images, the spine is detected and segmented and the vertebral bodies are segmented into independent units; with the craniocaudal longitudinal axis as the direction, a scaled reference axis can be formed based on threshold ranges, morphological characteristics and the like. The reference axis is used to position to the correct vertebral body level in preparation for the segmentation of muscles or muscle groups, the liver, adipose tissues of different regions and the like.
Furthermore, the uncompressed DICOM data is preprocessed by the medical image processing platform and converted into data that can be directly input to the positioning neural network of this embodiment. The collected data is processed in advance and the target tissues are manually marked to build a standard database, which is randomly divided into 5 subsets: 4 subsets are used as training sets and 1 subset as a test set, and the deep learning system consisting of the positioning neural network and the segmentation neural network is trained with 5-fold cross-validation to segment the body composition. In this way, muscles or muscle groups, the liver, adipose tissues of different regions and the like are segmented at multiple vertebral levels, the segmentation precision is analyzed, and the spatial overlap between manual and automatic segmentation is measured.
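The 5-subset split and 5-fold cross-validation described above can be sketched as follows; the case identifiers and the commented-out training and evaluation hooks are hypothetical placeholders.

```python
from sklearn.model_selection import KFold
import numpy as np

# Hypothetical list of case identifiers in the manually marked standard database.
cases = np.array([f"case_{i:04d}" for i in range(500)])

# Randomly divide into 5 subsets; each fold uses 4 subsets for training
# and 1 for testing, as in the 5-fold cross-validation described above.
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kfold.split(cases)):
    train_cases, test_cases = cases[train_idx], cases[test_idx]
    # train_positioning_and_segmentation(train_cases)  # placeholder for training
    # evaluate_overlap(test_cases)                     # e.g. overlap vs. manual marking
    print(f"fold {fold}: {len(train_cases)} train / {len(test_cases)} test")
```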
Further, after the human body component image sequence is obtained, the human body component image sequence can be input into a pre-trained positioning neural network, and the human body component image sequence is positioned and identified in the CT sagittal position based on the pre-trained positioning neural network, so that a target tissue positioning image is obtained. The pre-trained positioning neural network comprises an input layer, a first specified number of first convolutional layers, a first maximum value pooling layer, a first specified number of second convolutional layers, a second maximum value pooling layer, a second specified number of third convolutional layers, a third maximum value pooling layer, a second specified number of fourth convolutional layers, a fourth maximum value pooling layer, a second specified number of fifth convolutional layers, a fifth maximum value pooling layer, a first specified number of sixth convolutional layers, a sixth maximum value pooling layer, a third specified number of fully-connected layers and an output layer which are connected in sequence. Fig. 2 shows a specific structure of a positioning neural network, and for the positioning neural network shown in fig. 2, the first specified number is 2, the second specified number is 3, and the third specified number is 2.
Further, the specific parameters of each layer in the above-mentioned positioning neural network are as follows:
the first layer is an input layer, and the input is a slice sequence in a single-channel CT three-dimensional image.
The second layer is convolutional layer Conv1 with convolution kernel 3 x 3, number of input channels 1, number of output channels 6, and shift step s of 1.
The third layer is convolutional layer Conv2, with convolution kernel 3 × 3, number of input channels 6, number of output channels 6, shift step s of 1, followed by ReLU function active layer.
The fourth layer is the max pooling layer MaxP3, using a filter of 2 x 2, with a moving step s of 2.
The fifth layer is convolutional layer Conv4, with convolution kernel 3 × 3, number of input channels 6, number of output channels 16, and shift step s of 1.
The sixth layer is convolutional layer Conv5 with convolution kernel 3 x 3, number of input channels 16, number of output channels 16, and shift step s of 1, followed by the ReLU function active layer.
The seventh layer is a maximum pooling layer MaxP6, using a filter of 2 x 2, with a moving step s of 2.
The eighth layer is convolutional layer Conv7 with convolution kernel 3 x 3, number of input channels 16, number of output channels 32, and shift step s of 1.
The ninth layer is convolutional layer Conv8, with a convolution kernel of 3 × 3, a number of input channels of 32, a number of output channels of 32, and a shift step s of 1.
The tenth layer is convolutional layer Conv9, with a convolutional kernel of 3 × 3, a number of input channels of 32, a number of output channels of 32, a shift step s of 1, followed by the ReLU function active layer.
The eleventh layer is max pooling layer MaxP10, using a filter of 2 x 2, with a moving step s of 2.
The twelfth layer is convolutional layer Conv11, with a convolution kernel of 3 × 3, a number of input channels of 32, a number of output channels of 48, and a shift step s of 1.
The thirteenth layer is convolutional layer Conv12, with a convolution kernel of 3 × 3, a number of input channels of 48, a number of output channels of 48, and a shift step s of 1.
The fourteenth layer is convolutional layer Conv13, with a convolution kernel of 3 × 3, a number of input channels of 48, a number of output channels of 48, a shift step s of 1, followed by the ReLU function active layer.
The fifteenth layer is a maximum pooling layer MaxP14, using a filter of 2 x 2, with a moving step s of 2.
The sixteenth layer is convolutional layer Conv15 with convolution kernel 3 x 3, number of input channels 48, number of output channels 64, and shift step s of 1.
The seventeenth layer is a convolutional layer Conv16, which has a convolution kernel of 3 × 3, a number of input channels of 64, a number of output channels of 64, and a shift step s of 1.
The eighteenth layer is convolutional layer Conv17 with convolution kernel 3 x 3, number of input channels 64, number of output channels 64, and shift step s of 1, followed by the ReLU function active layer.
The nineteenth layer is a maximum pooling layer MaxP18, using a filter of 2 x 2, and a moving step s of 2.
The twentieth layer is convolutional layer Conv19, with a convolution kernel of 3 × 3, a number of input channels of 64, a number of output channels of 120, and a shift step s of 1.
The twenty-first layer is convolutional layer Conv20 with a convolutional kernel of 3 x 3, a number of input channels of 120, a number of output channels of 120, and a shift step s of 1, followed by the ReLU function active layer.
The twenty-second layer is maximum pooling layer MaxP21, using a filter of 2 x 2, with a moving step s of 2.
The twenty-third layer is a full connection layer, the number of input channels is 120, and the number of output channels is 256.
The twenty-fourth layer is a fully connected layer, the number of input channels is 256, and the number of output channels is 84.
And the twenty-fifth layer is an output layer, and a sigmoid activation function is adopted for positioning output.
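For illustration, the following PyTorch sketch transcribes the layer specification above. A padding of 1 on each 3 x 3 convolution and a 64 x 64 input slice are assumptions (the description does not state them), chosen so the feature map reaches 1 x 1 with 120 channels before the fully connected layers.

```python
import torch
import torch.nn as nn

def conv(cin, cout, relu=False):
    # 3x3 convolution, stride 1; padding=1 is an assumption so the spatial
    # size is preserved between pooling layers.
    layers = [nn.Conv2d(cin, cout, kernel_size=3, stride=1, padding=1)]
    if relu:
        layers.append(nn.ReLU(inplace=True))
    return layers

positioning_net = nn.Sequential(
    # Conv1-Conv2 + MaxP3
    *conv(1, 6), *conv(6, 6, relu=True), nn.MaxPool2d(2, 2),
    # Conv4-Conv5 + MaxP6
    *conv(6, 16), *conv(16, 16, relu=True), nn.MaxPool2d(2, 2),
    # Conv7-Conv9 + MaxP10
    *conv(16, 32), *conv(32, 32), *conv(32, 32, relu=True), nn.MaxPool2d(2, 2),
    # Conv11-Conv13 + MaxP14
    *conv(32, 48), *conv(48, 48), *conv(48, 48, relu=True), nn.MaxPool2d(2, 2),
    # Conv15-Conv17 + MaxP18
    *conv(48, 64), *conv(64, 64), *conv(64, 64, relu=True), nn.MaxPool2d(2, 2),
    # Conv19-Conv20 + MaxP21
    *conv(64, 120), *conv(120, 120, relu=True), nn.MaxPool2d(2, 2),
    nn.Flatten(),                  # 120 channels at 1x1 for a 64x64 input
    nn.Linear(120, 256),           # fully connected layers
    nn.Linear(256, 84),
    nn.Sigmoid(),                  # sigmoid activation for the positioning output
)

x = torch.randn(1, 1, 64, 64)      # one single-channel slice (assumed size)
print(positioning_net(x).shape)    # torch.Size([1, 84])
```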
Further, the body composition positioning image output by the pre-trained positioning neural network is segmented. In specific implementation, the sagittal positioning image of the target tissue is input to the pre-trained segmentation neural network to obtain the segmentation results of the CT images for the muscles or muscle groups, the liver, adipose tissues of different regions and the like at the designated positions. The segmentation results mainly comprise the cross-sectional areas, area indexes, average CT values and muscle fat infiltration degrees of the different body components. Preferably, to accurately capture the variation of each parameter, time-dynamic curves of the four parameter values can also be included.
For ease of understanding, this embodiment provides a segmentation neural network based on a modified U-net, which may include an input layer, a forward segmentation sub-network, a convolution residual module, a reverse segmentation sub-network, a convolutional layer and a sigmoid layer connected in sequence; the forward segmentation sub-network comprises a fourth preset number of convolution residual module-pooling layer pairs, and the reverse segmentation sub-network comprises a fifth preset number of convolution residual module-inverse pooling layer pairs. A convolution residual module-pooling layer pair is a convolution residual module and a pooling layer connected in sequence, and a convolution residual module-inverse pooling layer pair is an inverse pooling layer and a convolution residual module connected in sequence. Fig. 3 shows the structure of a specific segmentation neural network; in this example, the fourth preset number is 4 and the fifth preset number is 4.
Specifically, the layers of the segmentation neural network are as follows:
the first layer is an input layer, and a single-channel CT two-dimensional slice image, namely a muscle positioning image output by a positioning neural network, is input.
The second layer is the convolution residual module ResBlock1, with an input channel number of 1 and an output channel number of 64, followed by the ReLU function activation layer (not shown). The convolution residual module ResBlock1 includes a convolution path, with a convolution kernel size of 3 x 3 and a shift step s of 1, and a residual path that performs element-wise addition on the basis of the convolution path.
The third layer is a pooling layer MaxP2, using a filter of 2 x 2, with a moving step s of 2.
The fourth layer is the convolution residual module ResBlock3, with 64 input channels and 128 output channels, followed by the PReLU function activation layer (not shown). The convolution residual module ResBlock3 comprises a convolution path and a residual path; the convolution kernel size of the convolution path is 3 x 3, the moving step s is 1, a 2-layer convolutional neural network is adopted, and the residual path performs element-wise addition on the basis of the convolution path.
The fifth layer is a pooling layer MaxP4, using a filter of 2 x 2, with a moving step s of 2.
The sixth layer is the convolution residual module ResBlock5, with 128 input channels and 256 output channels, followed by the ReLU function activation layer (not shown). The convolution residual module ResBlock5 comprises a convolution path and a residual path; the convolution kernel size of the convolution path is 3 x 3, the moving step s is 1, a 2-layer convolutional neural network is adopted, and the residual path performs element-wise addition on the basis of the convolution path.
The seventh layer is a pooling layer MaxP6, using a filter of 2 x 2, with a moving step s of 2.
The eighth layer is the convolution residual module ResBlock7, with an input channel number of 256 and an output channel number of 512, followed by a ReLU function activation layer (not shown). The convolution residual module ResBlock7 comprises a convolution path and a residual path; the convolution kernel size of the convolution path is 3 x 3, the moving step s is 1, a 3-layer convolutional neural network is adopted, and the residual path performs element-wise addition on the basis of the convolution path.
The ninth layer is a pooling layer MaxP8, using a filter of 2 x 2, with a moving step s of 2.
The tenth layer is the convolution residual module ResBlock9, with an input channel number of 512 and an output channel number of 1024, followed by a PReLU function activation layer (not shown). The convolution residual module ResBlock9 comprises a convolution path and a residual path; the convolution kernel size of the convolution path is 3 x 3, the moving step s is 1, a 3-layer convolutional neural network is adopted, and the residual path performs element-wise addition on the basis of the convolution path.
The eleventh layer is an inverse pooling layer MaxP10, with a filter of 2 x 2 and a moving step s of 2.
The twelfth layer is the full-scale connection module FCBlock11, with 1984 input channels and 320 output channels, followed by a ReLU function activation layer (not shown). The full-scale connection module FCBlock11 takes as input the outputs of the second, fourth, sixth, eighth and tenth layers; the convolution kernel size of the convolution path is 3 x 3, the moving step s is 1, a 3-layer convolutional neural network is adopted, and the residual path performs element-wise addition on the basis of the convolution path.
The thirteenth layer is an inverse pooling layer MaxP12, with a filter of 2 x 2 and a moving step s of 2.
The fourteenth layer is the full-scale connection module FCBlock13, with 1792 input channels and 320 output channels, followed by a ReLU function activation layer (not shown). The full-scale connection module FCBlock13 takes as input the outputs of the second, fourth, sixth, tenth and twelfth layers; the convolution kernel size of the convolution path is 3 x 3, the moving step s is 1, a 2-layer convolutional neural network is adopted, and the residual path performs element-wise addition on the basis of the convolution path.
The fifteenth layer is an inverse pooling layer MaxP14, with a filter of 2 x 2 and a moving step s of 2.
The sixteenth layer is the full-scale connection module FCBlock15, with 1856 input channels and 320 output channels, followed by a ReLU function activation layer (not shown). The full-scale connection module FCBlock15 takes as input the outputs of the second, fourth, tenth, twelfth and fourteenth layers; the convolution kernel size of the convolution path is 3 x 3, the moving step s is 1, a 2-layer convolutional neural network is adopted, and the residual path performs element-wise addition on the basis of the convolution path.
The seventeenth layer is an inverse pooling layer MaxP16, with a filter of 2 x 2 and a moving step s of 2.
The eighteenth layer is the full-scale connection module FCBlock17, with 2048 input channels and 320 output channels, followed by a ReLU function activation layer (not shown). The full-scale connection module FCBlock17 takes as input the outputs of the second, tenth, twelfth, fourteenth and sixteenth layers; the convolution kernel size of the convolution path is 3 x 3, the moving step s is 1, a 1-layer convolutional neural network is adopted, and the residual path performs element-wise addition on the basis of the convolution path.
The nineteenth layer is convolutional layer Conv18, the number of input channels is 320, the number of output channels is 10, the size of the convolutional kernel is 3 × 3, and the shift step s is 1.
The twentieth layer is a sigmoid layer, which normalizes the weights of the channels of convolutional layer Conv18 to output the above segmentation result.
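As an illustration of the convolution residual module used throughout this network, here is a minimal PyTorch sketch; the 1 x 1 projection on the residual path is an assumption, needed whenever the input and output channel counts differ (e.g. 1 to 64 in ResBlock1), since the description does not specify how the residual path matches channels.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Convolution residual module: a convolution path of 3x3 convolutions
    (stride 1) plus a residual path added element-wise. The 1x1 projection
    on the residual path is an assumption for when cin != cout."""

    def __init__(self, cin, cout, depth=2):
        super().__init__()
        convs, c = [], cin
        for _ in range(depth):                       # 2- or 3-layer conv path
            convs.append(nn.Conv2d(c, cout, 3, stride=1, padding=1))
            c = cout
        self.conv_path = nn.Sequential(*convs)
        self.residual = (nn.Identity() if cin == cout
                         else nn.Conv2d(cin, cout, kernel_size=1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.conv_path(x) + self.residual(x))

# Forward path of the segmentation network: ResBlock followed by 2x2 pooling.
block = ResBlock(1, 64, depth=2)                     # ResBlock1 of the second layer
pooled = nn.MaxPool2d(2, 2)(block(torch.randn(1, 1, 256, 256)))
print(pooled.shape)                                  # torch.Size([1, 64, 128, 128])
```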
Through the positioning neural network and the segmentation neural network, the contours of different body components are marked according to the differences in CT values of different human tissues, so that the areas and CT values of axial muscles, muscle groups, the liver and adipose tissues of different regions are calculated during body composition segmentation. Then, within the marked threshold range of muscle-tissue CT values, the sum of all adipose tissue regions inside the marked muscle contour is removed to obtain the pure muscle area, from which the percentage of muscle fat infiltration within the muscle segmentation region is calculated. These target parameters are used to accurately evaluate the quality of the body components.
In the quantitative evaluation of body composition, the cross-sectional area of the target region (muscle cross-sectional area) and the areas of the liver and of the adipose tissues of different regions are corrected by height to obtain the area index AI (area index). The muscle cross-sectional area is the area of the designated muscle or muscle group, the liver or the adipose tissues of different regions obtained by automatic segmentation of the software; the area value is then normalized by the height to compute the area index of the muscle or muscle group and of the adipose tissues of different regions: area index = area / height² (cm²/m²).
When evaluating muscle tissue quality in body composition, all adipose tissue areas within the muscle or muscle group segmentation result are removed by thresholding, and the muscle fat infiltration degree (MFI) is calculated as: MFI (%) = (sum of all removed adipose tissue areas within the segmented region / automatically segmented muscle cross-sectional area) × 100%.
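In code form, the two quantitative measures reduce to the following; the numeric values in the usage example are hypothetical.

```python
def area_index(area_cm2: float, height_m: float) -> float:
    """Area index AI = area / height^2, in cm^2/m^2."""
    return area_cm2 / height_m ** 2

def muscle_fat_infiltration(fat_area_cm2: float, muscle_area_cm2: float) -> float:
    """MFI (%) = removed adipose tissue area within the segmented muscle region,
    divided by the automatically segmented muscle cross-sectional area, x 100%."""
    return fat_area_cm2 / muscle_area_cm2 * 100.0

# Hypothetical values: a 150 cm^2 muscle cross-section for a 1.70 m subject,
# containing 12 cm^2 of thresholded fat.
print(area_index(150.0, 1.70))               # ~51.9 cm^2/m^2
print(muscle_fat_infiltration(12.0, 150.0))  # 8.0 %
```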
The body composition images used in this embodiment may include CT images of a designated number of different adipose tissues, muscles and muscle groups, and the liver at designated positions. In one embodiment, taking abdominal and lumbar CT as an example, 24 groups of data can be included, as follows: skeletal muscle; bilateral psoas major, left psoas major, right psoas major; bilateral posterior spine muscle group, left posterior spine muscle group, right posterior spine muscle group; bilateral quadratus lumborum, left quadratus lumborum, right quadratus lumborum; bilateral paraspinal muscle group, left paraspinal muscle group, right paraspinal muscle group; bilateral rectus abdominis, left rectus abdominis, right rectus abdominis; bilateral abdominal sidewall muscle group, left abdominal sidewall muscle group, right abdominal sidewall muscle group; the Liver, intraperitoneal adipose tissue (VAT), subcutaneous abdominal adipose tissue (SAT) and inter-muscular adipose tissue (IMAT). The area, area index, fat infiltration degree and other parameters of the above 24 groups of tissues and organs are calculated.
Among them, VAT is divided into intraperitoneal adipose tissue (IPAT) and retroperitoneal adipose tissue (RPAT), and SAT is divided into deep subcutaneous adipose tissue (DSAT) and superficial subcutaneous adipose tissue (SSAT).
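For automated labeling, the marked structures can be kept in a label map such as the following sketch; the numeric label ids are illustrative only, since the description does not fix a numbering, and most muscle-group entries are elided.

```python
# Hypothetical label ids for the structures listed above (illustrative only).
BODY_COMPOSITION_LABELS = {
    0: "background",
    1: "skeletal muscle",
    2: "psoas major (bilateral)", 3: "psoas major (left)", 4: "psoas major (right)",
    5: "posterior spine muscle group (bilateral)",
    # ... remaining muscle groups elided ...
    20: "Liver",
    21: "intraperitoneal adipose tissue (VAT)",
    22: "subcutaneous abdominal adipose tissue (SAT)",
    23: "inter-muscular adipose tissue (IMAT)",
}

# Sub-compartments of VAT and SAT named in the description.
VAT_SUBTYPES = ("IPAT", "RPAT")   # intraperitoneal / retroperitoneal
SAT_SUBTYPES = ("DSAT", "SSAT")   # deep / superficial subcutaneous
```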
After a continuous region is segmented, adipose tissue with lower CT values may be contained inside it. All adipose tissue regions inside the segmentation result are removed by thresholding, and the muscle fat infiltration degree (MFI) is calculated and expressed as a percentage: MFI (%) = (sum of all removed adipose tissue areas within the segmented region / automatically segmented muscle cross-sectional area) × 100%. For the IMAT area, morphological erosion is first performed on the muscle region using a structuring element with a radius of 3 pixels to remove artifacts at the segmentation edges; pixels are then thresholded in the -190 to -30 Hu range of the eroded muscle region. The inter-muscular fat CT value range is -190 to -30 Hu and the muscle CT value range is -29 to 150 Hu; the sum of the scattered inter-muscular adipose tissue areas of the limb is then calculated.
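A minimal sketch of this IMAT and MFI computation with numpy and scipy, assuming a 2D slice in Hounsfield units and a binary muscle mask; the pixel-area parameter, which converts pixel counts to cm², is an assumption of the calling context.

```python
import numpy as np
from scipy import ndimage

def disk(radius: int) -> np.ndarray:
    """Disk-shaped structuring element (radius in pixels)."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def imat_and_mfi(hu_slice: np.ndarray, muscle_mask: np.ndarray,
                 pixel_area_cm2: float):
    """IMAT area and MFI per the description: erode the muscle region with a
    radius-3 structuring element to remove edge artifacts, then threshold fat
    pixels at -190..-30 Hu inside the eroded region."""
    eroded = ndimage.binary_erosion(muscle_mask, structure=disk(3))
    fat = eroded & (hu_slice >= -190) & (hu_slice <= -30)

    # Sum the scattered fat regions (connected components) inside the muscle.
    labeled, n = ndimage.label(fat)
    imat_area = sum(np.sum(labeled == i) for i in range(1, n + 1)) * pixel_area_cm2

    # Ratio of scattered inter-muscular fat to the muscle cross-sectional area.
    muscle_area = np.sum(muscle_mask) * pixel_area_cm2
    mfi_percent = imat_area / muscle_area * 100.0
    return imat_area, mfi_percent
```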
In conclusion, the body composition parameter quantitative analysis method based on deep-learning body composition marking and feature analysis can effectively establish a body composition image database, obtain cut-off values of body composition for normal healthy people and realize the quantification and standardization of body composition parameters. Further, an imaging and clinical system can be established to perform systematic monitoring and risk assessment for cardiovascular diseases, metabolism-related diseases, tumors, sarcopenia and other serious and degenerative diseases, accurately estimate the prognosis of a patient, and intervene manually to a limited extent, improving patient survival and quality of life. Moreover, compared with manual or semi-manual operation, it achieves consistent marking standards, runs fast, does not fatigue and makes data collection easy, so that a large amount of scientific research data can be obtained quickly, yielding population body composition parameters and population standard parameters.
In view of the above method for marking a body composition CT image, an embodiment of the present invention further provides a system for marking a body composition CT image, as shown in fig. 4, the system mainly includes the following components:
a preprocessing module 402, configured to preprocess the acquired body composition CT images to obtain preprocessed image sequences of muscles or muscle groups, the liver, adipose tissues of different target regions and the like;
a positioning identification module 404, configured to position and identify the body composition image sequence on the CT sagittal view to obtain a sagittal positioning image of the target tissue, wherein the target tissue comprises muscles or muscle groups at designated positions, the liver, adipose tissues of different regions and the like;
a segmentation processing module 406, configured to segment the sagittal positioning image of the target tissue at the axial level to obtain segmentation results of the CT images of the muscles or muscle groups, the liver, the adipose tissues of different regions and the like at the designated positions; the segmentation result includes a plurality of body composition parameters.
By positioning, identifying and segmenting the processed body composition images, the body composition CT image marking system provided by the embodiment of the invention obtains tissue parameters including the cross-sectional areas, muscle area indexes, average CT values and muscle fat infiltration degrees of different body components, and can quickly and accurately measure the tissue segmentation parameters of a three-dimensional medical image, thereby improving the processing efficiency of medical images of muscles or muscle groups, the liver, adipose tissues of different regions and the like, and reducing labor cost.
In some embodiments, the preprocessing module 402 is configured to resample the human tissue CT image to obtain a resampled human tissue CT image, the human tissue CT image comprising images at a plurality of vertebral body levels and designated limb levels; to extract a region of interest from the resampled CT image to obtain region images of muscles or muscle groups, the liver, adipose tissues of different regions and the like, the target region image comprising the target region to be segmented; and to normalize the human tissue region image to obtain the human tissue image sequence.
In some embodiments, the positioning and identifying module 404 is further configured to position and identify the human tissue image sequence on the CT sagittal view based on a pre-trained positioning neural network to obtain a human tissue positioning image; the pre-trained positioning neural network comprises an input layer, a first specified number of first convolutional layers, a first maximum value pooling layer, a first specified number of second convolutional layers, a second maximum value pooling layer, a first specified number of third convolutional layers, a third maximum value pooling layer, a second specified number of fourth convolutional layers, a fourth maximum value pooling layer, a third specified number of fully-connected layers and an output layer which are connected in sequence.
In some embodiments, the first specified number is 2; the second specified number is 3; the third specified number is 2.
In some embodiments, the segmentation processing module 406 is further configured to input the sagittal positioning image of the target tissue into a pre-trained segmentation neural network to obtain segmentation results of the CT images of the muscles or muscle groups, the liver, the adipose tissues of different target regions and the like at the designated positions; the pre-trained segmentation neural network comprises an input layer, a forward segmentation sub-network, a reverse segmentation sub-network, a convolutional layer and a sigmoid layer which are connected in sequence; the forward segmentation sub-network comprises a fourth preset number of convolution residual module-pooling layer pairs, and the reverse segmentation sub-network includes a fifth preset number of full-scale connection module-inverse pooling layer pairs.
In some embodiments, the parameters include cross-sectional area of different body components, muscle area index, mean CT value, degree of fat infiltration of muscle.
In some embodiments, the muscle fat infiltration degree is characterized by an inter-muscular fat CT value range and a muscle CT value range. Morphological erosion is first performed on the muscle region using a structuring element with a radius of 3 pixels to remove artifacts at the segmentation edges; fat pixels are then extracted by thresholding the eroded muscle region in the range of -190 to -30 Hu. Inter-muscular fat CT value range: -190 to -30 Hu; muscle CT value range: -29 to 150 Hu.
The specific method is as follows: after the target muscle or muscle group is segmented as a continuous region, the area of the region is calculated; the scattered adipose tissue contained in the region is then segmented and extracted by thresholding (-190 to -30 Hu), the small area of each fat region is calculated, and these areas are summed to obtain the total scattered adipose tissue area. Finally, the ratio of the total inter-muscular scattered adipose tissue area to the area of the target muscle or muscle group is the muscle fat infiltration degree, expressed as a percentage.
The system provided by the embodiment of the present invention has the same implementation principle and technical effect as the foregoing method embodiment, and for the sake of brief description, no mention is made in the system embodiment, and reference may be made to the corresponding contents in the foregoing method embodiment.
An embodiment of the present invention provides an electronic device comprising a processor and a storage device; the storage device stores a computer program which, when executed by the processor, performs the method of any of the above embodiments.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, where the electronic device 100 comprises a processor 50, a memory 51, a bus 52, and a communication interface 53; the processor 50, the communication interface 53, and the memory 51 are connected through the bus 52, and the processor 50 is arranged to execute executable modules, such as computer programs, stored in the memory 51.
The memory 51 may include high-speed random access memory (RAM) and may also include non-volatile memory, such as at least one disk memory. The communication connection between the network elements of the system and at least one other network element is realized through at least one communication interface 53 (which may be wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, or the like.
The bus 52 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 5, but this does not indicate only one bus or one type of bus.
The memory 51 is used for storing a program, and the processor 50 executes the program after receiving an execution instruction. The method executed by the system defined by the flow disclosed in any embodiment of the invention may be applied to, or implemented by, the processor 50.
The processor 50 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated hardware logic circuits in the processor 50 or by instructions in the form of software. The processor 50 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules within a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 51, and the processor 50 reads the information in the memory 51 and completes the steps of the method in combination with its hardware.
An embodiment of the present invention further provides a computer-readable storage medium storing non-volatile program code executable by a processor; the computer program stored on the medium, when executed by the processor, performs the method described in the foregoing method embodiments.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing embodiments, and is not described herein again.
The computer program product of the readable storage medium provided in the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the method described in the foregoing method embodiments, and specific implementation may refer to the method embodiments, which are not described herein again.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A human body composition CT image marking method is characterized by comprising the following steps:
preprocessing the acquired human body component CT image to obtain a preprocessed target tissue organ image sequence;
positioning and identifying the image sequence of the target tissue organ in the CT sagittal position to obtain a sagittal position positioning image of the target tissue organ; wherein the target comprises muscles or muscle groups at a designated position, the liver, and adipose tissues of different target areas;
and performing segmentation processing on the sagittal positioning image of the target tissue organ at the axial level to obtain a segmentation result of the CT image of the tissue organ at the specified position.
2. The method for marking human body composition CT images according to claim 1, wherein the step of preprocessing the acquired CT images to obtain the image sequences of the muscles or muscle groups, the liver, and the adipose tissues of different target regions at the designated positions comprises:
resampling the CT image of the target tissue organ to obtain a resampled CT image of the tissue organ; the CT image comprises a plurality of images at vertebral body levels and designated limb levels;
extracting the region of interest of the resampled human body component CT image to obtain a target tissue organ region image; wherein the target tissue organ region image comprises a target region to be segmented;
and carrying out normalization processing on the target tissue organ region image to obtain an image sequence of the muscle or muscle group, the liver and the adipose tissues of different target regions at the specified position.
3. The method for marking human body composition CT images according to claim 1, wherein the step of positioning and identifying the target tissue organ image sequence in the CT sagittal position to obtain a sagittal position positioning image of the target region comprises:
positioning and identifying the image sequence of the target tissue organ in the CT sagittal position based on a pre-trained positioning neural network to obtain a sagittal position positioning image of the target tissue organ;
the pre-trained positioning neural network comprises an input layer, a first specified number of first convolutional layers, a first maximum value pooling layer, a first specified number of second convolutional layers, a second maximum value pooling layer, a second specified number of third convolutional layers, a third maximum value pooling layer, a second specified number of fourth convolutional layers, a fourth maximum value pooling layer, a second specified number of fifth convolutional layers, a fifth maximum value pooling layer, a first specified number of sixth convolutional layers, a sixth maximum value pooling layer, a third specified number of fully-connected layers and an output layer which are connected in sequence.
4. The method according to claim 3, wherein the first specified number is 2; the second specified number is 3; the third specified number is 2.
5. The method of claim 1, wherein the step of obtaining the segmentation result of the CT image of the target region at the specified position by performing segmentation processing on the sagittal positioning image of the target region at an axial level comprises:
inputting the sagittal positioning image of the target tissue organ into a pre-trained segmentation neural network to obtain a plurality of segmentation results of CT images of muscles or muscle groups, livers and adipose tissues of different target areas at the specified position;
the pre-trained segmented neural network comprises an input layer, a forward segmented sub-network, a reverse segmented sub-network, a convolutional layer and a sigmoid layer which are connected in sequence; wherein the forward segmentation subnetwork comprises a fourth preset number of convolution residual modules-pooling layer pairs; the inverse partitioning sub-network includes a fifth preset number of convolution residual module-inverse pooling layer pairs.
6. The method of claim 1, wherein the segmentation result comprises a plurality of target parameters, and the target parameters comprise: abdominal adipose tissue, abdominal subcutaneous adipose tissue, limb subcutaneous adipose tissue, sub-fascial adipose tissue of a limb, intermuscular adipose tissue of a limb, area of a muscle or muscle group of a target site, area index, CT value of a muscle or muscle group, muscle fat infiltration degree percentage, liver CT value, and liver relative CT ratio.
7. The method for marking human body composition CT images according to claim 6, wherein the degree of muscle fat infiltration is characterized by an inter-muscular fat CT value range and a muscle CT value range; first, morphological erosion with a structuring element of radius 3 pixels is applied to the muscle region to remove artifacts at the segmentation edge; fat pixels within the eroded muscle region are then extracted by thresholding in the range of -190 to -30 Hu; inter-muscular fat CT value range: -190 to -30 Hu; muscle CT value range: -29 to 150 Hu; the specific method comprises: segmenting the target muscle or muscle group as a continuous region and calculating the area of the region, then segmenting and extracting the scattered adipose tissue regions within the region by thresholding, calculating the small area of each fat region, and summing them to obtain the total scattered adipose tissue area; finally, the ratio of the total inter-muscular scattered adipose tissue area to the area of the target muscle or muscle group, expressed as a percentage, is the degree of muscle fat infiltration.
8. A human body composition CT image marking system, the system comprising:
the preprocessing module is used for preprocessing the acquired CT images of the target tissue organs to obtain preprocessed muscle or muscle group, liver and adipose tissue image sequences of different target areas at specified positions;
the positioning identification module is used for positioning and identifying the image sequence of the target tissue organ in the CT sagittal position to obtain sagittal position positioning images of the muscles or muscle groups, the liver, and the adipose tissues of different target areas at the specified positions; wherein the target tissue organ comprises muscles or muscle groups at specified positions, the liver, and adipose tissues or organs of different target areas;
the segmentation processing module is used for performing segmentation processing on the sagittal positioning image of the target tissue organ at the axial level to obtain the segmentation result of the CT image of the muscle or muscle group, the liver and the adipose tissues of different target areas at the specified position; the segmentation result includes a plurality of CT parameters.
9. An electronic device, comprising a processor and a memory, wherein the memory stores machine-executable instructions capable of being executed by the processor, and the processor executes the machine-executable instructions to implement the method for marking human body composition CT images according to any one of claims 1 to 7.
10. A storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the method for marking human body composition CT images according to any one of claims 1 to 7.
CN202111457128.7A 2021-12-01 2021-12-01 Human body composition CT image marking method, system, electronic device and storage medium Pending CN114119584A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111457128.7A CN114119584A (en) 2021-12-01 2021-12-01 Human body composition CT image marking method, system, electronic device and storage medium
CN202211108433.XA CN116228624A (en) 2021-12-01 2022-09-13 Multi-mode constitution component marking and analyzing method based on artificial intelligence technology


Publications (1)

Publication Number Publication Date
CN114119584A 2022-03-01

Family

ID=80369480

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111457128.7A Pending CN114119584A (en) 2021-12-01 2021-12-01 Human body composition CT image marking method, system, electronic device and storage medium
CN202211108433.XA Pending CN116228624A (en) 2021-12-01 2022-09-13 Multi-mode constitution component marking and analyzing method based on artificial intelligence technology


Country Status (1)

Country Link
CN (2) CN114119584A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170079603A1 (en) * 2015-09-22 2017-03-23 Siemens Healthcare Gmbh Visualizing different types of airway wall abnormalities
US20190076103A1 (en) * 2017-09-13 2019-03-14 LiteRay Medical, LLC Systems and methods for ultra low dose ct fluoroscopy
CN110310287A (en) * 2018-03-22 2019-10-08 北京连心医疗科技有限公司 It is neural network based to jeopardize the automatic delineation method of organ, equipment and storage medium
CN110544245A (en) * 2019-08-30 2019-12-06 北京推想科技有限公司 Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN110610181A (en) * 2019-09-06 2019-12-24 腾讯科技(深圳)有限公司 Medical image identification method and device, electronic equipment and storage medium
CN111311705A (en) * 2020-02-14 2020-06-19 广州柏视医疗科技有限公司 High-adaptability medical image multi-plane reconstruction method and system based on webgl
WO2021178632A1 (en) * 2020-03-04 2021-09-10 The Trustees Of The University Of Pennsylvania Deep learning network for the analysis of body tissue composition on body-torso-wide ct images
CN113409309A (en) * 2021-07-16 2021-09-17 北京积水潭医院 Muscle CT image delineation method, system, electronic equipment and machine storage medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187512A (en) * 2022-06-10 2022-10-14 珠海市人民医院 Hepatocellular carcinoma great vessel invasion risk prediction method, system, device and medium
CN115187512B (en) * 2022-06-10 2024-01-30 珠海市人民医院 Method, system, device and medium for predicting invasion risk of large blood vessel of hepatocellular carcinoma
CN116309385A (en) * 2023-02-27 2023-06-23 之江实验室 Abdominal fat and muscle tissue measurement method and system based on weak supervision learning
CN116309385B (en) * 2023-02-27 2023-10-10 之江实验室 Abdominal fat and muscle tissue measurement method and system based on weak supervision learning

Also Published As

Publication number Publication date
CN116228624A (en) 2023-06-06

Similar Documents

Publication Publication Date Title
CN113409309B (en) Muscle CT image sketching method, system, electronic equipment and machine storage medium
He et al. A review on automatic mammographic density and parenchymal segmentation
CN106815481B (en) Lifetime prediction method and device based on image omics
CN114119584A (en) Human body composition CT image marking method, system, electronic device and storage medium
CN100463655C (en) Image measuring device, method and image instrumentation system of glomerular filtration rate
JP5079008B2 (en) Lesion display measure related to cartilage structure and its automatic quantification
CN110610497B (en) Method for determining content of living pig carcass tissue based on CT image processing
CN103249358B (en) Medical image-processing apparatus
CN1502310A (en) Method and system for measuring disease relevant tissue changes
US8649843B2 (en) Automated calcium scoring of the aorta
CN104838422B (en) Image processing equipment and method
CN107368671A (en) System and method are supported in benign gastritis pathological diagnosis based on big data deep learning
EP3471054B1 (en) Method for determining at least one object feature of an object
WO2016060557A1 (en) Image analysis method supporting illness development prediction for a neoplasm in a human or animal body
JP2013526992A (en) Computer-based analysis of MRI images
Balasooriya et al. Intelligent brain hemorrhage diagnosis using artificial neural networks
CN111340825A (en) Method and system for generating mediastinal lymph node segmentation model
CN112614126A (en) Magnetic resonance image brain region dividing method, system and device based on machine learning
CN111329488A (en) Gait feature extraction and generation method and system for ankle ligament injury
CN110874860A (en) Target extraction method of symmetric supervision model based on mixed loss function
CN112164073A (en) Image three-dimensional tissue segmentation and determination method based on deep neural network
Köse et al. An automatic diagnosis method for the knee meniscus tears in MR images
CN110689550A (en) Efficient and automatic screening system and method for lumbar vertebra sagittal plane CT (computed tomography) images
EP3501399B1 (en) Method of quantification of visceral fat mass
Ang et al. An algorithm for automated separation of trabecular bone from variably thick cortices in high-resolution computed tomography data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20220301