CN111862070A - Method for measuring subcutaneous fat thickness based on CT image - Google Patents
- Publication number: CN111862070A
- Application number: CN202010741385.2A
- Authority: CN (China)
- Prior art keywords: image, matrix, pixel, value, points
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 7/0012 (Biomedical image inspection)
- G06N 3/045 (Combinations of networks)
- G06N 3/08 (Learning methods)
- G06T 5/40 (Image enhancement or restoration using histogram techniques)
- G06T 7/11 (Region-based segmentation)
- G06T 7/60 (Analysis of geometric attributes)
- G06T 2207/10081 (Computed x-ray tomography [CT])
- G06T 2207/20081 (Training; Learning)
- G06T 2207/20084 (Artificial neural networks [ANN])
- G06T 2207/20112 (Image segmentation details)
- G06T 2207/20132 (Image cropping)
- G06T 2207/30004 (Biomedical image processing)
Abstract
The invention discloses a method for measuring subcutaneous fat thickness based on CT images, which comprises the following steps: step 1: preprocessing a CT image to obtain a training set; step 2: carrying out image block clipping on the training set to obtain a data set; step 3: performing peripheral curve segmentation through deep learning; step 4: edge connection; step 5: performing a skeletonization operation on the peripheral curve and the inner curve of the CT image; step 6: acquiring pixel points between the peripheral curve and the inner curve of the CT image from 12 directions; step 7: converting the distances between the peripheral curve and the inner curve to obtain the subcutaneous fat thickness. The invention designs a method for measuring subcutaneous fat thickness that follows a segment-then-compute approach and can calculate the thickness of the subcutaneous fat more accurately.
Description
Technical Field
The invention relates to the technical field of abdominal measurement, in particular to a method for measuring subcutaneous fat thickness based on a CT image.
Background
Subcutaneous fat thickness is often associated with diseases such as diabetes and nephropathy and matters greatly to health, so a simple method of measuring it has real clinical significance.
Traditional methods for measuring the thickness of human subcutaneous fat suffer from low precision and high cost. The method proposed in this patent performs digital image processing with computer deep learning directly on CT images, so that the subcutaneous fat thickness is obtained very conveniently for doctors to use in disease diagnosis.
Disclosure of Invention
The invention aims to provide a method for measuring subcutaneous fat thickness based on a CT image: the CT image is preprocessed, the subcutaneous fat layer is segmented through deep learning, the segmentation result is skeletonized to obtain the peripheral and inner curves of the fat layer, the distance between the two curves is computed in 12 directions taken at 30-degree intervals starting from the horizontal, and the 12 values are averaged to give the subcutaneous fat thickness, thereby solving the problems noted in the background art.
In order to achieve the purpose, the invention provides the following technical scheme: a method for measuring subcutaneous fat thickness based on CT images, comprising the steps of:
step 1: preprocessing a CT image to obtain a training set;
step 2: carrying out image block clipping operation on the training set to obtain a data set;
step 3: performing peripheral curve segmentation through deep learning;
step 4: edge connection;
step 5: performing a skeletonization operation on the peripheral curve and the inner curve of the CT image;
step 6: acquiring pixel points between the peripheral curve and the inner curve of the CT image from 12 directions;
step 7: converting the distances between the peripheral curve and the inner curve to obtain the subcutaneous fat thickness.
Preferably, the step 1 comprises the following steps:
step 1.1: carrying out histogram equalization on the image by using a CLAHE algorithm;
step 1.2: adjusting the integral gray scale of the image by adopting gamma conversion;
step 1.3: normalized image pixel values are between 0 and 1;
the step 1.1 comprises the following steps: in the CLAHE algorithm, for a pixel neighborhood, contrast is obtained by calculating the slope of a transformation function, the slope of the transformation function is in direct proportion to the slope of a cumulative distribution function CDF of the pixel neighborhood, before the CDF of the pixel neighborhood is calculated, the CLAHE algorithm cuts a histogram according to a specified threshold value, and a cut part is uniformly distributed in the histogram.
Preferably, the step 1.2 comprises: the gamma transformation achieves gray-scale stretching by applying a nonlinear operation to the gray values, so that the gray values of the processed image and of the image before processing follow a nonlinear exponential relationship;
the gamma transformation formula is as follows:
I_out = c × I_in^γ
wherein I_out is the gray value of the processed image, I_in is the gray value of the image before processing, c is a gray scaling coefficient, and γ is the transformation index;
with input gray values of 0 to 255 and both input and output gray values normalized to between 0 and 1: when γ is smaller than 1, the gray values of the image are raised; when γ is larger than 1, the overall brightness of the image is lowered; when γ is equal to 1, the overall brightness is consistent with that of the original image; the γ value is taken as 0.5.
Preferably, the step 1.3 comprises: the normalization of the pixels is achieved by dividing all pixel values by the maximum pixel value, which is 255;
the calculation formula is as follows:
x'=(x-X_min)/(X_max-X_min);
where X' is the normalization result, X is the input pixel value, X _ min is the minimum value among all input image pixels, and X _ max is the maximum value among all input image pixels.
Preferably, the step 2 comprises: and for the training set, generating a group of random coordinates during clipping, and clipping the image blocks with the size of 48 x 48 by taking the random coordinates as a central point to obtain a data set.
Preferably, the step 3 comprises: adding an R2 module and an Attention augmentation module into the Unet;
the Unet structure is a symmetrical U-shaped structure overall and comprises 12 units F1-F12, the left side F1-F6 are contraction paths, and the right side F6-F12 are expansion paths;
the R2 module includes a residual learning unit and a recursive convolution:
a residual learning unit: setting an input x of a neural network unit and an expected output H(x), a residual mapping F(x) = H(x) − x is defined; since x is transmitted directly to the output, the target learned by the neural network unit is the residual mapping F(x) = H(x) − x; the residual learning unit consists of a series of convolution layers and a shortcut, the input x is transmitted to the output of the residual learning unit through the shortcut, and the output of the residual learning unit is z = F(x) + x;
and (3) recursive convolution: setting the input as x, performing continuous convolution on the input x, and adding the current input to the convolution output of each time to be used as the input of the next convolution;
the R2 module replaces the normal convolution in the residual learning unit with a recursive convolution;
the Attention augmentation is essentially a mapping from queries to a series of key-value pairs, and the implementation of the Attention augmentation module comprises the following steps:
the input feature map of size (w, h, c_in) is passed through a 1 × 1 convolution to output a QKV matrix of size (w, h, 2×d_k + d_v), wherein w, h and 2×d_k + d_v respectively represent the width, length and depth of the matrix, and c_in is the number of feature channels of the input image;
the QKV matrix is split along the depth channel to obtain three matrices Q, K and V, whose depth channel sizes are d_k, d_k and d_v respectively;
The method is characterized in that a multi-head attention mechanism structure is adopted, and Q, K, V three matrixes are respectively divided into N equal matrixes from a depth channel;
flattening the divided Q, K, V matrices to generate three matrices Flat_Q, Flat_K and Flat_V; that is, the depth channel of each matrix is kept unchanged while the length and width dimensions are compressed into one dimension, so that the Flat_Q and Flat_K matrices have size (w×h, d_k) and the Flat_V matrix has size (w×h, d_v);
the Attention augmentation performs matrix multiplication on Flat_Q and Flat_K to calculate a weight matrix, and adds the calculation of relative-position embeddings on this basis: weight calculations in the length and width directions of the Q matrix yield the relative position information of each point on the feature map;
splicing the attention feature matrix O with the normal convolution output along the depth direction to obtain the attention-augmented feature result, wherein the calculation formula of the attention feature matrix O is as follows:
O = Softmax((Q·K^T + S_H + S_W) / √d_k) · V
wherein Q is the query matrix of the input image data, K is the key matrix of the input image data, V is the value matrix of the input image data, S_H and S_W are respectively the relative-position logit matrices of the image along the length and width dimensions, and √d_k is the scale factor.
Preferably, the step 4 comprises: edge points are connected by analyzing the characteristics of the pixels in a small neighborhood of each point (x, y) and connecting all similar points according to a preset criterion, so that pixels sharing the same characteristics form an edge;
two properties of edge pixel similarity are determined:
- -intensity of gradient vector
|M(s,t)-M(x,y)|≤E;
- -direction of gradient vector
|α(s,t)-α(x,y)|≤A;
Wherein (x, y) represents a pixel point, (s, t) represents all neighborhood points centered on (x, y), E is a non-negative threshold, A is a non-negative angle threshold;
when (s, t) satisfies both the magnitude and direction criteria, (s, t) is connected to the pixel point (x, y), and this connecting operation is repeated at every position in the image;
when the center of the neighborhood moves from one pixel to another, the two connected pixels are recorded.
Preferably, the skeletonization treatment in the step 5 comprises:
step 5.1: loop over all boundary points, recording each boundary point as a center P1, with the 8 neighborhood points, taken clockwise around the center starting from directly above P1, recorded as P2, P3, …, P9;
boundary points that simultaneously satisfy the following are marked:
--2≤N(P1)≤6;
--S(P1)=1;
--P2*P4*P6=0;
--P4*P6*P8=0;
wherein N(P1) is the number of non-zero adjacent points of P1, and S(P1) is the number of 0→1 transitions in the pixel values taken in the order P2, P3, …, P9;
step 5.2: loop over all boundary points, recording each boundary point as a center P1, with the 8 neighborhood points, taken clockwise around the center starting from directly above P1, recorded as P2, P3, …, P9;
boundary points that simultaneously satisfy the following are marked:
--2≤N(P1)≤6;
--S(P1)=1;
--P2*P4*P8=0;
--P2*P6*P8=0;
after executing all boundary points in the image, setting marked points as background points;
step 5.3: steps 5.1 and 5.2 are taken together as one iteration and repeated until no point meets the boundary-point selection conditions of step 5.1 and step 5.2; the resulting image is the skeletonized skeleton map.
Preferably, the step 6 includes:
step 6.1: traversing rightward along the horizontal direction from the midpoint of the image to find a white pixel point, after which a counter starts counting;
step 6.2: after counting starts, 1 is added for each black pixel point until a white pixel point is met again; counting then stops and the total count value is stored;
step 6.3: rotating clockwise by 30 degrees and repeating step 6.1 and step 6.2, for a total of 12 runs, to obtain 12 total count values.
Preferably, the step 7 comprises: the distances of the inner and outer curves in 12 directions are obtained in the step 6, the average value is taken, the average value is multiplied by the length of each pixel point, and the subcutaneous fat thickness is obtained, wherein the specific calculation formula is as follows:
d = n × l
wherein n_1 to n_12 are the numbers of pixels between the inner and outer curves obtained in the 12 directions, n is the average of the pixel counts in the 12 directions, l is the length of each pixel point, and d is the subcutaneous fat thickness.
Compared with the prior art, the invention has the beneficial effects that:
the invention designs a method for measuring the thickness of subcutaneous fat, which is based on the thought of firstly segmenting and then calculating, adopts deep learning to accurately extract the curve characteristics of the inner layer and the outer layer of the fat, and then calculates the thickness of the subcutaneous fat more accurately in a mode of averaging through multi-direction calculation.
Drawings
FIG. 1 is a flow chart of fat thickness estimation provided by an embodiment of the present invention;
FIG. 2 is a block diagram of the AA R2U-Net model of the present invention;
FIG. 3 is a flowchart of an algorithm for calculating pixel points of a peripheral curve and an inner curve of a fat layer according to the present invention;
FIG. 4 is a schematic diagram of an algorithm for calculating pixel points of a peripheral curve and an inner curve of a fat layer according to the present invention;
FIG. 5 is a selected CT original image;
FIG. 6 is a graph of the segmentation results after the segmentation process;
FIG. 7 is a graph of the results after edge joining;
FIG. 8 is a graph of the effect of skeletonization;
fig. 9 is a graph of output gray level versus input gray level.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1 to 9, the present invention provides a technical solution: a method for measuring subcutaneous fat thickness based on CT images, the method comprising the steps of:
the method comprises the following steps: image pre-processing
The following operations are performed on the CT image in the preprocessing aspect:
1. carrying out histogram equalization on the image by using a CLAHE algorithm;
the CLAHE is an improvement of the AHE, and the improvement is mainly embodied in that the local contrast is limited, and the degree of noise amplification is effectively reduced; in the CLAHE algorithm, for a certain pixel neighborhood, the contrast is obtained by calculating the slope of a transformation function, and the slope is in direct proportion to the CDF slope of the neighborhood; CLAHE would crop the histogram according to a specified threshold and distribute the cropped portion evenly into the histogram before computing the CDF for that neighborhood.
2. Adjusting the integral gray scale of the image by adopting gamma conversion;
gamma transformation (gamma transform) is a common power-law transformation operation in image processing. The gamma conversion realizes the gray stretching by carrying out nonlinear operation on the gray value to ensure that the gray value of the processed image and the gray value of the image before processing have a nonlinear win-exponent relationship.
The gamma transformation formula is as follows:
I_out = c × I_in^γ
wherein I_in is the gray value of the image before processing, I_out is the gray value of the processed image, c is the gray scaling coefficient, and γ is the transformation index.
With input gray values of 0 to 255 and both input and output gray values normalized to between 0 and 1: when γ is smaller than 1, the gamma transformation raises the gray values and the image visibly brightens; when γ is larger than 1, it lowers the gray values and the image visibly darkens; when γ equals 1, the overall brightness is consistent with the original image. This method takes the γ value as 0.5.
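A minimal sketch of the transform with c = 1, operating on gray values scaled to [0, 1] as described; the sample array values are illustrative.

```python
import numpy as np

def gamma_transform(img, gamma=0.5, c=1.0):
    """I_out = c * I_in**gamma, applied to intensities scaled to [0, 1]."""
    x = img.astype(np.float64) / 255.0
    return c * np.power(x, gamma)

img = np.array([[0, 64, 128, 255]], dtype=np.uint8)
out = gamma_transform(img, gamma=0.5)   # gamma < 1 raises mid-range gray values
```

With gamma = 0.5 each mid-range value moves upward (e.g. 64/255 ≈ 0.251 maps to about 0.501), while 0 and 1 stay fixed.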
3. Normalized image pixel values are between 0 and 1;
first, it is to be appreciated that for most image data, the pixel values are integers between 0 and 255.
In deep neural network training, smaller weight values are generally used for fitting, and large integer input values can slow the process of model training. Therefore the pixels of the image are normalized so that each pixel value lies between 0 and 1; the normalized image remains valid and can still be viewed normally.
The normalization of the pixels may be achieved by dividing all pixel values by the maximum pixel value, which is typically 255. This method applies whether the picture is a single-channel monochrome image or a multi-channel color image, and the division by 255 is used regardless of whether the picture's actual maximum pixel value reaches 255.
The calculation formula is as follows:
x'=(x-X_min)/(X_max-X_min)
where X' is the normalization result, X is the input pixel value, X _ min is the minimum value among all input image pixels, and X _ max is the maximum value among all input image pixels.
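The min-max formula above can be sketched directly as a one-function NumPy illustration; the sample values are illustrative.

```python
import numpy as np

def minmax_normalize(img):
    """x' = (x - X_min) / (X_max - X_min), mapping all pixels into [0, 1]."""
    x = img.astype(np.float64)
    return (x - x.min()) / (x.max() - x.min())

ct = np.array([[0, 128, 255]], dtype=np.uint8)
norm = minmax_normalize(ct)
```

When the full 0–255 range is present, this is equivalent to dividing by 255.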
After the processing of the algorithm, the overall contrast of the original CT image is enhanced, better fitting of the experimental model training can be ensured, and a better segmentation effect is realized.
Step two: image block cropping
Because the quantity of original CT image data is insufficient, image block cropping is performed to expand the training data set. For the training set, a set of random coordinates is generated during cropping, and image blocks of size 48 × 48 are cut out with these coordinates as center points, yielding a large data set. The corresponding standard (label) images are cropped by the same method, ensuring that the cropped original images and cropped standard images correspond one to one and that subsequent model training is accurate.
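The paired random cropping might look like the following sketch; the 512 × 512 slice size, the patch count, and the function name are assumptions for illustration.

```python
import numpy as np

def random_patches(image, label, n_patches, size=48, seed=0):
    """Cut n_patches random size x size blocks around random centers; the
    same center is used for the CT slice and its standard (label) map so
    the pairs stay in one-to-one correspondence."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    half = size // 2
    pairs = []
    for _ in range(n_patches):
        cy = int(rng.integers(half, h - half))   # keep the block inside the image
        cx = int(rng.integers(half, w - half))
        window = (slice(cy - half, cy + half), slice(cx - half, cx + half))
        pairs.append((image[window], label[window]))
    return pairs

ct_slice = np.zeros((512, 512), dtype=np.float32)  # hypothetical slice size
mask = np.zeros((512, 512), dtype=np.uint8)
pairs = random_patches(ct_slice, mask, n_patches=5)
```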
Step three: peripheral curve segmentation by deep learning
The deep learning network can be chosen freely; one scheme is provided here, but it is not the only option. The more accurate the segmentation of the peripheral curve of the image, the more accurate the final measurement.
An R2 module and an Attention augmentation module are added to the Unet (namely the AA R2U-Net model). The Unet structure is overall a symmetrical U shape comprising 12 units (F1–F12): the left side F1–F6 form the contraction path, used for feature extraction; the right side F6–F12 form the expansion path, used to recover details and achieve accurate prediction.
Wherein the R2 module includes a residual learning unit and a recursive convolution.
(1) Residual learning unit: let the input of a neural network unit be x and the expected output be H(x), and define the residual mapping F(x) = H(x) − x. If x is passed directly to the output, the target the neural network unit must learn is the residual mapping F(x) = H(x) − x. The residual learning unit consists of a series of convolution layers and a shortcut; the input x is carried to the unit's output through the shortcut, giving the output z = F(x) + x.
(2) And (3) recursive convolution: assuming that the input is x, successive convolutions are performed on the input x, and the current input is added to the output of each convolution as the input for the next convolution.
The R2 module replaces the normal convolution in the residual learning unit with a recursive convolution.
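A single-channel NumPy sketch of the recurrent residual idea: a fixed averaging kernel stands in for learned convolution weights, one scalar channel replaces the real multi-channel feature maps, and the recursion depth t = 2 is illustrative.

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive single-channel 'same' convolution with zero padding."""
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, pad)
    out = np.zeros(x.shape, dtype=np.float64)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * kernel)
    return out

def recurrent_residual_block(x, kernel, t=2):
    """R2-style unit: t recursive convolutions, where the block input x is
    added back to each convolution output before the next step, followed
    by the residual shortcut z = F(x) + x."""
    h = x
    for _ in range(t):
        h = np.maximum(conv2d_same(h, kernel) + x, 0.0)  # ReLU(conv(h) + x)
    return h + x                                         # shortcut connection

x = np.random.default_rng(1).standard_normal((8, 8))
kernel = np.ones((3, 3)) / 9.0   # stand-in for learned weights
y = recurrent_residual_block(x, kernel, t=2)
```

The recursion reuses the same kernel at each step, which is why the R2 design enlarges the effective receptive field without adding parameters.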
The Attention augmentation essentially obtains a mapping from queries to a series of key-value pairs. First, the input feature map of size (w, h, c_in) passes through a 1 × 1 convolution that outputs a QKV matrix of size (w, h, 2×d_k + d_v), wherein w, h and 2×d_k + d_v respectively denote the width, length and depth of the matrix and c_in is the number of feature channels of the input image. The QKV matrix is then split along the depth channel into three matrices Q, K and V with depth channel sizes d_k, d_k and d_v. A multi-head attention structure is adopted: the Q, K and V matrices are each divided along the depth channel into N equal matrices for subsequent calculation; the multi-head mechanism expands the original single attention computation into several smaller, independent parallel computations, so the model can learn feature information in different subspaces.
The divided Q, K and V matrices are flattened to produce the three matrices Flat_Q, Flat_K and Flat_V; that is, the depth channel is kept unchanged while the length and width dimensions are compressed into one dimension, so Flat_Q and Flat_K have size (w×h, d_k) and Flat_V has size (w×h, d_v). The Attention augmentation retains the original Self-Attention computation: Flat_Q and Flat_K are matrix-multiplied to obtain the weight matrix, and on this basis relative-position embeddings are added. Weight calculations along the length and width directions of the Q matrix give the relative position of each point on the feature map, preventing shifts of feature position from degrading the final model.
Splicing (concat) the Attention feature matrix O and the normal convolution process according to the depth direction to obtain the result of the Attention augmentation;
the formula for the calculation of the attention characterization matrix O is as follows:
wherein Q is a query matrix of the input image data, K is a target matrix of the input image data, V is a numerical matrix of the input image data, SHAnd SWRespectively a logarithmic matrix of the relative position of the image along the length and width dimensions,on a scale.
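The attention computation over the flattened positions can be sketched as follows. Random matrices stand in for the learned Q, K, V projections and the relative-position logits S_H and S_W, and the shapes follow the (w×h, d_k) / (w×h, d_v) sizes given above.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_augmented_features(Q, K, V, S_H, S_W):
    """O = Softmax((Q K^T + S_H + S_W) / sqrt(d_k)) V: scaled dot-product
    attention over the w*h flattened positions, with relative-position
    logits S_H and S_W added before the softmax."""
    d_k = Q.shape[-1]
    logits = (Q @ K.T + S_H + S_W) / np.sqrt(d_k)
    return softmax(logits, axis=-1) @ V

# toy feature map: w = h = 4 (so w*h = 16 positions), d_k = d_v = 8
rng = np.random.default_rng(0)
wh, d_k, d_v = 16, 8, 8
Q = rng.standard_normal((wh, d_k))
K = rng.standard_normal((wh, d_k))
V = rng.standard_normal((wh, d_v))
S_H = 0.1 * rng.standard_normal((wh, wh))  # stand-ins for the learned
S_W = 0.1 * rng.standard_normal((wh, wh))  # relative-position logits
O = attention_augmented_features(Q, K, V, S_H, S_W)
```

The output O has one d_v-dimensional row per spatial position; in the full module it is reshaped back to (w, h, d_v) and concatenated with the normal convolution output along the depth channel.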
Step four: edge connection
Local processing is one of the simplest ways to connect edge points: the characteristics of the pixels in a small neighborhood of each point (x, y) are analyzed, and all similar points are connected according to a preset criterion, so that pixels sharing the same characteristics form an edge.
Two main properties that determine edge pixel similarity:
(1) strength of gradient vector
|M(s,t)-M(x,y)|≤E
(2) Direction of gradient vector
|α(s,t)-α(x,y)|≤A
Here (x, y) denotes a pixel and (s, t) ranges over the neighborhood points centered on (x, y); E is a non-negative magnitude threshold and A is a non-negative angle threshold.
If (s, t) satisfies both the magnitude and direction criteria, (s, t) is connected to (x, y), and this operation is repeated at every pixel position in the image. When the center of the neighborhood moves from one pixel to another, the two connected points must be recorded.
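The two similarity tests can be sketched as a neighborhood predicate; the threshold values and the 3 × 3 demonstration arrays are illustrative.

```python
import numpy as np

def link_edges(M, alpha, x, y, E=0.5, A=10.0):
    """Return the 8-neighborhood points (s, t) of (x, y) whose gradient
    magnitude M and gradient direction alpha (in degrees) both lie within
    the thresholds E and A of the center pixel."""
    linked = []
    for s in (x - 1, x, x + 1):
        for t in (y - 1, y, y + 1):
            if (s, t) == (x, y):
                continue
            if 0 <= s < M.shape[0] and 0 <= t < M.shape[1]:
                if (abs(M[s, t] - M[x, y]) <= E
                        and abs(alpha[s, t] - alpha[x, y]) <= A):
                    linked.append((s, t))
    return linked

M = np.full((3, 3), 10.0)
M[0, 2] = 5.0            # magnitude differs by more than E
alpha = np.zeros((3, 3))
alpha[2, 0] = 45.0       # direction differs by more than A
linked = link_edges(M, alpha, 1, 1)
```

Of the eight neighbors of the center pixel, the two whose magnitude or direction falls outside the thresholds are excluded.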
Step five: skeletonization
The skeletonization process reduces curve width to one pixel without changing the core properties of the original curves: direction, shape, and connectivity. Performing the skeletonization operation on the peripheral curve and inner curve of the CT image greatly reduces the complexity of the original image and makes the subsequent pixel counting accurate and convenient. The specific steps are as follows:
all boundary points are cycled, for each boundary point, the boundary point is marked as a center P1, 8 points in the neighborhood are marked as P2 and P3.. P9 from the upper part of P1 clockwise around the center point, and the boundary points which simultaneously satisfy the following are marked firstly: (1) n (P1) is more than or equal to 2 and less than or equal to 6; (2) s (P1) ═ 1; (3) P2P 4P 6 ═ 0; (4) P4P 6P 8 ═ 0; where N (P1) is the number of non-zero neighbors of P1 and S (P1) is the number of times the value of the pixel points changes from 0 to 1 after sorting according to P2, P3.. P9.
As in the first step, only the previous condition 3 is changed to: P2P 4P 8 ═ 0; condition 4 is changed to P2P 6P 8 to 0, and after all boundary points in the image have been executed, the marked points are set as background points.
Finally, the two steps together form one iteration, repeated until no point satisfies the conditions; the resulting image is the skeletonized skeleton map.
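The two sub-iterations described above are the classical Zhang-Suen thinning scheme; a compact numpy sketch (function names are illustrative):

```python
import numpy as np

def zhang_suen_thin(img):
    """Zhang-Suen thinning as described above: two sub-iterations per pass,
    repeated until no pixel is removed. `img` is a binary 0/1 numpy array."""
    img = img.copy()
    def neighbours(y, x):
        # P2..P9, clockwise starting from the pixel directly above P1
        return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    P = neighbours(y, x)
                    N = sum(P)                                  # N(P1)
                    S = sum(P[i] == 0 and P[(i + 1) % 8] == 1   # S(P1): 0->1 transitions
                            for i in range(8))
                    if step == 0:
                        c3 = P[0] * P[2] * P[4] == 0            # P2*P4*P6 == 0
                        c4 = P[2] * P[4] * P[6] == 0            # P4*P6*P8 == 0
                    else:
                        c3 = P[0] * P[2] * P[6] == 0            # P2*P4*P8 == 0
                        c4 = P[0] * P[4] * P[6] == 0            # P2*P6*P8 == 0
                    if 2 <= N <= 6 and S == 1 and c3 and c4:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0                                   # marked points -> background
            changed = changed or bool(to_delete)
    return img
```

Marked points are collected first and deleted only after the whole image has been scanned, matching the "after all boundary points have been executed" wording above.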
Step six: obtaining pixel points between two curves from 12 directions
Taking the middle point of the image as the origin, traverse to the right along the horizontal direction until a white pixel point is found; the counter then starts, adding 1 for each black pixel point until a white pixel point is met again, at which point counting stops and the total count value is stored. The direction is then rotated clockwise by 30 degrees and the operation repeated; after 12 passes in total, 12 total count values are obtained.
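A sketch of the 12-ray counting, assuming a binary image whose curves are already thinned to one pixel: each ray is sampled at unit steps from the image center, and the run of black pixels between the first two white hits is counted (all names are illustrative):

```python
import math
import numpy as np

def count_gap_pixels(img, num_dirs=12):
    """Walk `num_dirs` rays (30 degrees apart) outwards from the image centre;
    on each ray, count the black pixels between the first and second white hit."""
    h, w = img.shape
    cy, cx = h // 2, w // 2
    counts = []
    for k in range(num_dirs):
        theta = math.radians(30 * k)
        samples = []
        for r in range(1, max(h, w)):
            y = int(round(cy + r * math.sin(theta)))
            x = int(round(cx + r * math.cos(theta)))
            if not (0 <= y < h and 0 <= x < w):
                break                                  # ray left the image
            samples.append(img[y, x])
        i = 0
        while i < len(samples) and samples[i] == 0:    # reach the first white pixel
            i += 1
        while i < len(samples) and samples[i] == 1:    # skip the (thin) curve itself
            i += 1
        n = 0
        while i < len(samples) and samples[i] == 0:    # black pixels between curves
            n += 1
            i += 1
        counts.append(n)
    return counts
```

Skipping the whole white run (rather than a single pixel) makes the count robust when rounding lands two consecutive samples on the same curve.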
Step seven: conversion to subcutaneous fat thickness
Step six yields the distances between the inner and outer curves in the 12 directions; the average value is taken and multiplied by the length of one pixel point to finally obtain the subcutaneous fat thickness. The specific calculation formula is as follows:
d=n×l
wherein n_1 to n_12 are the numbers of pixels between the inner and outer curves obtained in the 12 directions; n is their average; l is the length of one pixel point; d is the subcutaneous fat thickness.
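The conversion is a one-line computation; a hedged sketch with an illustrative function name:

```python
def fat_thickness(counts, pixel_len):
    """d = n * l, where n is the mean of the 12 per-direction pixel counts
    and l is the physical length of one pixel (e.g. mm per pixel)."""
    n_bar = sum(counts) / len(counts)
    return n_bar * pixel_len

# example: 12 identical counts of 10 pixels at 0.5 mm/pixel -> 5.0 mm
```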
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (10)
1. A method for measuring subcutaneous fat thickness based on CT images is characterized by comprising the following steps:
step 1: preprocessing a CT image to obtain a training set;
step 2: carrying out an image block cropping operation on the training set to obtain a data set;
and step 3: performing peripheral curve segmentation through deep learning;
and 4, step 4: edge connection;
and 5: performing skeletonization operation on the peripheral curve and the inner curve of the CT image;
step 6: acquiring pixel points between a peripheral curve and an inner curve of the CT image from 12 directions;
and 7: and converting the peripheral curve and the inner curve to obtain the subcutaneous fat thickness.
2. The method for measuring subcutaneous fat thickness based on CT image as claimed in claim 1, wherein said step 1 comprises the steps of:
step 1.1: carrying out histogram equalization on the image by using a CLAHE algorithm;
step 1.2: adjusting the integral gray scale of the image by adopting gamma conversion;
step 1.3: normalizing the image pixel values to between 0 and 1;
the step 1.1 comprises the following steps: in the CLAHE algorithm, for a pixel neighborhood, contrast is obtained by calculating the slope of a transformation function, the slope of the transformation function is in direct proportion to the slope of a cumulative distribution function CDF of the pixel neighborhood, before the CDF of the pixel neighborhood is calculated, the CLAHE algorithm cuts a histogram according to a specified threshold value, and a cut part is uniformly distributed in the histogram.
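Real CLAHE is tile-based, with bilinear interpolation between tile mappings; the clipping-and-redistribution idea described in the claim can be sketched on a single region as follows (illustrative numpy code, not the implementation used by the patent):

```python
import numpy as np

def clipped_hist_equalize(img, clip_limit=40):
    """Contrast-limited equalization on one region (CLAHE applies this per
    tile): clip the histogram at `clip_limit`, spread the clipped mass
    uniformly over all bins, then map intensities through the resulting CDF."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    excess = np.sum(np.maximum(hist - clip_limit, 0))
    hist = np.minimum(hist, clip_limit) + excess // 256   # redistribute uniformly
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())     # normalise CDF to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]                                       # apply the mapping
```

Clipping caps the histogram peaks, which caps the CDF slope and therefore the local contrast gain, as the claim describes.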
3. The method for measuring subcutaneous fat thickness based on CT image as claimed in claim 2, wherein said step 1.2 includes: the gamma transformation achieves gray-level stretching by applying a nonlinear operation to the gray values, so that the gray value of the processed image has a nonlinear exponential relationship with the gray value of the image before processing;
the gamma transformation formula is as follows:
I_OUT = c · I_IN^γ
wherein I_OUT is the gray value of the processed image, I_IN is the gray value of the image before processing, c is a gray scaling coefficient, and γ is the transformation exponent;
With input gray values ranging from 0 to 255 and both input and output gray values normalized to between 0 and 1: when γ is smaller than 1, the gray values of the image are raised; when γ is larger than 1, the overall brightness of the image is lowered; when γ equals 1, the overall brightness is consistent with the original image. Here the γ value is taken as 0.5.
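A sketch of the gamma transformation on an 8-bit image; the rounding and clipping details are my own assumptions:

```python
import numpy as np

def gamma_transform(img, c=1.0, gamma=0.5):
    """I_out = c * I_in**gamma on intensities normalised to [0, 1];
    gamma < 1 brightens, gamma > 1 darkens, gamma = 1 leaves the image as-is."""
    norm = img.astype(float) / 255.0
    out = c * np.power(norm, gamma)
    return np.clip(np.round(out * 255.0), 0, 255).astype(np.uint8)
```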
4. The method for measuring subcutaneous fat thickness based on CT image as claimed in claim 2, wherein said step 1.3 comprises: the normalization of the pixels is achieved by dividing all pixel values by the maximum pixel value, which is 255;
the calculation formula is as follows:
x'=(x-X_min)/(X_max-X_min);
where X' is the normalization result, X is the input pixel value, X _ min is the minimum value among all input image pixels, and X _ max is the maximum value among all input image pixels.
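The min-max formula in numpy (illustrative; for 8-bit input with X_min = 0 and X_max = 255 it reduces to dividing every pixel by 255):

```python
import numpy as np

def min_max_normalize(pixels):
    """x' = (x - X_min) / (X_max - X_min), mapping the input range to [0, 1]."""
    x_min, x_max = float(pixels.min()), float(pixels.max())
    return (pixels - x_min) / (x_max - x_min)
```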
5. The method for measuring subcutaneous fat thickness based on CT image as claimed in claim 1, wherein said step 2 comprises: for the training set, a group of random coordinates is generated during cropping, and image blocks of size 48 × 48 are cropped with the random coordinates as center points to obtain the data set.
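The random 48 × 48 patch sampling might look like this; boundary handling and the RNG seed are my own assumptions:

```python
import numpy as np

def random_patches(image, patch=48, n=10, rng=None):
    """Cut `n` random patch x patch blocks: draw a random centre, then crop
    the square window around it (a sketch of the patch-sampling step)."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = image.shape[:2]
    half = patch // 2
    out = []
    for _ in range(n):
        cy = rng.integers(half, h - half)   # keep the window fully inside
        cx = rng.integers(half, w - half)
        out.append(image[cy - half:cy + half, cx - half:cx + half])
    return out
```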
6. The method for measuring subcutaneous fat thickness based on CT image as claimed in claim 1, wherein the step 3 comprises: adding an R2 module and an Attention augmentation module into the Unet;
the Unet structure is a symmetrical U-shaped structure overall and comprises 12 units F1-F12, the left side F1-F6 are contraction paths, and the right side F6-F12 are expansion paths;
the R2 module includes a residual learning unit and a recursive convolution:
a residual learning unit: given an input x to a neural network unit and an expected output H(x), a residual mapping F(x) = H(x) - x is defined and x is passed directly to the output, so that the target learned by the neural network unit is the residual mapping F(x) = H(x) - x; the residual learning unit consists of a series of convolution layers and a shortcut, the input x is passed through the shortcut to the output of the residual learning unit, and the output of the residual learning unit is z = F(x) + x;
and (3) recursive convolution: setting the input as x, performing continuous convolution on the input x, and adding the current input to the convolution output of each time to be used as the input of the next convolution;
the R2 module replaces the normal convolution in the residual learning unit with a recursive convolution;
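The recursion and the shortcut can be illustrated in one dimension with plain numpy, as a stand-in for the 2-D convolutions of the actual R2 unit (the names and the `steps` parameter are illustrative):

```python
import numpy as np

def recurrent_conv(x, kernel, steps=2):
    """Recursive convolution (1-D numpy stand-in for the R2 unit's conv):
    each step convolves, then adds the original input before the next step."""
    y = np.convolve(x, kernel, mode="same")
    for _ in range(steps - 1):
        y = np.convolve(y + x, kernel, mode="same")   # feed the input back in
    return y

def r2_unit(x, kernel, steps=2):
    """Residual learning: z = F(x) + x, with F realised by the recursive conv."""
    return recurrent_conv(x, kernel, steps) + x       # shortcut connection
```

With the identity kernel `[1.0]` and `steps=2`, F(x) = 2x, so the unit outputs 3x; this makes the shortcut and the recursion easy to verify by hand.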
the attribute authority is a mapping for obtaining a series of key-value pairs through query, and the implementation steps of the attribute authority module comprise the following steps:
a feature map of size (w, h, c_in) is passed through a 1 × 1 convolution to output a QKV matrix of size (w, h, 2·d_k + d_v), wherein w, h and 2·d_k + d_v respectively represent the width, height and depth of the matrix, and c_in is the number of feature channels of the input image;
the QKV matrix is split along the depth channel into three matrices Q, K and V, whose depth channel sizes are d_k, d_k and d_v respectively;
a multi-head attention mechanism structure is adopted, and the Q, K and V matrices are each divided into N equal matrices along the depth channel;
the divided Q, K and V matrices are flattened to generate the three matrices Flat_Q, Flat_K and Flat_V, i.e. the depth channel of each matrix is kept unchanged while its spatial dimensions are compressed to one dimension; the two matrices Flat_Q and Flat_K have size (w × h, d_k) and the Flat_V matrix has size (w × h, d_v);
the Attention augmentation multiplies the matrices Flat_Q and Flat_K to compute a weight matrix, and adds a relative-position embedding on this basis: weight calculations in the height and width directions are performed on the Q matrix, so that each point on the feature map obtains relative position information;
the Attention feature matrix O is spliced with the output of a normal convolution along the depth direction to obtain the Attention augmentation result, and the Attention feature matrix is calculated as follows:
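The formula itself does not survive in this text; a single-head numpy sketch of the standard computation O = softmax(Q·Kᵀ/√d_k)·V, with the relative-position term omitted (all names are illustrative):

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_feature(feat, Wqkv, d_k, d_v):
    """Single-head sketch of the attention branch: a 1x1 convolution is a
    matrix multiply per pixel, so project the flattened (w*h, c_in) feature
    map to Q, K, V and compute O = softmax(Q K^T / sqrt(d_k)) V."""
    qkv = feat @ Wqkv                        # (w*h, 2*d_k + d_v)
    Q, K, V = np.split(qkv, [d_k, 2 * d_k], axis=1)
    weights = softmax(Q @ K.T / np.sqrt(d_k))
    return weights @ V                       # attention feature matrix O
```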
7. The method for measuring subcutaneous fat thickness based on CT image as claimed in claim 1, wherein the step 4 comprises: connecting edge points by analyzing the characteristics of the pixels in a small neighborhood of each point (x, y) and connecting all similar points according to a preset criterion, forming an edge of pixels that share the same characteristics;
the following two properties determine edge pixel similarity:
- -intensity of gradient vector
|M(s,t)-M(x,y)|≤E;
- -direction of gradient vector
|α(s,t)-α(x,y)|≤A;
Wherein (x, y) represents a pixel point, (s, t) represents all neighborhood points centered on (x, y), E is a non-negative threshold, A is a non-negative angle threshold;
when (s, t) satisfies both the magnitude criterion and the direction criterion, (s, t) is connected to (x, y), and the connecting operation is repeated at every pixel position in the image;
when the center of the neighborhood is shifted from one pixel to another, both pixels are recorded.
8. The method for measuring subcutaneous fat thickness based on CT image as claimed in claim 1, wherein the skeletonization process in the step 5 comprises:
step 5.1: looping over all boundary points; each boundary point is taken as a center P1, and the 8 points of its neighborhood are labeled P2, P3, ..., P9 clockwise starting from the point above P1;
the following boundary points are marked while satisfying:
--2≤N(P1)≤6;
--S(P1)=1;
--P2*P4*P6=0;
--P4*P6*P8=0;
wherein N(P1) is the number of non-zero neighbors of P1, and S(P1) is the number of 0-to-1 transitions in the ordered sequence P2, P3, ..., P9;
step 5.2: looping over all boundary points; each boundary point is taken as a center P1, and the 8 points of its neighborhood are labeled P2, P3, ..., P9 clockwise starting from the point above P1;
the following boundary points are marked while satisfying:
--2≤N(P1)≤6;
--S(P1)=1;
--P2*P4*P8=0;
--P2*P6*P8=0;
after executing all boundary points in the image, setting marked points as background points;
step 5.3: and 5.1 and 5.2 are used as an iteration until no point meets the selection condition of the boundary point in the step 5.1 and the step 5.2, and the obtained image is the skeletonized skeleton map.
9. The method for measuring subcutaneous fat thickness based on CT image as claimed in claim 1, wherein the step 6 comprises:
step 6.1: taking the middle point of the image as the origin, traversing to the right along the horizontal direction until a white pixel point is found, whereupon the counter starts counting;
step 6.2: after counting starts, adding 1 for each black pixel point until a white pixel point is met again, then stopping counting and storing the total count value;
step 6.3: rotating clockwise by 30 degrees and repeating step 6.1 and step 6.2, for a total of 12 passes, to obtain 12 total count values.
10. The method for measuring subcutaneous fat thickness based on CT image as claimed in claim 1, wherein the step 7 comprises: the distances between the inner and outer curves in the 12 directions are obtained in step 6, their average value is taken, and the average value is multiplied by the length of one pixel point to obtain the subcutaneous fat thickness, the specific calculation formula being:
d=n×l
wherein n_1 to n_12 are the numbers of pixels between the inner and outer curves obtained in the 12 directions, n is the average of the numbers of pixels between the inner and outer curves in the 12 directions, l is the length of one pixel point, and d is the subcutaneous fat thickness.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010741385.2A CN111862070A (en) | 2020-07-29 | 2020-07-29 | Method for measuring subcutaneous fat thickness based on CT image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111862070A true CN111862070A (en) | 2020-10-30 |
Family
ID=72948560
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107203701A (en) * | 2017-07-24 | 2017-09-26 | 广东工业大学 | A kind of measuring method of fat thickness, apparatus and system |
CN109685770A (en) * | 2018-12-05 | 2019-04-26 | 合肥奥比斯科技有限公司 | Retinal vessel curvature determines method |
EP3501399A1 (en) * | 2017-12-21 | 2019-06-26 | Cetir Centre Medic, S.A. | Method of quantification of visceral fat mass |
CN110415246A (en) * | 2019-08-06 | 2019-11-05 | 东北大学 | A kind of analysis method of stomach fat ingredient |
CN110853049A (en) * | 2019-10-17 | 2020-02-28 | 上海工程技术大学 | Abdominal ultrasonic image segmentation method |
Non-Patent Citations (3)
Title |
---|
MD ZAHANGIR ALOM et al.: "Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation", ARXIV:1802.06955, pages 1 - 12 *
ZHANG Kun: "Applied research on accurate detection algorithms for large-scale targets in a large field of view", Chinese Journal of Scientific Instrument, vol. 41, no. 4, pages 191 - 199 *
CHEN Hongling, ZHANG Jianxun: "Deep-learning-based detection of pig fat content in B-mode ultrasound", Journal of Chongqing University of Technology, vol. 33, no. 6, pages 112 - 116 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||