CN111047605B - Construction method and segmentation method of vertebra CT segmentation network model - Google Patents


Info

Publication number: CN111047605B
Application number: CN201911234498.7A
Authority: CN (China)
Original language: Chinese (zh)
Other versions: CN111047605A
Legal status: Active (granted)
Inventors: 周明全, 闫峰, 田丰源, 杨嘉楠, 耿国华
Assignee: Northwest University
Application filed by Northwest University

Classifications

    • G06T7/11 Region-based segmentation
    • G06N3/045 Neural networks — combinations of networks
    • G06N3/08 Neural networks — learning methods
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing
    • G06T2200/32 Indexing scheme involving image mosaicing
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30012 Spine; Backbone

Abstract

The invention discloses a construction method and a segmentation method for a vertebra CT segmentation network model. The method comprises the following steps: first, a CT data set is preprocessed; an Attention-Net network is trained to obtain a spine position-pixel distribution model, and a DenseUnet network is trained to obtain a model that predicts prior information; the DenseUnet network is then trained on spliced data to obtain a multi-channel spliced DenseUnet network model; finally, the three trained models are used together to segment CT data. The invention addresses the small quantity of vertebra CT data, low segmentation efficiency, the need for manual intervention, the excessive parameters required by the traditional DenseUnet, and similar problems, while also improving segmentation accuracy.

Description

Construction method and segmentation method of vertebra CT segmentation network model
Technical Field
The invention belongs to the field of medical image segmentation, and particularly relates to a construction method and a segmentation method of a vertebra CT segmentation network model.
Background
Spinal disease is trending toward younger patients, and current medical practice usually relies on CT images to diagnose it. CT images have high resolution, so the lesion position can be determined well; imaging is fast, meeting the need for timeliness; and the tissue structure is displayed clearly. Generally, a doctor consults a CT image and makes a diagnosis from his or her own experience. However, diagnosing from CT images on experience alone may misplace the lesion position and can be overly subjective. To prevent this, scientists have proposed methods of computer-aided diagnosis and treatment.
In computer-aided diagnosis, the region of interest in a spine CT image is segmented, and through a series of three-dimensional reconstruction and visualization techniques the tissue structure of the spine is displayed as a three-dimensional model, making it convenient for a doctor to observe the lesion position and formulate a treatment plan. (The region of interest is the region that needs to be extracted medically; the corresponding non-interest region is the background. In a segmented image, which is a binary file, the spine part has nonzero gray values and is called the foreground, while the black part is the background. In the spine diagnosis and treatment field, the region of interest is the spine part of the CT raw data.)
Because the spine has a complex structure, because noise easily arises during CT imaging, and because the spine boundary may be indistinct, the segmentation of the spine is the key problem.
Conventional image segmentation algorithms use thresholds, regions, edges, and the like. However, these methods share the drawback of being sensitive to noise. Although later improved methods solved the noise-sensitivity problem, they still require manual intervention, segment slowly, and cannot achieve automatic segmentation.
Disclosure of Invention
Aiming at the defects or shortcomings of the prior art, the invention provides a construction method of a spine CT segmentation network model.
The construction method provided by the invention comprises the following steps:
(1) Preprocessing CT original data to obtain a patch data set, wherein the preprocessing comprises the steps of cutting the CT original data to obtain a plurality of two-dimensional images to form the patch data set; obtaining the interested region of each two-dimensional image in the patch data set to obtain a corresponding segmentation image, and forming a label data set;
(2) Downsampling all the segmentation images in the label data set to obtain a heat map of each segmentation image to form a first heat map set;
(3) Taking a patch data set as input, taking a first heat map set as a label to train an Attention-Net neural network, and obtaining a first network model;
(4) Training the DenseUnet neural network by taking a patch data set as input and a label data set as a label to obtain a second network model;
(5) Adopting the trained second network model in the step (4) to segment the patch data set to obtain an initial segmentation image set;
(6) Performing data splicing on the patch data set, the first heat map set and the initial segmentation image set;
(7) Training a DenseUnet neural network by taking a patch data set, a first heat map set and a spliced data set of the initial segmentation image set as input and a label data set as a label to obtain a third network model;
the first network model, the second network model and the third network model form a spine CT segmentation network model.
In some embodiments, the CT raw-data preprocessing includes normalizing the resolution and gray scale of the CT raw data and then cropping, thereby obtaining the patch data set.
In some embodiments, step (1) also preprocesses the mask data corresponding to the CT raw data to obtain the label data set; this preprocessing includes cropping the mask data into a plurality of segmented images that form the label data set, each segmented image corresponding one-to-one to a two-dimensional image in the patch data set.
In still other embodiments, the mask-data preprocessing includes cropping after normalizing the resolution of the data, the normalized resolution being the same as the resolution of the CT raw data or of its normalized version.
In a preferred embodiment, the DenseUnet neural network comprises nine Dense blocks, the numbers of convolution layers in the Dense blocks being 5, 6, 7, 6, and 5; the convolution operations in the Dense blocks all use 3×3 convolution kernels, and when the Concat splicing operation is performed on the convolution layers in the Dense blocks, a Shuffle method scrambles the order of the layers in each block.
In a preferred embodiment, L2 loss is adopted as the constraint for the training in step (3), and Dice loss is adopted as the constraint for the training in steps (4) and (7).
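As an illustrative sketch (plain NumPy, with function names of our own choosing; a real implementation would use a deep-learning framework's differentiable losses), the two constraints can be written as:

```python
import numpy as np

def l2_loss(y_pred, y_true):
    # Squared L2 norm of the prediction error, as used to train Attention-Net
    return float(np.sum((y_pred - y_true) ** 2))

def dice_loss(y_pred, y_true, eps=1e-7):
    # 1 - Dice coefficient; y_pred and y_true are masks/probabilities in [0, 1]
    inter = np.sum(y_pred * y_true)
    return float(1.0 - 2.0 * inter / (np.sum(y_pred) + np.sum(y_true) + eps))
```

The small `eps` term (an implementation convenience, not stated in the source) guards against division by zero when both masks are empty.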
Furthermore, the invention provides a spine CT segmentation method. The provided segmentation method comprises
S1, preprocessing CT original data to be segmented to obtain a patch data set to be segmented, wherein the patch data set to be segmented is obtained by cutting the CT original data to be segmented to obtain a plurality of two-dimensional images;
s2, extracting a heat map set of the patch data set to be segmented by adopting the trained first network model;
s3, performing primary segmentation on the patch data set to be segmented by adopting a second network model to obtain an initial segmentation image set;
s4, carrying out data splicing on the patch data set to be segmented, the heat map set and the initial segmentation image set;
And S5, inputting the spliced data set of the patch data set to be segmented, the heat map set, and the initial segmented image set into the third network model to output a segmented image set.
Furthermore, the invention also provides a spine CT segmentation system. The system provided by the invention comprises: the data preprocessing module, the data splicing module and the first network model, the second network model and the third network model;
the data preprocessing module is used for preprocessing CT original data to be segmented to obtain a patch data set to be segmented, and the patch data set to be segmented is obtained by cutting the CT original data to be segmented to obtain a plurality of two-dimensional images;
the first network model is used for extracting a heat map set of a patch data set to be segmented;
the second network model is used for carrying out primary segmentation on the patch data set to be segmented to obtain an initial segmentation image set;
the data splicing module is used for performing data splicing on the patch data set to be segmented, the heat map set and the initial segmentation image set;
and the third network model is used for segmenting the spliced data of the patch data set to be segmented, the heat map set and the initial segmented image set and outputting the segmented image set.
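The cooperation of the modules above can be sketched as follows; `attention_net`, `dense_unet_2`, `dense_unet_3`, and `upsample` are hypothetical stand-in callables for the three trained models and the heat-map upsampling step, not names from the source:

```python
import numpy as np

def segment_ct(patches, attention_net, dense_unet_2, dense_unet_3, upsample):
    """Sketch of the three-model inference pipeline described above."""
    heat = upsample(attention_net(patches))   # heat maps: position-pixel prior
    init = dense_unet_2(patches)              # preliminary segmentation prior
    # data splicing: stack the three sources along the channel dimension
    spliced = np.concatenate([patches, heat, init], axis=-1)
    return dense_unet_3(spliced)              # final segmentation
```

The key design point is that `dense_unet_3` never sees the raw patches alone: every input channel set carries both priors alongside the original data.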
Compared with the prior art, the invention has the beneficial effects that:
according to the method, three-channel splicing data are used as training samples, prediction prior information and position pixel distribution information are added to original data, and therefore network segmentation accuracy is improved. In addition, on one hand, the invention adopts the Attention-Net network to obtain the approximate positioning of the spine, adds position pixel information to the original data and prevents the surrounding noise points from generating negative influence on the training of the network. On the other hand, the invention improves the traditional DenseUnet, and compared with the traditional DenseUnet, the whole segmentation process reduces 743 ten thousand parameters and increases the robustness of the network.
Drawings
FIG. 1 is the network structure of an exemplary Attention-Net neural network of the invention, where N represents 3, 5, or 7 and N×N represents 3×3, 5×5, or 7×7;
FIG. 2 is a network structure of an exemplary DenseUnet neural network of the present invention;
FIG. 3 is a structure of an exemplary Dense block of the present invention;
FIG. 4 is an illustration of images involved in a model training process in an embodiment, where A is one of the images in the patch dataset, B is a segmented image corresponding to graph A in the label dataset, C is a heat map of graph B, and D is a preliminary segmentation map of graph A;
FIG. 5 shows an image E, the final segmentation of test-set image F using the model of the invention.
Detailed Description
The sample data used to construct the spine CT segmentation network model may come from open-source network data, clinical cases, and the like. It generally comprises a patient's raw spine CT data; some open-source data also include mask data corresponding to the CT raw data, i.e., three-dimensional data obtained after medical staff annotate the regions of interest in the CT raw data. Depending on the characteristics or storage format of the raw data and the requirements of the network input, the data must be preprocessed before training: the three-dimensional data is converted or cropped into two-dimensional data to form Patch two-dimensional images (the Patch data set), and some data additionally require resolution and gray-level normalization before cropping to meet the input requirements of the subsequent networks (the Attention-Net or DenseUnet neural network). For raw data without mask data, label images in one-to-one correspondence with the Patch two-dimensional images are obtained manually or by an existing segmentation method, forming the label data set; raw data that do include mask data require the same processing to obtain the one-to-one label images forming the label data set. As a matter of course, each image in the label data set has the same size as the corresponding image in the Patch data set, and that size satisfies the network-input requirement.
On the other hand, when the spine CT data to be segmented is segmented by using the model constructed by the invention, the corresponding Patch data set to be segmented needs to be obtained by using the method, and as a general knowledge, the obtained Patch data set meets the requirement of model input.
Both the cropping and the normalization described herein can be performed using means or methods known in the art.
The heat map described in the invention is an image describing the density distribution of the spine. For example, the value of each pixel in the heat map of a segmented image involved in model training is the proportion of effective points within each unit region (e.g., a 10 × 10 pixel region) of that segmented image, where the effective points are the spine pixels of the patch two-dimensional image, i.e., the points with nonzero gray value in the label or segmented image (the foreground points). Accordingly, each image in the label data set is downsampled to obtain its corresponding heat map. A specific example of downsampling: a label image (of size 160 × 160) is divided into unit regions (e.g., 10 × 10 pixel regions), the proportion of foreground points in each unit region is computed, and a size-scaled label (16 × 16) is obtained in which each point is the effective-point proportion of the corresponding unit region. As a matter of course, to meet the network-input requirement on image size, the scaled label is upsampled to obtain a heat map set in which each image has the same size as the images in the patch data set.
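The unit-region downsampling described above can be sketched as follows (NumPy; `heat_map` is a name of our own choosing):

```python
import numpy as np

def heat_map(label, unit=10):
    # Proportion of foreground (nonzero) pixels in each unit x unit region:
    # a 160x160 label image yields a 16x16 heat map, as in the example above.
    h, w = label.shape
    blocks = (label != 0).reshape(h // unit, unit, w // unit, unit)
    return blocks.mean(axis=(1, 3))
```

A nearest-neighbour upsample back to the 160 × 160 patch size can then be done with, e.g., `np.kron(hm, np.ones((10, 10)))`.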
Splicing in the invention means concatenating two data sets to obtain a spliced data set, with the single-channel data stacked along the channel dimension — for example, the concatenate splicing method.
As an example, the Attention-Net neural network used in the invention is based on an FCN and, as shown in FIG. 1, comprises 15 layers in total. The first layer is an upsampling layer that increases the image size; the second through ninth layers alternate between convolutional layers and pooling layers, each convolutional layer using padding to keep the image size unchanged and each pooling layer downsampling to halve it; the tenth through twelfth layers are dilated (atrous) convolutional layers with dilation rates of 2, 3, and 5 in sequence, which enlarge the receptive field without reducing the image size; the thirteenth and fourteenth layers are convolutional layers; and the fifteenth layer is a 1×1 convolutional layer that reduces the number of channels. The output of the network is a 16 × 16 heat map.
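The claim that the dilated layers enlarge the receptive field without shrinking the image can be checked with the standard receptive-field recurrence (a sketch assuming stride 1 throughout, which the padding/pooling description implies for these three layers):

```python
def receptive_field(layers):
    # layers: (kernel, dilation) pairs; a dilated convolution has effective
    # kernel size k_eff = k + (k - 1) * (d - 1). With stride 1, each layer
    # grows the receptive field by (k_eff - 1).
    rf = 1
    for k, d in layers:
        rf += (k + (k - 1) * (d - 1)) - 1
    return rf
```

For three 3×3 layers with dilation rates 2, 3, and 5, `receptive_field([(3, 2), (3, 3), (3, 5)])` gives 21 pixels along each axis, versus 7 for three ordinary 3×3 layers.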
Besides the existing DenseUnet neural network, the invention can also adopt an improved DenseUnet neural network, which has fewer network parameters and higher robustness. As an example, as shown in FIG. 2, the improved DenseUnet neural network is divided into 9 layers in total and follows a U-shaped encoder-decoder structure. In the specific configuration, the Dense blocks of the layers contain different numbers of convolution layers, set symmetrically — for example 5, 6, 7, 6, and 5 convolution operations per Dense block — and every convolution in a Dense block is computed with a 3×3 convolution kernel. To increase robustness, when the convolution layers within a Dense block undergo the Concat splicing operation, a Shuffle method scrambles the order of the layers in the block; and at the end of each Dense block, a 1×1 convolution reduces the number of channels to 24. After processing by the 9 layers of Dense blocks, a 1×1 convolution is applied and a Sigmoid activation function performs the classification; the network requires 19 million training parameters. More specifically, as shown in FIG. 3, when a Dense block contains 5 convolution layers, the concrete procedure is as follows:
(1) Apply the first convolution to the input image I to obtain result F1;
(2) Splice I and F1, and apply the second convolution to obtain result F2;
(3) Splice I, F1, and F2, and apply the third convolution to obtain result F3;
(4) Splice I, F1, F2, and F3, and apply the fourth convolution to obtain result F4;
(5) Splice I, F1, F2, F3, and F4, and apply the fifth convolution to obtain result x;
the calculation process is expressed using equation (1) as:
F i =Relu(shuffle[I,F 1 ,...,F i-1 ]) (1)
wherein Relu is the activation function, F i For the ith convolution layer in the Dense block, a shuffle method is used for disturbing the splicing sequence operation of the ith convolution layer to ensure the robustness.
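A minimal sketch of this Dense-block forward pass (NumPy; the "convolutions" are toy stand-in callables rather than real 3×3 convolutions, and the shuffle permutes whole feature maps as equation (1) describes):

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_block(x, convs):
    # convs: one callable per convolution layer of the block (stand-ins here)
    feats = [x]                               # I, F1, ..., F_{i-1}
    out = x
    for conv in convs:
        order = rng.permutation(len(feats))   # Shuffle the splicing order
        spliced = np.concatenate([feats[i] for i in order], axis=-1)
        out = np.maximum(conv(spliced), 0.0)  # Relu, as in equation (1)
        feats.append(out)
    return out
```

Because each layer receives all earlier outputs (in scrambled order), the block reproduces the dense connectivity of steps (1)–(5) above.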
The embodiment is as follows:
this embodiment is a specific construction of a spine CT segmentation network model, sample data is obtained from spine CT scan data of 10 patients and their corresponding mask data, and the adopted networks are the above example networks:
s1: preprocessing data
S1.1, because the resolution of the CT raw data and the mask data differs between patients and between directions, the data are resampled to a unified voxel size of 0.5 × 0.5 × 0.5 mm³, i.e., the spacing between voxels is 0.5 mm in each direction;
S1.2, gray-level normalization of the data:
because the gray-value range of the CT raw data is wide, the minimum gray value is set to 400 and the maximum to 3500: values below 400 are reset to 400 and values above 3500 are reset to 3500; the whole volume is then normalized so that voxel values lie between 0 and 1, and finally scaled into the range [0, 255];
s1.3, extracting a patch data set and a label data set:
then, performing cutting operation on the CT original data and the mask data, and cutting in the axial direction by using 160 as a step length according to the size of 160 × 160 to obtain a patch two-dimensional image and a Label image which correspond to each other one by one; respectively forming a patch data set and a Label data set, wherein one of the two-dimensional patch images and the corresponding Label image after cutting are shown in FIGS. 4A and B;
in order to ensure the accuracy of training, some patch two-dimensional images containing spinal pixels smaller than a threshold value and corresponding label images can be ignored by setting a reasonable threshold value in the cropping process, and the threshold value is 500;
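The threshold-filtered cropping of S1.3 can be sketched as follows (NumPy; the function and parameter names are our own):

```python
import numpy as np

def extract_patches(ct_slice, mask_slice, size=160, stride=160, thresh=500):
    # Crop aligned patch/label pairs; discard pairs whose label contains
    # fewer than `thresh` spine (foreground) pixels
    patches, labels = [], []
    h, w = ct_slice.shape
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            lab = mask_slice[i:i + size, j:j + size]
            if np.count_nonzero(lab) >= thresh:
                patches.append(ct_slice[i:i + size, j:j + size])
                labels.append(lab)
    return patches, labels
```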
S1.4, dividing the preprocessed data into a training set and a test set in the ratio 8:2, and performing model training as described in S2-S7 below using the patch data set and Label data set of the training set;
s2, down-sampling the images in the label data set to obtain a heat map set, wherein one heat map is shown in FIG. 4C;
s3, taking the patch data set as input, taking the heat map set obtained in the S2 as a label to train the Attention-Net neural network, and obtaining a first network model;
in the training Attention-Net process, batchSize is set to be 20, an optimizer is Adam, the initial learning rate is 0.001, every 40 rounds, the learning rate is reduced by half, and 200 rounds of training are performed; during the training process, constraint is performed by formula (2):
min_{y_pred} ||y_pred - y_true||_2^2   (2)
where ||·||_2 denotes the second-order norm, min_{y_pred} denotes the y_pred minimizing that norm, y_pred represents the predicted output of the network, and y_true represents the corresponding label;
S4, training a DenseUnet neural network with the patch data set as input and the label data set as the label to obtain a second network model;
in this training, the BatchSize is set to 20, the optimizer is Adam, and the initial learning rate is 0.001, halved every 40 rounds, for 100 rounds of training, constrained by formula (3):
Loss = 1 - (2 |y_pred ∩ y_true|) / (|y_pred| + |y_true|)   (3)
where y_pred represents the predicted output of the network, y_true the label of the corresponding patch, and y_pred ∩ y_true the correctly predicted portion;
s5, segmenting the patch data set by using the DenseUnet network model trained in the S4 to obtain a primary segmentation image, and forming a primary segmentation data set, wherein one primary segmentation image is shown in a figure 4D;
S6, splicing together the patch data set of the training set divided in S1.4, the heat map set from S2, and the preliminary segmentation data set obtained in S5, using the concatenate splicing method;
S7, training DenseUnet: the DenseUnet neural network is trained with the spliced data of S6 as input and the label data set as the label to obtain a third network model; in this step the BatchSize is set to 20, the optimizer is Adam, and the initial learning rate is 0.001, halved every 40 rounds, for 200 rounds of training, constrained by equation (4):
Loss = 1 - (2 |y_pred ∩ y_true|) / (|y_pred| + |y_true|)   (4)
where y_pred represents the predicted output of the network, y_true the label of the corresponding patch, and y_pred ∩ y_true the correctly predicted portion.
Comparative example 1:
this comparative example differs from example 1 in that:
and (3) directly taking a patch data set as input and a label data set as a label to train the DenseUnet neural network, training for 200 rounds, and constructing a segmentation model A.
Comparative example 2:
this comparative example differs from example 1 in that: and during the training of the DenseUnet neural network, inputting splicing data of a patch data set and a heat map set to construct a segmentation model B, wherein the segmentation model of the comparative example also comprises a first network model.
Comparative example 3:
the comparative example is different from the example 1 in that a segmentation model C is constructed by inputting the splicing data of the patch data set and the preliminary segmentation data set during the training of the DenseUnet neural network, and the segmentation model of the comparative example further comprises a second network model.
The model of the invention and the model obtained by the comparative example are respectively adopted to accurately segment the test set: the model segmentation step of the invention comprises:
(1) Input the test-set patch data set into the first network model to obtain the corresponding heat map set, then apply nearest-neighbor interpolation until each heat map has the same size as the images in the test set;
(2) Inputting the patch data set of the test set into a second network model to obtain a primary segmentation image set;
(3) Splice the test-set patch data set with the heat map set obtained in step (1) and the preliminary segmentation image set obtained in step (2) to obtain the final test samples;
(4) And inputting the test sample into a third network model to obtain an accurate segmentation result, wherein one of the accurate segmentation results is shown in fig. 5E, and fig. 5F is a corresponding patch two-dimensional image.
For comparative examples 1-3, the patch data set is correspondingly processed and segmented with the respective trained models to obtain the corresponding segmented image sets.
The results use Dice, IOU, and VS as evaluation indices, where VS represents the ratio between the pixel count of the segmentation result and that of the label in the test data set:
VS = |y_pred| / |y_true|
the results are given in the following table:
TABLE 3 comparison of segmentation results
(Table contents are provided as an image in the source and are not reproduced here.)
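For reference, the Dice and IOU indices used above can be computed from boolean masks as follows (a NumPy sketch with names of our own choosing):

```python
import numpy as np

def dice(pred, true):
    # Dice coefficient between two boolean masks
    return 2.0 * np.sum(pred & true) / (np.sum(pred) + np.sum(true))

def iou(pred, true):
    # Intersection over union between two boolean masks
    return np.sum(pred & true) / np.sum(pred | true)
```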

Claims (8)

1. A method for constructing a vertebral CT segmentation network model is characterized by comprising the following steps:
(1) Preprocessing CT original data to obtain a patch data set, wherein the preprocessing comprises the steps of cutting the CT original data to obtain a plurality of two-dimensional images to form the patch data set; acquiring the region of interest of each two-dimensional image in the patch data set to obtain a corresponding segmented image, and forming a label data set;
(2) Downsampling all the segmentation images in the label data set to obtain a heat map of each segmentation image to form a first heat map set;
(3) Taking a patch data set as input, taking a first heat map set as a label to train an Attention-Net neural network, and obtaining a first network model;
(4) Training the DenseUnet neural network by taking the patch data set as input and the label data set as a label to obtain a second network model;
(5) Adopting the trained second network model in the step (4) to segment the patch data set to obtain an initial segmentation image set;
(6) Performing data splicing on the patch data set, the first heat map set and the initial segmentation image set;
(7) Training a DenseUnet neural network by taking a patch data set, a first heat map set and a spliced data set of the initial segmentation image set as input and a label data set as a label to obtain a third network model;
the first network model, the second network model and the third network model form a spine CT segmentation network model.
2. The method of claim 1, wherein the CT raw data preprocessing comprises normalizing the resolution and gray scale of the CT raw data and then cropping, thereby obtaining a patch data set.
3. The method according to claim 1 or 2, wherein the step (1) preprocesses mask data corresponding to the CT raw data to obtain a label data set, the preprocessing including cropping the mask data to obtain a plurality of segmented images forming the label data set, each segmented image in the label data set corresponding one-to-one to a two-dimensional image in the patch data set.
4. The method of constructing a spine CT segmentation network model according to claim 3, wherein the mask-data preprocessing includes cropping after normalizing the resolution of the data, the normalized resolution being the same as the resolution of the CT raw data or of its normalized version.
5. The method of constructing a vertebral CT segmentation network model according to claim 1, wherein the DenseUnet neural network includes nine Dense blocks, the numbers of convolution layers in the Dense blocks being 5, 6, 7, 6, and 5; the convolution operations in the Dense blocks all use 3×3 convolution kernels, and when the Concat splicing operation is performed on the convolution layers in the Dense blocks, a Shuffle method scrambles the order of the layers in each block.
6. The method of claim 1, wherein the training in step (3) uses L2 loss as a constraint, and the training in steps (4) and (7) uses Dice loss as a constraint.
7. A spinal CT segmentation method, comprising:
s1, preprocessing CT original data to be segmented to obtain a patch data set to be segmented, wherein the patch data set to be segmented is obtained by cutting the CT original data to be segmented to obtain a plurality of two-dimensional images;
s2, extracting a heat map set of a patch data set to be segmented by adopting the first network model of claim 1;
s3, performing primary segmentation on the patch data set to be segmented by adopting the second network model of claim 1 to obtain an initial segmentation image set;
s4, performing data splicing on the patch data set to be segmented, the heat map set and the initial segmentation image set;
and S5, inputting the spliced data set of the patch data set to be segmented, the heat map set and the initial segmentation image set into the third network model of claim 1 to output the segmented image set.
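The splicing in step S4 stacks each patch, its heat map and its initial segmentation into one multi-channel input that the third network consumes in step S5. A hypothetical NumPy sketch (the channel-first layout and image size are assumptions):

```python
import numpy as np

def splice_inputs(patch, heatmap, init_seg):
    """Step S4: stack a 2-D patch, its heat map, and its initial
    segmentation into one multi-channel array for the third network
    (channel-first layout is an assumption)."""
    return np.stack([patch, heatmap, init_seg], axis=0)

h, w = 128, 128
patch = np.zeros((h, w), dtype=np.float32)    # cropped CT slice
heatmap = np.zeros((h, w), dtype=np.float32)  # output of the first network
init_seg = np.zeros((h, w), dtype=np.float32) # output of the second network
x = splice_inputs(patch, heatmap, init_seg)
print(x.shape)  # (3, 128, 128)
```

The stacked array lets the refinement network see the raw intensities, the localization cue, and the coarse mask at every pixel simultaneously.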
8. A spinal CT segmentation system, the system comprising: a data preprocessing module, a data splicing module and the first network model, the second network model and the third network model of claim 1;
the data preprocessing module is used for preprocessing the CT raw data to be segmented to obtain a patch data set to be segmented, the patch data set to be segmented consisting of a plurality of two-dimensional images obtained by cropping the CT raw data to be segmented;
the first network model is used for extracting a heat map set of the patch data set to be segmented;
the second network model is used for carrying out primary segmentation on the patch data set to be segmented to obtain an initial segmentation image set;
the data splicing module is used for performing data splicing on the patch data set to be segmented, the heat map set and the initial segmentation image set;
and the third network model is used for segmenting the spliced data of the patch data set to be segmented, the heat map set and the initial segmented image set and outputting the segmented image set.
CN201911234498.7A 2019-12-05 2019-12-05 Construction method and segmentation method of vertebra CT segmentation network model Active CN111047605B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911234498.7A CN111047605B (en) 2019-12-05 2019-12-05 Construction method and segmentation method of vertebra CT segmentation network model


Publications (2)

Publication Number Publication Date
CN111047605A CN111047605A (en) 2020-04-21
CN111047605B true CN111047605B (en) 2023-04-07

Family

ID=70234726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911234498.7A Active CN111047605B (en) 2019-12-05 2019-12-05 Construction method and segmentation method of vertebra CT segmentation network model

Country Status (1)

Country Link
CN (1) CN111047605B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111493918B (en) * 2020-04-24 2022-08-26 杭州健培科技有限公司 Automatic positioning method, application method and equipment for observation plane of lumbar vertebra CT image
CN111862071B (en) * 2020-07-29 2024-03-05 南通大学 Method for measuring CT value of lumbar 1 vertebral body based on CT image
CN114170128B (en) * 2020-08-21 2023-05-30 张逸凌 Bone segmentation method and system based on deep learning
CN113506308B (en) * 2021-07-06 2023-03-28 同济大学 Deep learning-based vertebra positioning and spine segmentation method in medical image
CN113487591A (en) * 2021-07-22 2021-10-08 上海嘉奥信息科技发展有限公司 CT-based whole spine segmentation method and system
CN113781496B (en) * 2021-08-06 2024-02-27 北京天智航医疗科技股份有限公司 Automatic planning system and method for pedicle screw channel based on CBCT (computed tomography) spine image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108038860A (en) * 2017-11-30 2018-05-15 杭州电子科技大学 Spine segmentation method based on the full convolutional neural networks of 3D
CN109978838A (en) * 2019-03-08 2019-07-05 腾讯科技(深圳)有限公司 Image-region localization method, device and Medical Image Processing equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110838124B (en) * 2017-09-12 2021-06-18 深圳科亚医疗科技有限公司 Method, system, and medium for segmenting images of objects having sparse distribution


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Network community structure detection algorithm based on dictionary learning; Zhang Zhongyuan; Scientia Sinica Informationis (No. 11); full text *
Spine CT image segmentation based on deep learning; Liu Zhongli et al.; Computer Applications and Software (No. 10); full text *

Also Published As

Publication number Publication date
CN111047605A (en) 2020-04-21

Similar Documents

Publication Publication Date Title
CN111047605B (en) Construction method and segmentation method of vertebra CT segmentation network model
CN111784671B (en) Pathological image focus region detection method based on multi-scale deep learning
CN112150428B (en) Medical image segmentation method based on deep learning
CN110889853B (en) Tumor segmentation method based on residual error-attention deep neural network
CN113674253B (en) Automatic segmentation method for rectal cancer CT image based on U-transducer
CN110599528A (en) Unsupervised three-dimensional medical image registration method and system based on neural network
US10366488B2 (en) Image processing used to estimate abnormalities
WO2021151275A1 (en) Image segmentation method and apparatus, device, and storage medium
CN111539956B (en) Cerebral hemorrhage automatic detection method based on brain auxiliary image and electronic medium
CN111369574B (en) Thoracic organ segmentation method and device
CN113744271B (en) Neural network-based automatic optic nerve segmentation and compression degree measurement and calculation method
CN112465754B (en) 3D medical image segmentation method and device based on layered perception fusion and storage medium
CN113393469A (en) Medical image segmentation method and device based on cyclic residual convolutional neural network
CN113436173A (en) Abdomen multi-organ segmentation modeling and segmentation method and system based on edge perception
CN114882048A (en) Image segmentation method and system based on wavelet scattering learning network
CN116758087B (en) Lumbar vertebra CT bone window side recess gap detection method and device
CN116977338B (en) Chromosome case-level abnormality prompting system based on visual semantic association
CN114119515A (en) Brain tumor detection method based on attention mechanism and MRI multi-mode fusion
Alsenan et al. A Deep Learning Model based on MobileNetV3 and UNet for Spinal Cord Gray Matter Segmentation
CN116433654A (en) Improved U-Net network spine integral segmentation method
CN115294023A (en) Liver tumor automatic segmentation method and device
CN111598904B (en) Image segmentation method, device, equipment and storage medium
CN114283406A (en) Cell image recognition method, device, equipment, medium and computer program product
CN113177938A (en) Method and device for segmenting brain glioma based on circular convolution kernel and related components
Wei et al. Application of U-net with variable fractional order gradient descent method in rectal tumor segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant