CN113205535B - X-ray film spine automatic segmentation and identification method

X-ray film spine automatic segmentation and identification method

Info

Publication number
CN113205535B
CN113205535B (application CN202110583100.1A)
Authority
CN
China
Prior art keywords
spine
segmentation
vertebral body
image
cutting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202110583100.1A
Other languages
Chinese (zh)
Other versions
CN113205535A (en)
Inventor
杨环
西永明
迟晓帆
杜钰堃
师文博
徐同帅
郭建伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao University
Original Assignee
Qingdao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao University
Priority to CN202110583100.1A
Publication of CN113205535A
Application granted
Publication of CN113205535B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone
    • G06T2207/30012Spine; Backbone

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention belongs to the technical field of medical image segmentation and relates to an X-ray film spine automatic segmentation and identification method. A coarse-to-fine segmentation strategy is adopted: the constructed deep neural network quickly locates the spine region, after which fine segmentation of each vertebral body is carried out. Segmentation precision is high: the constructed neural network accounts for both spine semantics and edge features, and image-morphology optimization is applied on this basis, so the segmented vertebral bodies are independent and keep complete edges, laying a foundation for intelligent measurement of medical parameters from subsequent spine X-ray films.

Description

X-ray film spine automatic segmentation and identification method
Technical Field:
The invention belongs to the technical field of medical image segmentation and relates to an X-ray film spine automatic segmentation and identification method based on a deep neural network.
Background Art:
The spine is an important component of the human body with a complex anatomical structure, mainly comprising three important parts: the vertebral bodies, the intervertebral discs and the spinal cord. It is the structural basis of many spinal diseases, such as adolescent idiopathic scoliosis, lumbar degenerative scoliosis, lumbar disc herniation, lumbar spinal stenosis, osteoporosis, hyperosteogeny, spinal tuberculosis and spinal tumors. Spinal diseases have become one of the stubborn conditions affecting public health and impose a huge economic burden on society. Conventional diagnosis of spinal diseases requires weighing the patient's symptoms together with imaging examinations, combining different imaging modalities, such as computed tomography (CT) images, magnetic resonance (MR) images and X-ray transmission images, depending on the disease.
A spine X-ray film mainly covers 24 vertebral bodies (cervical vertebrae 1-7, thoracic vertebrae 1-12 and lumbar vertebrae 1-5) plus the sacrum and ilium, and at present the medical parameters of each part are still measured and derived manually. Manual measurement has the following problems: 1) spine X-ray diagnosis involves measuring and deriving a large number of medical parameters, the process is complex, and film reading takes a long time; 2) compared with CT and MR images, X-ray films have poorer imaging definition, spine edges blur easily, and there are many interfering components such as ribs, organs and soft tissue, so errors in manual measurement are inevitable; 3) the expertise required is highly specialized, difficult to learn and slow to acquire, so very few spine surgeons master standard measurement and diagnosis techniques, while spinal deformity diseases are usually widespread, making correct diagnostic guidance hard to obtain and delaying treatment; 4) repeatability is poor: symptoms vary, manual measurement and calculation involve a large amount of repetitive labor, and forgetfulness or negligence causes measurement errors that affect subsequent treatment.
With the development of artificial intelligence (AI), particularly deep learning, AI-assisted spine X-ray diagnosis has gained more and more attention: a spine X-ray image is input into a computer, which automatically locates the spine region, measures and calculates the required medical parameters, and then completes the intelligent diagnosis. Automatic, accurate location and segmentation of the spine is the primary core step of AI-assisted diagnosis; only once each vertebral body is accurately segmented can the required medical indexes, such as the Cobb angle, cervical 7 plumb line, sacral mid-perpendicular, sacral offset, sacral inclination, coronal balance and trunk inclination, be measured on that basis using medical image measurement criteria. At present there are few reports of fully automatic, accurate vertebral body segmentation techniques aimed at spine X-ray films.
Summary of the Invention:
The invention aims to overcome the defects of the prior art and provides an X-ray film spine automatic segmentation and identification method based on a deep neural network. It adopts a coarse-to-fine segmentation strategy: a lightweight neural network model is constructed to quickly locate the complete spine region; on this basis, vertebral body samples are cut and the model is retrained; the vertebral body segmentation results are then optimized and spliced using adaptive connection kernels; finally, 18 independent vertebral bodies with complete edges (cervical vertebra 7 to lumbar vertebra 5) are accurately identified, laying a foundation for automatic measurement of subsequent medical parameters.
To achieve this aim, the X-ray film spine automatic segmentation and identification method based on the deep neural network comprises four processes, namely spine segmentation, column extraction, vertebral body segmentation and vertebral body identification, and specifically comprises the following steps:
s1 spine segmentation:
S101, obtaining a spine X-ray film data set (SpineX dataset) and labeling the data set pictures to obtain a segmentation mask map of the spine region; the mask covers 18 vertebral bodies, the sacrum and the ilium; for convenient and accurate labeling, the vertebral bodies and the sacrum are labeled as one connected region, and the two iliac parts are labeled as two connected regions;
S102, comprehensively considering spinal semantic and edge characteristics, constructing a deep neural network (SEDNet) with an encoder-decoder architecture: given an input image, the encoder learns a feature map of the input image through the neural network, and the decoder gradually assigns a class label to each pixel from the obtained feature maps, i.e., realizes semantic segmentation;
S103, training the deep neural network (SEDNet) constructed in S102 on the spine X-ray film data set (SpineX dataset) to obtain a neural network dedicated to coarse spine segmentation, named SEDNet-S;
S2 column extraction: the overall spine segmentation result is obtained through SEDNet-S; the connected vertebral-column and sacrum part is then extracted, edge and centerline detection is performed on it, the sacrum region is removed using the edge change, and the vertebral body region is cropped with its minimum circumscribed rectangle;
s3 vertebral body segmentation:
S301, processing all spine X-ray films in S1: the column part segmented through S2 is cut without overlap using the centerline-based non-overlapping vertebral body cutting method to obtain a vertebra patch (Vertebra Patch) image set (VP dataset); the image set is labeled by drawing the edge of each vertebral body in the image, yielding segmentation mask maps of all vertebral bodies;
S302, retraining SEDNet on the vertebra patch image set (VP dataset) and the segmentation mask maps of all vertebral bodies to obtain a deep neural network dedicated to fine vertebral body segmentation, named SEDNet-V;
S303, performing non-overlapping cutting on any input column image with the same cutting size as in S301 to obtain the corresponding vertebral body blocks, and performing semantic segmentation on each block with SEDNet-V to obtain the corresponding segmentation mask maps;
S4 vertebral body identification: all vertebral body mask maps obtained in step S303 are optimized using image morphological operations and concave-convex detection, and the optimized masks are spliced to obtain 18 independent vertebral bodies with complete edges; from top to bottom the 18 vertebral bodies correspond to cervical vertebra 7 through lumbar vertebra 5, realizing accurate segmentation and identification of the vertebral bodies in the spine X-ray film.
In the invention, the encoder in S102 extracts features with multi-scale convolution and pooling to obtain comprehensive multi-scale feature maps: 5 consecutive convolution operations (Conv) are set, with the channel numbers of the convolution feature maps set to 32, 64, 128, 256 and 512 respectively and all convolution kernels sized 3×3; after each convolution result passes through an LReLU activation function for nonlinear transformation, a 2×2 max pooling strategy (Max Pooling) aggregates the feature maps, improving the robustness of the model.
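For illustration, the encoder described above can be sketched in PyTorch as follows; this is a minimal sketch under the stated hyper-parameters (3×3 kernels, channels 32 to 512, LReLU, 2×2 max pooling), and the class name, input channel count and LReLU slope are assumptions rather than the patent's reference implementation.

```python
import torch
import torch.nn as nn

class SEDNetEncoder(nn.Module):
    """Sketch of the 5-stage SEDNet encoder: 3x3 convolutions with
    32/64/128/256/512 channels, LReLU activations, 2x2 max pooling."""

    def __init__(self, in_channels=1, widths=(32, 64, 128, 256, 512)):
        super().__init__()
        stages, prev = [], in_channels
        for w in widths:
            stages.append(nn.Sequential(
                nn.Conv2d(prev, w, kernel_size=3, padding=1),
                nn.LeakyReLU(0.01, inplace=True)))
            prev = w
        self.stages = nn.ModuleList(stages)
        self.pool = nn.MaxPool2d(kernel_size=2)

    def forward(self, x):
        skips = []                   # per-scale maps for skip connections
        for stage in self.stages:
            x = stage(x)
            skips.append(x)          # kept before pooling for the decoder
            x = self.pool(x)
        return x, skips
```

The per-scale feature maps are returned so that the decoder's skip connections and extra connections described next can reuse them.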
In the invention, the decoder in S102 up-samples the feature map layer by layer (4×4 up-sampling) to enlarge the image size and extract semantic segmentation features; 4 layers of operations are performed with the image channel numbers set to 256, 128, 64 and 32 respectively; feature maps of each scale are simultaneously obtained from the encoder end via skip connections, a boundary-aware feature fusion mechanism (BFM) performs multilayer weighted fusion of the up-sampled signal and the skip signal, and the fusion result is up-sampled and passed to the next scale, finally producing a segmentation mask the same size as the input image. Meanwhile, 3 extra connections, up×8, up×4 and up×2, expand semantic features of three different encoder scales with larger convolution kernels (16×16, 8×8 and 4×4) to enrich edge information at each scale, yielding three extra segmentation masks; finally, the four obtained segmentation masks are averaged to obtain the final spine semantic segmentation result.
The process of edge and centerline detection in S2 of the invention is: for the overall spine segmentation result (spine region valued 1, otherwise 0), the spine edge is detected with a 5×1 dual sliding window (SW); each vertical coordinate is traversed from top to bottom, the left window slides from left to right and the right window slides from right to left on the same horizontal line; if the sum of pixel values in a window is 3, the current pixel is judged to be an edge point; the line connecting the midpoints of the left and right edge points is the spine centerline, and the mutation positions of the left and right edge points are the junction of the vertebral bodies and the sacrum; the mutation positions are connected, the sacrum region is removed, and the vertebral body region is cropped with its minimum circumscribed rectangle.
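A minimal NumPy sketch of this dual sliding-window scan is given below; the exact window orientation, border handling and the treatment of rows without valid edge points are assumptions.

```python
import numpy as np

def detect_edges_and_centerline(mask):
    """Scan a binary spine mask (spine = 1) row by row with two 5-pixel
    sliding windows; a window summing to 3 marks an edge point, and the
    midpoint of the left/right edge points gives the centerline."""
    h, w = mask.shape
    edges, centerline = [], []
    for y in range(h):                       # traverse vertical coordinates
        row, left, right = mask[y], None, None
        for x in range(w - 4):               # left window, left to right
            if row[x:x + 5].sum() == 3:
                left = x + 2                 # take the window centre
                break
        for x in range(w - 1, 3, -1):        # right window, right to left
            if row[x - 4:x + 1].sum() == 3:
                right = x - 2
                break
        if left is not None and right is not None and left < right:
            edges.append((y, left, right))
            centerline.append((y, (left + right) // 2))
    return edges, centerline
```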
The centerline-based non-overlapping vertebral body cutting method of S301 of the invention specifically comprises: the maximum vertebral body width W_max is estimated from the maximum distance between the edge points in S2; the cutting window width W_s is set as a multiple of 4 and the length H_s as half the width, as follows:

W_s = λ*W_max - mod(λ*W_max, 4)
H_s = W_s / 2

where λ is a proportionality coefficient set to 1.5 and mod is the modulus operator; the cutting window is moved without overlap along the centerline from the top of the minimum circumscribed rectangle of the vertebral body region to complete the vertebral body block cutting.
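A sketch of the window sizing and the non-overlapping cutting loop is shown below, assuming a per-row centerline array `center_x`; the patch-extraction details (border clamping, stopping condition) are our assumptions.

```python
def cut_vertebra_patches(column_img, center_x, w_max, lam=1.5):
    """Apply Ws = lam*Wmax - mod(lam*Wmax, 4) and Hs = Ws/2, then slide a
    Ws x Hs window down the centerline without overlap.
    center_x[y] is the centerline x-coordinate at row y."""
    ws = int(lam * w_max - (lam * w_max) % 4)        # width, multiple of 4
    hs = ws // 2                                     # length is half the width
    h, w = column_img.shape[:2]
    patches, y = [], 0
    while y + hs <= h:
        cx = int(center_x[min(y + hs // 2, h - 1)])  # centre of window row
        x0 = min(max(cx - ws // 2, 0), max(w - ws, 0))   # clamp to image
        patches.append(column_img[y:y + hs, x0:x0 + ws])
        y += hs                                      # non-overlapping step
    return patches, (ws, hs)
```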
The specific process of optimization by image morphological operation and concave-convex detection in S4 of the invention is: first, an image-morphology opening operation smooths the edges of each vertebral body mask; then concave-convex detection is applied to the masks to split vertebral bodies with large depressions (partially adhered vertebral bodies). The concave-convex detection first detects the convex hull of each connected region, then detects all convexity defects and records the farthest point of each defect from the convex hull together with its distance; if the farthest distance is greater than a set fraction of the width of the connected region's minimum circumscribed rectangle and the vertical distance from existing cut points is more than 30 pixels, the point is regarded as a cut point, and cutting through it along the transverse direction of the minimum circumscribed rectangle breaks apart the adhered vertebral bodies.
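An OpenCV sketch of this opening plus concavity-based splitting follows; the structuring-element size and the fraction of the bounding-box width used as the depth threshold (`frac`) are assumptions, since the patent derives the threshold from a formula not reproduced here.

```python
import cv2
import numpy as np

def split_adhered_vertebrae(mask, frac=0.3, min_gap=30):
    """Open the binary mask, then split connected regions at deep
    concavities found via convex hull / convexity-defect analysis."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    cut_rows = []
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        hull = cv2.convexHull(cnt, returnPoints=False)
        defects = cv2.convexityDefects(cnt, hull)
        if defects is None:
            continue
        for start, end, far_idx, depth in defects[:, 0]:
            far_x, far_y = cnt[far_idx][0]
            # OpenCV reports defect depth in fixed-point 1/256 pixel units
            if (depth / 256.0 > frac * min(w, h)
                    and all(abs(far_y - r) > min_gap for r in cut_rows)):
                cut_rows.append(far_y)
                opened[far_y, x:x + w] = 0   # transverse cut through far point
    return opened
```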
Compared with the prior art, the invention has the following advantages:
1) aiming at spine X-ray films, a feasible automatic spine segmentation and identification method is provided that realizes accurate semantic segmentation and identification of 18 vertebral bodies (cervical vertebra 7 to lumbar vertebra 5), the sacrum and the ilium without any human-computer interaction, achieving automatic spine segmentation in the true sense;
2) the computational complexity is low and real-time performance is high: a coarse-to-fine segmentation strategy is adopted, so the constructed deep neural network quickly locates the spine region, and the subsequent fine vertebral body segmentation follows;
3) the segmentation precision is high: the constructed neural network accounts for both spine semantics and edge features, and image-morphology optimization is applied on this basis, so the segmented vertebral bodies are independent and keep complete edges, laying a foundation for intelligent measurement of medical parameters from subsequent spine X-ray films.
Brief Description of the Drawings:
Fig. 1 is a schematic diagram of the working principle of the deep-neural-network-based automatic spine segmentation and identification for X-ray images.
Fig. 2 is a structural diagram of a deep neural network SEDNet constructed by the present invention.
Fig. 3 is a structural diagram of the boundary-aware feature fusion mechanism BFM of the invention.
Fig. 4 shows the detected spine edge and centerline and the vertebral column region after sacrum removal in an embodiment of the invention.
Fig. 5 is an exemplary centerline-based vertebral body cut of an embodiment of the present invention.
Fig. 6 illustrates the concave-convex detection and segmentation results for a partially adhered vertebral body in an embodiment of the invention.
FIG. 7 is an exemplary illustration of a partial vertebral body segmentation result according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiment:
The process of fully automatic spine segmentation and identification based on the deep neural network in this embodiment is shown in fig. 1 and comprises 4 processes: 1) spine segmentation; 2) column extraction; 3) vertebral body segmentation; 4) vertebral body identification. The specific steps are as follows:
s1, spine segmentation:
S101, a spine X-ray film data set (SpineX dataset) comprising 60 spine X-ray films in total is obtained, and the data set pictures are labeled to obtain segmentation mask maps of the spine region covering the column, sacrum and ilium parts, as shown in the spine segmentation module of fig. 1; for convenient and accurate labeling, the column and the sacrum are labeled as one connected region, and the two iliac parts are labeled as two connected regions;
S102, comprehensively considering the semantic and edge characteristics of the spine, a deep neural network (SEDNet) is constructed with an encoder-decoder architecture, whose overall structure is shown in fig. 2. Given an input X-ray image, the encoder obtains feature maps of the input image through neural network learning, and the decoder gradually assigns a class label to each pixel from the obtained feature maps, i.e., segmentation. The encoder uses multi-scale convolutional feature extraction: 5 consecutive convolution layers (Conv) are set, all with 3×3 kernels and stride 1, with 32, 64, 128, 256 and 512 channels respectively; LReLU activation functions apply nonlinear transformations to the convolution features to mine comprehensive multi-scale image features, and after each convolution layer a 2×2 max pooling (Max Pooling) compresses and maps the features, improving model robustness and reducing overfitting;
The decoder main channel up-samples (up-sampling) the feature map obtained after the 5 convolution layers to enlarge the image size; the up-sampled signal gradually extracts the semantic information of the image, finally producing a segmentation mask the same size as the input image. Meanwhile, to use more original image information, the decoder obtains the corresponding-scale feature map (skip signal) from the encoder end via skip connections and fuses the skip signal with the up-sampled signal: the skip signal generally retains more image position information, while the up-sampled signal contains more semantic information. A Boundary-aware Feature Fusion Mechanism (BFM) is adopted at the SEDNet decoding end to perform multilayer weighted fusion of the up-sampled signal and the same-scale skip signal; the BFM structure is shown in fig. 3. First, convolution and nonlinear transformation (1×1 convolution + LReLU) are applied to the up-sampled signal u′ and the skip signal p′ to obtain transformed signals u and p, both of size w×h×n, where w×h is the image size and n is the number of signal channels. Subtracting u and p channel-wise and taking the global average per channel gives the residual information X between the up-sampled information and the skip signal, computed as:
X_c = (1/(w×h)) * Σ_{i=1..w} Σ_{j=1..h} (u(i,j,c) - p(i,j,c))
where c denotes the c-th channel, c = 1, …, n. A bottleneck two-layer fully-connected network structure performs signal conversion on X to obtain the weight distribution vector S of the signal difference, computed as:
S = σ(W2 · δ(W1 · X))
where W1 and W2 are the weights of the two fully-connected layers, δ is the LReLU activation function and σ is the sigmoid activation function. S is then multiplied with the up-sampled signal u and, after a convolution (Conv) and LReLU transformation, the edge-enhanced position information û is obtained, computed as:

û = δ(V1 * (S·u))
where V1 represents the connection weights of a 3×3 convolution operation and δ is the LReLU activation function. The signal û is aggregated with the skip signal p and a further convolution transform yields an enhanced signal O containing both semantic and edge information as the output of the BFM module, computed as:

O = δ(V2 * concat(û, p))
where concat () is an inter-channel aggregate operation, V2Representing the connection weight in this convolution operation, c is 1, …,2 n.
Meanwhile, the decoder end uses 3 extra connections (Extra connections) to expand encoder feature maps of three different scales with larger convolution kernels of 16×16, 8×8 and 4×4 and strides of 8, 4 and 2 respectively, obtaining 3 segmentation masks at different scales; these 3 masks are averaged with the segmentation mask of the up-sampling channel to obtain the final spine segmentation mask map;
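A sketch of the three extra connections and the four-mask averaging is given below; which encoder scales feed each connection, their channel counts, and the output paddings are assumptions, chosen so that each transposed convolution yields exactly ×8, ×4 and ×2 upsampling.

```python
import torch
import torch.nn as nn

class ExtraConnections(nn.Module):
    """Sketch of the three extra decoder outputs: transposed convolutions
    with 16x16 / 8x8 / 4x4 kernels and strides 8 / 4 / 2 upsample three
    encoder scales to full-resolution masks, then average with the main
    decoder mask."""

    def __init__(self, enc_channels=(256, 128, 64), num_classes=2):
        super().__init__()
        specs = zip(enc_channels, (16, 8, 4), (8, 4, 2), (4, 2, 1))
        self.ups = nn.ModuleList(
            nn.ConvTranspose2d(c, num_classes, kernel_size=k, stride=s, padding=p)
            for c, k, s, p in specs)

    def forward(self, feats, main_mask):
        # feats must be at 1/8, 1/4 and 1/2 of the output resolution
        masks = [up(f) for up, f in zip(self.ups, feats)] + [main_mask]
        return torch.stack(masks).mean(dim=0)   # average the four masks
```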
S103, the SEDNet constructed in S102 is trained on the SpineX dataset to obtain a neural network dedicated to coarse spine segmentation, named SEDNet-S. Training uses an Nvidia GeForce RTX 2080 graphics card with a learning rate of 0.001 for 200 epochs in total, and the training loss is the cross-entropy loss:

Loss = -Σ_{k=1..K} Σ_i Σ_{c=1..n} y_i^c * log(ŷ_i^c)

where n is the number of channels of each feature map, K is the number of feature maps, y_i^c denotes the true class value of each pixel i of each channel, and ŷ_i^c is the probability that the pixel belongs to class c;
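Read this way, the loss can be sketched as a sum of per-mask cross-entropy terms over the K = 4 segmentation masks (three extra connections plus the main output); treating the masks as independent terms and the optimiser choice are assumptions.

```python
import torch
import torch.nn.functional as F

def sednet_loss(mask_logits, target):
    """mask_logits: list of K logit tensors (B, n, H, W);
    target: (B, H, W) long tensor of per-pixel class labels."""
    return sum(F.cross_entropy(m, target) for m in mask_logits)

# Illustrative settings from the text: lr = 0.001, 200 epochs.
# optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)  # optimiser assumed
```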
S2, column extraction: the overall spine segmentation result is obtained through SEDNet-S, the connected vertebral-body and sacrum part is extracted, edge and centerline detection is performed on it, and the sacrum region is removed, as shown in fig. 4. For the whole spine segmentation mask (spine region valued 1, the rest 0), the spine edge is first detected with a 5×1 dual sliding window (SW): each vertical coordinate is traversed from top to bottom, the left window slides from left to right and the right window slides from right to left on the same horizontal line; if the sum of pixel values in a window is 3, the current pixel is judged to be an edge point. The line connecting the midpoints of the left and right edge points is the spine centerline, and the mutation positions of the left and right edge points are the junction of the vertebral bodies and the sacrum; the mutation positions are connected, the sacrum region is removed, and the vertebral body region is cropped with its minimum circumscribed rectangle.
S3: and (3) vertebral body segmentation:
S301, a centerline-based non-overlapping vertebral body cutting method is adopted: the maximum vertebral body width W_max is estimated from the maximum distance between the edge points in S2; for subsequent processing, the cutting window width W_s is set as a multiple of 4 and the length H_s as half the width, as follows:

W_s = λ*W_max - mod(λ*W_max, 4)
H_s = W_s / 2

where λ is a proportionality coefficient set to 1.5 and mod is the modulus operator; the cutting window is moved without overlap along the centerline from the top of the minimum circumscribed rectangle of the vertebral body region to complete the vertebral body block cutting, with the result shown in fig. 5;
S302, all spine X-ray pictures from S101 are processed and all vertebral body images are segmented through S2 and S301, giving a vertebra patch (Vertebra Patch) image set (VP dataset) comprising 360 vertebral body blocks in total; the image set is labeled manually by drawing the edge of each vertebral body in the image, yielding segmentation mask maps of all vertebral bodies; a vertebral body block and its mask map are shown in the vertebral body segmentation module of fig. 1;
S303, SEDNet is retrained on the VP dataset and the segmentation mask maps of all vertebral bodies, with the same training settings as in S103, to obtain a deep neural network dedicated to fine vertebral body segmentation, named SEDNet-V;
S304, for any input X-ray spine image, the spine is located with the spine segmentation network SEDNet-S, and the image is then cut without overlap according to the vertebral body cutting method of S301 to obtain the corresponding vertebral body blocks. Semantic segmentation is performed on each block with the vertebral body segmentation network SEDNet-V to obtain the corresponding vertebral body segmentation mask maps;
S4, vertebral body identification: all vertebral body masks obtained in S304 are optimized, that is, an image-morphology opening operation first smooths the edges of each mask, and adaptive connection kernels, sized according to the length and width of each connected region's minimum circumscribed rectangle, break the tiny connections between different connected regions; concave-convex detection is then applied to the masks to split vertebral bodies with large depressions (partially adhered vertebral bodies), as shown in fig. 6. The concave-convex detection first detects the convex hull of each connected region, then detects all convexity defects and counts the farthest point of each defect from the convex hull together with its distance. If the farthest distance is greater than a set fraction of the width of the connected region's minimum circumscribed rectangle and the point is more than half the rectangle length away from existing cut points, it is taken as a cut point, and a cut through it along the transverse direction of the minimum circumscribed rectangle breaks apart the adhered vertebral bodies. Finally, all obtained vertebral body masks are spliced to yield 18 independent vertebral bodies with complete edges, as shown in fig. 7; from top to bottom the 18 vertebral bodies correspond to cervical vertebra 7, thoracic vertebrae 1-12 and lumbar vertebrae 1-5, realizing accurate segmentation and identification of the vertebral bodies in the spine X-ray film.

Claims (6)

1. An X-ray film spine automatic segmentation and identification method, characterized by comprising four processes of spine segmentation, column extraction, vertebral body segmentation and vertebral body identification, specifically comprising the following steps:
s1 spine segmentation:
S101, acquiring a spine X-ray film data set and labeling the data set pictures to obtain a segmentation mask map of the spine region, the mask covering 18 vertebral bodies, the sacrum and the ilium; for convenient and accurate labeling, the vertebral bodies and the sacrum are labeled as one connected region, and the two iliac parts are labeled as two connected regions;
S102, comprehensively considering spinal semantic and edge characteristics, constructing a deep neural network with an encoder-decoder architecture, wherein given an input image the encoder obtains a feature map of the input image through neural network learning, and the decoder gradually assigns a class label to each pixel from the obtained feature maps, i.e., realizes semantic segmentation;
S103, training the deep neural network constructed in S102 on the spine X-ray film data set to obtain a neural network dedicated to coarse spine segmentation, named SEDNet-S;
S2 column extraction: obtaining the overall spine segmentation result through SEDNet-S, then extracting the connected vertebral-column and sacrum part, performing edge and centerline detection on it, removing the sacrum region using the edge change, and cropping the vertebral body region with its minimum circumscribed rectangle;
s3 vertebral body segmentation:
S301, processing all spine X-ray films in S1: the column part segmented through S2 is cut without overlap using the centerline-based non-overlapping vertebral body cutting method to obtain a vertebra patch image set, and the image set is labeled by drawing the edge of each vertebral body in the image to obtain segmentation mask maps of all vertebral bodies;
S302, retraining the deep neural network on the vertebra patch image set and the segmentation mask maps of all vertebral bodies to obtain a deep neural network dedicated to fine vertebral body segmentation, named SEDNet-V;
S303, performing non-overlapping cutting on any input column image with the same cutting size as in S301 to obtain the corresponding vertebral body blocks, and performing semantic segmentation on each block with SEDNet-V to obtain the corresponding segmentation mask maps;
S4 vertebral body identification: optimizing all vertebral body mask maps obtained in step S303 with image morphological operations and concave-convex detection, and splicing the optimized masks to obtain 18 independent vertebral bodies with complete edges, the 18 vertebral bodies corresponding from top to bottom to cervical vertebra 7 through lumbar vertebra 5, realizing accurate segmentation and identification of the vertebral bodies in the spine X-ray film.
2. The X-ray film spine automatic segmentation and identification method according to claim 1, wherein in S102 the encoder extracts features with multi-scale convolution and pooling to obtain comprehensive multi-scale feature maps: 5 consecutive convolution operations are set, with the channel numbers of the convolution feature maps set to 32, 64, 128, 256 and 512 respectively and all convolution kernels sized 3×3; after each convolution result passes through an LReLU activation function for nonlinear transformation, a 2×2 max pooling strategy aggregates the feature maps, improving model robustness.
3. The X-ray film spine automatic segmentation and identification method according to claim 1, wherein in S102 the decoder up-samples the feature map layer by layer to enlarge the image size and extract the semantic segmentation features, performing 4 layers of operations with the image channel numbers set to 256, 128, 64 and 32 respectively; feature maps of each scale are simultaneously obtained from the encoder end via skip connections, an edge-preserving feature fusion mechanism performs multilayer weighted fusion of the up-sampled signal and the skip signal, and the fusion result is up-sampled and passed to the next scale, finally producing a segmentation mask the same size as the input image; meanwhile, 3 extra connections, up×8, up×4 and up×2, expand semantic features of three different encoder scales with larger convolution kernels of 16×16, 8×8 and 4×4, enriching edge information at each scale and yielding three extra segmentation masks; finally, the four obtained segmentation masks are averaged to obtain the final spine semantic segmentation result.
4. The X-ray film spine automatic segmentation and identification method according to claim 1, wherein the edge and center point detection in S2 comprises: for the overall spine segmentation result, with the spine region valued 1 and otherwise 0, detecting the spine edge with a 5×1 dual sliding window, traversing each vertical coordinate from top to bottom, sliding the left window from left to right and the right window from right to left on the same horizontal line; if the sum of pixel values in a window is 3, the current pixel is judged to be an edge point; the line connecting the midpoints of the left and right edge points is the spine centerline, and the mutation positions of the left and right edge points are the junction of the vertebral bodies and the sacrum; the mutation positions are connected, the sacrum region is removed, and the vertebral body region is cropped with its minimum circumscribed rectangle.
5. The X-ray film spine automatic segmentation and identification method according to claim 4, wherein the centerline-based non-overlapping vertebral body cutting method in step S301 specifically comprises: estimating the maximum vertebral body width W_max using the maximum distance between the edge points in S2, with the cutting window width W_s set as a multiple of 4 and the length H_s as half the width, as follows:

W_s = λ*W_max - mod(λ*W_max, 4)
H_s = W_s / 2

where λ is a proportionality coefficient set to 1.5 and mod is the modulus operator; the cutting window is moved without overlap along the centerline from the top of the minimum circumscribed rectangle of the vertebral body region to complete the vertebral body block cutting.
6. The X-ray film spine automatic segmentation and identification method according to claim 1, wherein the optimization by image morphological operation and concave-convex detection in step S4 specifically comprises: first smoothing the edges of each vertebral body mask with an image-morphology opening operation, then applying concave-convex detection to the masks to split vertebral bodies with large depressions, namely partially adhered vertebral bodies; the concave-convex detection first detects the convex hull of each connected region, then detects all convexity defects and counts the farthest point of each defect from the convex hull together with its distance; if the farthest distance is greater than a set fraction of the width of the connected region's minimum circumscribed rectangle and the vertical distance from existing cut points is more than 30 pixels, the point is regarded as a cut point, and cutting through it along the transverse direction of the minimum circumscribed rectangle breaks apart the adhered vertebral bodies.
CN202110583100.1A 2021-05-27 2021-05-27 X-ray film spine automatic segmentation and identification method Expired - Fee Related CN113205535B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110583100.1A CN113205535B (en) 2021-05-27 2021-05-27 X-ray film spine automatic segmentation and identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110583100.1A CN113205535B (en) 2021-05-27 2021-05-27 X-ray film spine automatic segmentation and identification method

Publications (2)

Publication Number Publication Date
CN113205535A CN113205535A (en) 2021-08-03
CN113205535B (en) 2022-05-06

Family

ID=77023791

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110583100.1A Expired - Fee Related CN113205535B (en) 2021-05-27 2021-05-27 X-ray film spine automatic segmentation and identification method

Country Status (1)

Country Link
CN (1) CN113205535B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861154A (en) * 2021-09-24 2023-03-28 杭州朝厚信息科技有限公司 Method for determining development stage based on X-ray head shadow image
CN114187320B (en) * 2021-12-14 2022-11-08 北京柏惠维康科技股份有限公司 Spine CT image segmentation method and spine imaging identification method and device
CN114693604A (en) * 2022-03-07 2022-07-01 北京医准智能科技有限公司 Spine medical image processing method, device, equipment and storage medium
CN114723683B (en) * 2022-03-22 2023-02-17 推想医疗科技股份有限公司 Head and neck artery blood vessel segmentation method and device, electronic device and storage medium
CN115713661B (en) * 2022-11-29 2023-06-23 湘南学院 Scoliosis Lenke parting system
CN117745722B (en) * 2024-02-20 2024-04-30 北京大学 Medical health physical examination big data optimization enhancement method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107644421A (en) * 2016-07-20 2018-01-30 上海联影医疗科技有限公司 Medical image cutting method and system
CN109493317A (en) * 2018-09-25 2019-03-19 哈尔滨理工大学 The more vertebra dividing methods of 3D based on concatenated convolutional neural network
CN110599508A (en) * 2019-08-01 2019-12-20 平安科技(深圳)有限公司 Spine image processing method based on artificial intelligence and related equipment
CN112700448A (en) * 2021-03-24 2021-04-23 成都成电金盘健康数据技术有限公司 Spine image segmentation and identification method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8437521B2 (en) * 2009-09-10 2013-05-07 Siemens Medical Solutions Usa, Inc. Systems and methods for automatic vertebra edge detection, segmentation and identification in 3D imaging
CN108230301A (en) * 2017-12-12 2018-06-29 哈尔滨理工大学 A kind of spine CT image automatic positioning dividing method based on active contour model
US10902587B2 (en) * 2018-05-31 2021-01-26 GE Precision Healthcare LLC Methods and systems for labeling whole spine image using deep neural network
CN111260650A (en) * 2018-11-15 2020-06-09 刘华清 Spine CT sequence image segmentation method and system
CN111265351B (en) * 2020-01-19 2021-08-27 国家康复辅具研究中心 Design method of personalized 3D printing scoliosis orthosis

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107644421A (en) * 2016-07-20 2018-01-30 上海联影医疗科技有限公司 Medical image cutting method and system
CN109493317A (en) * 2018-09-25 2019-03-19 哈尔滨理工大学 The more vertebra dividing methods of 3D based on concatenated convolutional neural network
CN110599508A (en) * 2019-08-01 2019-12-20 平安科技(深圳)有限公司 Spine image processing method based on artificial intelligence and related equipment
CN112700448A (en) * 2021-03-24 2021-04-23 成都成电金盘健康数据技术有限公司 Spine image segmentation and identification method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FusionNet: Edge Aware Deep Convolutional Networks for Semantic Segmentation of Remote Sensing Harbor Images; Dongcai Cheng et al.; IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing; 2017-09-30; pp. 1-15 *
Research on vertebra instance segmentation algorithm based on deep learning; 吴宇; China Masters' Theses Full-text Database (Medicine and Health Sciences); 2020-07-15; chapter 4 *

Also Published As

Publication number Publication date
CN113205535A (en) 2021-08-03

Similar Documents

Publication Publication Date Title
CN113205535B (en) X-ray film spine automatic segmentation and identification method
Huang et al. Anatomical prior based vertebra modelling for reappearance of human spines
CN110599528A (en) Unsupervised three-dimensional medical image registration method and system based on neural network
CN111047605B (en) Construction method and segmentation method of vertebra CT segmentation network model
CN112349392B (en) Human cervical vertebra medical image processing system
CN108309334B (en) Data processing method of spine X-ray image
CN113077479A (en) Automatic segmentation method, system, terminal and medium for acute ischemic stroke focus
Nie et al. Automatic detection of standard sagittal plane in the first trimester of pregnancy using 3-D ultrasound data
CN115830016B (en) Medical image registration model training method and equipment
CN111415361B (en) Method and device for estimating brain age of fetus and detecting abnormality based on deep learning
US20210271914A1 (en) Image processing apparatus, image processing method, and program
CN115512110A (en) Medical image tumor segmentation method related to cross-modal attention mechanism
JP3234668U (en) Image recognition system for scoliosis by X-ray
CN114287915A (en) Noninvasive scoliosis screening method and system based on back color image
CN114170150A (en) Retina exudate full-automatic segmentation method based on curvature loss function
CN116758087A (en) Lumbar vertebra CT bone window side recess gap detection method and device
CN115311258B (en) Method and system for automatically segmenting organs in SPECT planar image
CN104484874B (en) Living animal lower limb vascular dividing method based on CT contrast imagings
CN116152235A (en) Cross-modal synthesis method for medical image from CT (computed tomography) to PET (positron emission tomography) of lung cancer
WO2007095284A2 (en) Systems and methods for automatic symmetry identification and for quantification of asymmetry for analytic, diagnostic and therapeutic purposes
US20210307610A1 (en) Methods and systems for precise quantification of human sensory cortical areas
CN114693928A (en) Blood vessel segmentation method and imaging method of OCTA image
CN114581459A (en) Improved 3D U-Net model-based segmentation method for image region of interest of preschool child lung
NL2028748B1 (en) Automatic segmentation and identification method of spinal vertebrae based on X-ray film
CN115272386A (en) Multi-branch segmentation system for cerebral hemorrhage and peripheral edema based on automatic generation label

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220506