Artificial intelligence diagnostic device for identifying osteoporosis from hip joint X-rays
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to an artificial intelligence diagnosis device for identifying osteoporosis from a hip joint X-ray.
Background
Fracture risk increases exponentially with age; after 50 years of age, almost 1 in 2 women and 1 in 5 men suffer a fragility fracture, with osteoporosis being an important cause of susceptibility to fracture.
At present, the early diagnosis of osteoporosis is mainly based on bone mineral density (BMD) measurement by dual-energy X-ray absorptiometry (DXA), with the hip and the anteroposterior lumbar spine as the main measurement sites; these can be regarded as important criteria for assessing osteoporosis and fracture risk. Quantitative computed tomography (QCT) is a relatively advanced method that can measure the bone density not only of cortical bone but also of cancellous bone. Numerous studies have shown that BMD measured by QCT agrees closely with DXA and has the advantage of no overlap effect, but the patient receives a higher radiation dose and the examination cost is relatively high, so QCT is used less often clinically. Bone density is usually expressed as a T-score: T = (measured BMD − peak BMD of healthy young adults of the same sex and ethnicity) / standard deviation of that reference population. Based on DXA measurement of the central skeleton (lumbar vertebrae 1-4, femoral neck, or total hip) or the distal third of the radius, the diagnostic standard for osteoporosis is a T-score ≤ −2.5. For postmenopausal women and men aged 50 years and older, the criteria recommended by the WHO are: a BMD within 1 standard deviation of the young-adult peak is normal; a reduction of 1 to 2.5 standard deviations is low bone mass (osteopenia); a reduction of 2.5 standard deviations or more is osteoporosis; and a degree of bone density reduction meeting the osteoporosis standard, accompanied by one or more fragility fractures, is severe osteoporosis.
For children, premenopausal women, and men under 50 years old, bone density is instead expressed as a Z-score relative to an age-matched reference: Z = (measured BMD − mean BMD of people of the same ethnicity, sex, and age) / standard deviation of that reference population. A Z-score ≤ −2.0 is considered "below the expected range for the same age group", i.e., low bone mass.
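As a concrete illustration of the T-score and Z-score criteria above, the following sketch computes the scores and applies the WHO categories; the reference mean and standard deviation in the usage example are hypothetical values for illustration, not taken from any real reference database.

```python
def t_score(bmd, young_adult_mean, young_adult_sd):
    """T-score: standard deviations between measured BMD and the peak
    BMD of healthy young adults of the same sex and ethnicity."""
    return (bmd - young_adult_mean) / young_adult_sd

def z_score(bmd, age_matched_mean, age_matched_sd):
    """Z-score: same idea, but against an age-, sex-, and
    ethnicity-matched reference population."""
    return (bmd - age_matched_mean) / age_matched_sd

def who_category(t, fragility_fracture=False):
    """WHO classification for postmenopausal women and men aged 50+."""
    if t <= -2.5:
        return "severe osteoporosis" if fragility_fracture else "osteoporosis"
    if t < -1.0:
        return "low bone mass"
    return "normal"

# Hypothetical reference values (g/cm^2), for illustration only.
t = t_score(bmd=0.70, young_adult_mean=0.94, young_adult_sd=0.12)
print(round(t, 1), who_category(t))  # -2.0 low bone mass
```

Note the boundary behaviour: a T-score of exactly −1.0 still counts as normal (within 1 SD), while exactly −2.5 counts as osteoporosis.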
Clinical routine imaging examinations are of limited value for assessing trabecular bone structure, lack relevant quantitative parameters, and are not ideal for diagnosing bone mass loss. With theoretical developments and advances in signal processing technology, computer-aided diagnosis (CAD) has been increasingly applied to the study of medical images. In recent years, more and more attention has been paid to texture analysis of bone images, and research results show that texture analysis has practical value for bone disease diagnosis. Recently, deep learning models, represented by convolutional neural networks, have enjoyed great success in medical image processing. A review of the large body of work on osteoporosis classification by femoral neck texture analysis shows that convolutional neural networks classify well, and that fusing deep features with low-level features further improves classification performance.
The Ward triangle is the region of relative weakness in the trabecular bone of the femoral neck, lying between the principal compressive, principal tensile, and greater trochanter trabecular groups, where bone density is relatively lowest; it is a common site of femoral neck fracture and can be identified on an anteroposterior hip X-ray. Although the Ward triangle has a high cancellous bone content and is highly sensitive to changes in bone density, its small projection area leads to sampling and repeatability errors, so its measurement accuracy is poor and its clinical application has been limited. Nevertheless, the Ward triangle contains more than 80% cancellous bone, serves as a calcium reserve area, and is considered by some scholars to be highly sensitive to bone density change and a better predictor of fracture risk.
Since DXA produces a two-dimensional projection image, it cannot distinguish cancellous from cortical bone and cannot handle artifacts caused by tissue overlying the spine, such as aortic or abdominal calcification, which affects the accuracy of BMD measurement. There are also limitations in diagnosing osteoporosis from reduced hip and lumbar vertebral density alone: because senile osteoporosis has a long disease course, chronic inflammation or injury repair causes compensatory bone hyperplasia of varying degrees, so the bone density at the corresponding site may not be obviously reduced. Relying only on conventional DXA measurement of hip and lumbar density therefore carries a significant missed-diagnosis rate.
Meanwhile, in the existing diagnosis and treatment workflow, patients often need to undergo both X-ray and DXA examinations during a bone check-up, which increases the burden on the patient; the excess image data also increases the doctor's workload, making missed diagnosis and misdiagnosis more likely.
Disclosure of Invention
The invention provides an artificial intelligence diagnosis device for identifying osteoporosis from a hip joint X-ray. By labeling the femoral neck region, and the Ward triangle region inside it, on hip joint X-ray images and training a deep learning model, the device can predict the shape, position, and bone density values of the femoral neck region and the Ward triangle region on a hip joint X-ray image. Osteoporosis can thus be primarily screened from the hip joint X-ray image alone, which simply and effectively improves diagnostic accuracy, reduces missed diagnosis and misdiagnosis, improves the efficiency of clinical osteoporosis diagnosis, and reduces both the patient's economic burden and the doctor's workload.
In order to achieve the purpose, the invention provides the following scheme:
An osteoporosis artificial intelligence diagnostic device for identifying a hip joint X-ray, comprising: a data acquisition module, a model construction module, a model training module, and an image detection module;
the data acquisition module is used for acquiring medical images in which the femoral neck region, and the Ward triangle region inside it, are marked;
the model construction module is used for constructing an image segmentation neural network model and an image classification neural network model; the image segmentation neural network model is used for obtaining a segmented image of the medical image; the image classification neural network model is used for classifying the segmented image to obtain a bone condition information value;
the model training module is used for training the image segmentation neural network model and the image classification neural network model;
the image detection module is used for processing a medical image to be detected, in which the femoral neck region and the Ward triangle region are not marked, through the trained image segmentation neural network model to obtain a segmented image, and for classifying the segmented image with the trained image classification neural network model to obtain a bone condition information value.
Preferably, the medical image is a hip joint X-ray image.
Preferably, the image segmentation neural network model obtains the segmented image by first generating an annotated image of the medical image and then performing image processing on the annotated image together with the medical image.
Preferably, the image segmentation neural network model adopts an image segmentation neural network to extract and map image features of the input medical image to obtain the annotated image; the image segmentation neural network includes, but is not limited to, a U-Net convolutional neural network;
the image features include annotation images of the femoral neck region and the Ward triangle inside it.
Preferably, the image segmentation neural network comprises an encoder and a decoder connected with each other;
the encoder is provided with a convolution layer for extracting the image features of the medical image;
the decoder is provided with a deconvolution layer and is used for recovering, from the image features, the annotation images of the femoral neck region and the Ward triangle region inside it.
Preferably, the image classification neural network model adopts an image classification neural network to classify the segmented images; the image classification neural network includes, but is not limited to, a DenseNet dense convolutional neural network.
preferably, the DenseNet dense convolutional neural network comprises a plurality of dense blocks and a plurality of transition layers;
the transition layer is connected with the dense block in an interphase mode.
Preferably, the DenseNet dense convolutional neural network further comprises a bottleneck layer;
the bottleneck layer is used for reducing the number of input feature maps derived from the segmented images;
the bottleneck layer is arranged at an input port of the image classification neural network model, and the output of the bottleneck layer is connected with the first dense block in the DenseNet dense convolutional neural network.
Preferably, the DenseNet dense convolutional neural network further comprises a global average pooling layer and a classifier;
the global average pooling layer and the classifier are connected to the last dense block in the DenseNet dense convolutional neural network.
The invention has the beneficial effects that:
The invention discloses an artificial intelligence osteoporosis diagnosis device for identifying a hip joint X-ray. Built on existing DXA examination value results, it realizes intelligent primary screening and auxiliary diagnosis of osteoporosis by identifying the femoral neck region, and the Ward triangle region inside it, on a hip joint X-ray image. By continuously optimizing the recognition performance of the device, the diagnostic accuracy of osteoporosis is simply and effectively improved, missed diagnosis and misdiagnosis are reduced, and the efficiency of clinical osteoporosis diagnosis is improved. Furthermore, the patient can obtain relevant bone health information from an ordinary hip joint X-ray examination alone, which reduces the patient's economic burden and the doctor's workload, improves the hospital's diagnosis and treatment efficiency, and avoids secondary examinations and additional radiation exposure.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed in the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of an embodiment of the present invention;
FIG. 2 is a schematic view of the Ward triangle of the hip in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a U-Net convolutional neural network model in an embodiment of the present invention;
FIG. 4 is a schematic flow chart of osteoporosis diagnosis using the device of an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in FIG. 1, the artificial intelligence diagnosis device for identifying osteoporosis from a hip joint X-ray provided by the invention comprises: a data acquisition module, a model construction module, a model training module, and an image detection module.
The data acquisition module is used for acquiring medical images in which the femoral neck region, and the Ward triangle region inside it, are marked on the hip joint X-ray image.
FIG. 2 is a schematic diagram of the Ward triangle of the hip, the region of lowest trabecular density in the femoral neck; it has a high cancellous bone content, high sensitivity to bone density change, and better prediction of fracture risk.
The model building module is used for building an image segmentation neural network model and an image classification neural network model;
in this embodiment, a U-Net convolutional neural network is used to extract and map image features of an input hip joint X-ray image to obtain the labeled image, and the image processing on the labeled image is performed to obtain a segmented image, which is used as input data of a subsequent image classification neural network model. In the present embodiment, the segmented image is acquired by means of cropping and scaling.
In this embodiment, the U-Net convolutional neural network comprises an encoder and a decoder connected with each other; the encoder is provided with convolutional layers for extracting image features of the medical image; the decoder is provided with deconvolution layers for recovering the annotated image from the image features.
in the present embodiment, in order to better acquire the image detail information, a cross-layer connection for acquiring the hip annotation image information is provided between the layers of the encoder and the decoder.
The U-Net network as a whole consists of two parts: a contracting path and an expanding path. The contracting path captures context information in the picture, while the symmetric expanding path precisely locates the part of the picture to be segmented. In this embodiment, the image segmentation neural network model adopts a U-Net convolutional neural network for feature extraction and mapping of the hip joint X-ray image: the hip joint X-ray image is input into the model, and the corresponding annotated image is output, so that the region of interest in the image can be located accurately.
In this embodiment, the encoder of the U-Net convolutional neural network extracts image features from the input data using nine convolutional layers, as shown in FIG. 3. The input image size is i × 512 × 512 × 1, where i is the batch size of the training data. The first three convolutional layers have 64, 128, and 256 channels respectively, with a kernel size of 4 × 4 and a stride of 2. The fourth through ninth convolutional layers each have 512 channels, with a kernel size of 4 × 4 and a stride of 2. The activation function of the convolutional layers is LeakyReLU with a slope of 0.2. The purpose of the decoder is to recover the annotation image of the femoral neck region, and the Ward triangle inside it, from the extracted hip joint X-ray image features. The decoder consists of nine corresponding deconvolution layers: the first six have 512 channels, and the remaining layers have 256, 128, and 64 channels respectively, all with a kernel size of 4 × 4 and a stride of 2. In this embodiment, to help the decoder better recover the details of the annotated image, cross-layer connections link the corresponding encoder and decoder layers. The output image size of the network is the same as the input image size.
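The layer dimensions above can be checked with a little stride arithmetic. With a 4 × 4 kernel, stride 2, and padding 1 (the padding is an assumption, since the text does not state it), each convolution halves the spatial size, so nine encoder layers take 512 down to 1, and a symmetric stack of nine stride-2 transposed convolutions doubles it back to 512, matching the stated input/output sizes.

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    """Strided convolution output size: floor((n + 2p - k) / s) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

def deconv_out(size, kernel=4, stride=2, pad=1):
    """Matching transposed-convolution output size (inverts conv_out)."""
    return (size - 1) * stride - 2 * pad + kernel

# Encoder: nine stride-2 convolutions halve 512 down to 1.
sizes = [512]
for _ in range(9):
    sizes.append(conv_out(sizes[-1]))
print(sizes)  # [512, 256, 128, 64, 32, 16, 8, 4, 2, 1]

# Decoder: nine matching transposed convolutions double back to 512.
up = sizes[-1]
for _ in range(9):
    up = deconv_out(up)
print(up)  # 512
```

This also shows why the cross-layer (skip) connections are natural here: each decoder stage produces exactly the spatial size of its mirror-image encoder stage, so their feature maps can be concatenated directly.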
In this embodiment, the image classification neural network model adopts a DenseNet dense convolutional neural network, which classifies the segmented image according to the bone condition it shows to obtain a bone condition information value;
in this embodiment, the DenseNet dense convolutional neural network includes a number of dense blocks and a number of transition layers; the dense blocks and the transition layers alternate, with a transition layer between each pair of consecutive dense blocks, so that the number of dense blocks is one more than the number of transition layers.
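Channel bookkeeping in this alternating structure can be sketched as follows. The growth rate of 32, the compression factor of 0.5, and the per-block layer counts are standard DenseNet defaults assumed here for illustration, not values taken from the embodiment.

```python
def dense_block_out(in_channels, num_layers, growth_rate=32):
    """Each layer of a dense block sees the concatenation of all earlier
    feature maps and contributes `growth_rate` new ones."""
    return in_channels + num_layers * growth_rate

def transition_out(in_channels, compression=0.5):
    """A transition layer compresses channels with a 1x1 convolution
    and halves the spatial resolution with average pooling."""
    return int(in_channels * compression)

# Three dense blocks with transition layers between them (blocks = transitions + 1).
c = 64
for i, layers in enumerate([6, 12, 24]):
    c = dense_block_out(c, layers)
    if i < 2:  # no transition layer after the last dense block
        c = transition_out(c)
print(c)  # channel count entering the global average pooling layer
```

The transition layers keep the channel count from growing without bound as the concatenations accumulate, which is the point of the alternating block/transition layout.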
In this embodiment, in order to reduce the number of input feature maps of the segmented images, the DenseNet dense convolutional neural network is further provided with a bottleneck layer comprising a convolutional layer and an average pooling layer; the bottleneck layer is located at the input port of the image classification neural network model, and its output is connected to the first dense block.
In this embodiment, in the structure of the DenseNet dense convolutional neural network, the last dense block is also connected with a global average pooling layer and a classifier.
In this embodiment, the structure bottleneck layer → dense blocks with transition layers → global average pooling layer → classifier reduces the number of input feature maps more effectively and improves computational efficiency.
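The global-average-pooling head described above can be sketched with NumPy. The channel count, the three bone-condition classes, and the zero-initialized weights are illustrative assumptions.

```python
import numpy as np

def global_avg_pool(feature_maps):
    """Collapse each H x W feature map to its mean: (C, H, W) -> (C,)."""
    return feature_maps.mean(axis=(1, 2))

def classify(features, weights, bias):
    """Linear classifier head: softmax over bone-condition classes."""
    logits = features @ weights + bias
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exp / exp.sum()

fmaps = np.random.default_rng(1).random((1024, 7, 7))  # last dense block output
w = np.zeros((1024, 3))                                # 3 hypothetical classes
probs = classify(global_avg_pool(fmaps), w, np.zeros(3))
print(probs)  # uniform [1/3, 1/3, 1/3] with all-zero weights
```

Global average pooling replaces a large flattening layer with one scalar per feature map, which is exactly the "fewer input feature maps, better efficiency" trade-off the embodiment describes.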
The model training module trains and optimizes the image segmentation neural network model and the image classification neural network model with sample data to improve diagnostic accuracy.
The image detection module inputs the medical image to be detected, in which the femoral neck region and the Ward triangle region are not marked, into the trained image segmentation neural network model to obtain an annotated image of the hip joint X-ray to be detected; the X-ray image and the annotated image are then cropped and scaled to obtain a segmented image; finally, the trained image classification neural network model classifies the segmented image to obtain the corresponding bone condition information, and the degree of osteoporosis is evaluated from the value output by the model.
FIG. 4 is a schematic flow chart of osteoporosis diagnosis using the device of this embodiment, which includes the following steps:
S1, obtaining hip joint X-ray images and bone density DXA examination data, manually marking the femoral neck region, and the Ward triangle region inside it, in each hip joint X-ray image, and assigning to each image the corresponding bone density DXA examination value;
S2, constructing an image segmentation neural network model and an image classification neural network model, wherein the image segmentation neural network model learns, through training, to annotate the hip joint X-ray image, which is then cropped and scaled to obtain a segmented image serving as input data for the image classification neural network model; the image classification neural network model learns, through training, to classify the input segmented image by bone condition to obtain a bone condition information value;
s3, training the image segmentation neural network model and the image classification neural network model by using sample data;
S4, taking the hip joint X-ray image to be detected as input to the trained image segmentation neural network model, obtaining an annotated image of the X-ray to be detected, and cropping and scaling the X-ray image and the annotated image to obtain a segmented image;
S5, classifying the obtained segmented image with the trained image classification neural network model to obtain the bone condition information corresponding to the image.
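The S4-S5 flow above can be sketched end to end with stub models standing in for the trained U-Net and DenseNet; `segmentation_model`, `classification_model`, and everything inside the stubs are hypothetical placeholders, not the device's actual implementation.

```python
import numpy as np

def crop_and_scale(image, mask, size=128):
    """S4: crop to the predicted mask's bounding box and rescale."""
    ys, xs = np.nonzero(mask)
    roi = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    iy = np.arange(size) * roi.shape[0] // size
    ix = np.arange(size) * roi.shape[1] // size
    return roi[iy][:, ix]

def diagnose(xray, segmentation_model, classification_model):
    """S4-S5: segment the femoral neck / Ward triangle region, crop and
    scale it, then classify it to a bone condition value."""
    mask = segmentation_model(xray)
    roi = crop_and_scale(xray, mask)
    return classification_model(roi)

# Stub models for illustration: a fixed central mask, a mean-intensity score.
seg_stub = lambda img: np.pad(np.ones((100, 100), bool), 206)  # 512x512 mask
cls_stub = lambda roi: float(roi.mean())
score = diagnose(np.ones((512, 512)), seg_stub, cls_stub)
print(score)  # 1.0 for an all-ones image
```

In the real device, `segmentation_model` would be the trained U-Net, `classification_model` the trained DenseNet, and the returned value the bone condition information used to evaluate the degree of osteoporosis.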
The above-described embodiments merely illustrate preferred embodiments of the present invention and do not limit its scope; various modifications and improvements made by those skilled in the art to the technical solution of the present invention without departing from its spirit shall fall within the protection scope defined by the claims.