CN113222951A - Osteoporosis artificial intelligence diagnostic device capable of identifying hip joint X-ray - Google Patents

Osteoporosis artificial intelligence diagnostic device capable of identifying hip joint X-ray

Info

Publication number
CN113222951A
CN113222951A
Authority
CN
China
Prior art keywords
image
neural network
hip joint
osteoporosis
ray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110549854.5A
Other languages
Chinese (zh)
Other versions
CN113222951B (en)
Inventor
秦彦国
何猛
唐雄风
郭德明
沈先月
陈博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University
Priority to CN202110549854.5A
Publication of CN113222951A
Application granted
Publication of CN113222951B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00 Image analysis
                    • G06T 7/0002 Inspection of images, e.g. flaw detection
                        • G06T 7/0012 Biomedical image inspection
                    • G06T 7/10 Segmentation; Edge detection
                        • G06T 7/11 Region-based segmentation
                • G06T 2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10 Image acquisition modality
                        • G06T 2207/10116 X-ray image
                    • G06T 2207/20 Special algorithmic details
                        • G06T 2207/20081 Training; Learning
                        • G06T 2207/20084 Artificial neural networks [ANN]
                    • G06T 2207/30 Subject of image; Context of image processing
                        • G06T 2207/30004 Biomedical image processing
                            • G06T 2207/30008 Bone
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00 Pattern recognition
                    • G06F 18/20 Analysing
                        • G06F 18/24 Classification techniques
                            • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 Computing arrangements based on biological models
                    • G06N 3/02 Neural networks
                        • G06N 3/04 Architecture, e.g. interconnection topology
                            • G06N 3/045 Combinations of networks
                            • G06N 3/048 Activation functions
                        • G06N 3/08 Learning methods
    • A HUMAN NECESSITIES
        • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
            • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
                • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
                    • A61B 6/50 Clinical applications
                        • A61B 6/505 Clinical applications involving diagnosis of bone
                    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
                        • A61B 6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
                            • A61B 6/5217 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data

Abstract

The invention discloses an artificial intelligence diagnostic device that identifies osteoporosis from hip joint X-rays, comprising a data acquisition module, a model construction module, a model training module and an image detection module. The data acquisition module acquires medical images on which the Ward triangle of the hip joint has been annotated; the model construction module constructs an image segmentation neural network model and an image classification neural network model; the model training module trains the two neural network models; and the image detection module processes a hip joint X-ray film under examination with the trained segmentation and classification models to obtain a bone condition information value. With this device, osteoporosis can be pre-screened from a hip joint X-ray film alone, simply and effectively improving the diagnosis rate of osteoporosis.

Description

Osteoporosis artificial intelligence diagnostic device capable of identifying hip joint X-ray
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to an artificial intelligence diagnostic device that identifies osteoporosis from hip joint X-rays.
Background
Fracture risk increases exponentially with age: after the age of 50, almost 1 in every 2 women and 1 in every 5 men suffers a fragility fracture, and osteoporosis is an important cause of this susceptibility to fracture.
At present, early diagnosis of osteoporosis relies mainly on bone mineral density (BMD) measured by dual-energy X-ray absorptiometry (DXA); the main measurement sites are the hip and the anteroposterior lumbar spine, and DXA can be regarded as the key criterion for assessing osteoporosis and fracture risk. Quantitative computed tomography (QCT) is a more advanced method that can measure the bone density not only of cortical bone but also of cancellous bone. Numerous studies have shown that BMD measured by QCT agrees closely with DXA and has the advantage of no overlap effect, but the patient receives a higher radiation dose and the examination is relatively expensive, so QCT is used less in clinical practice. Bone density is usually expressed as a T-score: T-score = (measured BMD − peak BMD of normal young adults of the same sex and ethnicity) / standard deviation of the peak BMD of those young adults. Based on DXA measurement of the axial skeleton (lumbar vertebrae 1-4, femoral neck or total hip) or of the distal 1/3 of the radius, the diagnostic criterion for osteoporosis is a T-score ≤ −2.5. For postmenopausal women and men aged 50 and older, the WHO criteria are recommended: a BMD within 1 standard deviation of the peak value of healthy adults of the same sex and ethnicity is normal; a decrease of 1-2.5 standard deviations is low bone mass (osteopenia); a decrease of 2.5 standard deviations or more is osteoporosis; and a decrease meeting the osteoporosis criterion accompanied by one or more fragility fractures is severe osteoporosis.
For children, premenopausal women and men under 50, bone density is preferably judged by the Z-score relative to the same ethnic group: Z-score = (measured BMD − mean BMD of people of the same ethnicity, sex and age) / standard deviation of the BMD of that group. A Z-score ≤ −2.0 is regarded as "below the expected range for age", i.e. low bone mass.
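The T-score formula and WHO thresholds quoted above can be sketched as a small helper. This is an illustration only, not part of the invention; the BMD values in the usage example are made up, and the behaviour at exactly −1.0 SD (treated here as normal) is a convention the source text does not pin down.

```python
def t_score(bmd, young_adult_mean, young_adult_sd):
    """Standard deviations between a measured BMD and the young-adult peak mean."""
    return (bmd - young_adult_mean) / young_adult_sd

def who_category(t, fragility_fracture=False):
    """WHO classification as quoted above; the boundary at exactly -1.0 SD
    is treated as normal, a convention the source does not specify."""
    if t <= -2.5:
        return "severe osteoporosis" if fragility_fracture else "osteoporosis"
    if t < -1.0:
        return "low bone mass"
    return "normal"

# Hypothetical example: measured 0.70 g/cm2 against a peak of 0.94 (SD 0.12)
print(who_category(t_score(0.70, 0.94, 0.12)))
```

The Z-score follows the same subtract-and-divide form, only against age-, sex- and ethnicity-matched means rather than young-adult peaks.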
Routine clinical imaging is of limited value for assessing trabecular bone structure, lacks quantitative parameters, and is therefore not ideal for diagnosing bone loss. With advances in theory and signal processing, computer-aided diagnosis (CAD) has been applied increasingly to medical images, and texture analysis of bone images has attracted growing attention. Research results show that texture analysis has real potential for diagnosing bone disease. Recently, deep learning models, represented by convolutional neural networks, have achieved great success in medical image processing. Reviews of osteoporosis classification based on femoral neck texture analysis find that convolutional neural networks classify well, and that fusing deep features with low-level features improves classification further.
The Ward triangle is an area of trabecular weakness between the principal compressive (supporting) and trochanteric trabecular bundles, in the region of the greater trochanter, the lesser trochanter and the intertrochanteric crest. Its bone density is relatively the lowest, it is a frequent site of femoral neck fracture, and it can be identified on an anteroposterior hip X-ray. Although the Ward triangle has a high cancellous bone content and is highly sensitive to changes in bone density, its small projected area and its sampling and repeatability errors give it poor accuracy, which has limited its clinical use. Nevertheless, the Ward triangle contains more than 80% cancellous bone, serves as a calcium reserve, and is considered by some scholars to be highly sensitive to bone density change and a good predictor of fracture risk.
Because DXA produces a two-dimensional projection image, it cannot distinguish cancellous from cortical bone and cannot handle artifacts caused by tissue overlying the spine, such as aortic or abdominal calcification, which affects the accuracy of the BMD measurement. Diagnosing osteoporosis from reduced hip and lumbar vertebral density is also limited: because senile osteoporosis runs a long course, chronic inflammation or injury repair causes varying degrees of compensatory bone hyperplasia, so the bone density of the corresponding site is not obviously reduced. Relying only on conventional DXA and hip and lumbar vertebral density therefore leads to a significant missed-diagnosis rate.
Meanwhile, under the existing diagnosis and treatment system, patients often need both X-ray and DXA examinations for a bone check-up, which increases their burden; the excess image data also increases doctors' workload, making missed diagnoses and misdiagnoses more likely.
Disclosure of Invention
The invention provides an artificial intelligence diagnostic device that identifies osteoporosis from hip joint X-rays. By annotating the femoral neck region and the Ward triangle inside it on hip joint X-ray images and training a deep learning model, the device can predict the shape, position and bone density values of these regions from a hip joint X-ray image alone. Osteoporosis can thus be pre-screened from a single hip X-ray, which simply and effectively improves diagnostic accuracy, reduces missed diagnoses and misdiagnoses, raises the efficiency of clinical osteoporosis diagnosis, and lowers both the patient's economic burden and the doctor's workload.
In order to achieve the purpose, the invention provides the following scheme:
an osteoporosis artificial intelligence diagnostic device for identifying hip joint X-ray, comprising: the system comprises a data acquisition module, a model construction module, a model training module and an image detection module;
the data acquisition module is used for acquiring a medical image marked on the femoral neck area and the Ward triangular area inside the femoral neck area;
the model construction module is used for constructing an image segmentation neural network model and an image classification neural network model; the image segmentation neural network model is used for acquiring a segmentation image of the medical image; the classification neural network model is used for classifying and identifying the segmented images to obtain a bone condition information numerical value;
the model training module is used for training the image segmentation neural network model and the image classification neural network model;
the image detection module is used for processing the medical image to be detected without marking the femoral neck region and the Ward triangular region through the trained image segmentation neural network model to obtain a segmented image, and classifying and identifying the segmented image by using the trained image classification neural network model to obtain a bone condition information value.
Preferably, the medical image is a hip joint X-ray image.
Preferably, the image segmentation neural network model obtains the segmented image by acquiring an annotated image of the medical image and by performing image processing on the annotated image and the medical image.
Preferably, the image segmentation neural network model extracts and maps the image features of the input medical image by adopting an image segmentation neural network to obtain the labeled image; the image segmentation neural network includes, but is not limited to, a U-Net convolutional neural network;
the image features include femoral neck region and Ward's triangle annotation images inside it.
Preferably, the image segmentation neural network comprises an encoder and a decoder connected with each other;
the encoder is provided with a convolution layer for extracting the image features of the medical image;
the decoder is provided with a deconvolution layer and is used for acquiring the annotated images of the femoral neck region and the Ward triangle inside it from the image features.
Preferably, the image classification neural network model uses an image classification neural network to classify the segmented images; the image classification neural network includes, but is not limited to, a DenseNet dense convolutional neural network.
preferably, the DenseNet dense convolutional neural network comprises a plurality of dense blocks and a plurality of transition layers;
the transition layer is connected with the dense block in an interphase mode.
Preferably, the DenseNet dense convolutional neural network further comprises a bottleneck layer;
the bottleneck layer is used for reducing the number of input feature maps of the segmented images;
the bottleneck layer is arranged at an input port of the image classification neural network model, and the output of the bottleneck layer is connected with the first dense block in the DenseNet dense convolutional neural network.
Preferably, the DenseNet dense convolutional neural network further comprises a global average pooling layer and a classifier;
the global average pooling layer and the classifier are connected with the last of the dense blocks in the DenseNet dense convolutional neural network.
The invention has the beneficial effects that:
the invention discloses an artificial intelligent osteoporosis diagnosis device for identifying hip joint X-ray, which is manufactured based on the existing DXA inspection numerical value result, and realizes intelligent osteoporosis primary screening and auxiliary diagnosis by identifying a femoral neck region and a Ward triangle region inside the femoral neck region on a hip joint X-ray image; through constantly optimizing the recognition performance of the device, the diagnosis accuracy of osteoporosis is simply and effectively improved, missed diagnosis and misdiagnosis are reduced, and the diagnosis efficiency of clinical osteoporosis is improved. Furthermore, the patient can obtain related bone health information only through ordinary hip joint X-ray examination, so that the economic pressure of the patient is reduced, the workload of a doctor is reduced, the diagnosis and treatment efficiency of a hospital is improved, and secondary examination and radioactive ray exposure are reduced.
Drawings
To illustrate the technical solution of the present invention more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic structural diagram of an embodiment of the present invention;
FIG. 2 is a schematic view of a triangle of a hip Ward in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a U-Net convolutional neural network model in an embodiment of the present invention;
fig. 4 is a schematic flow chart of the osteoporosis diagnosis using the device of the present embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in fig. 1, the artificial intelligence diagnostic device for identifying osteoporosis from hip joint X-rays provided by the invention comprises: a data acquisition module, a model construction module, a model training module and an image detection module;
the data module is used for acquiring a medical image marked on a femoral neck region and a Ward triangle region inside the femoral neck region in the hip joint X-ray image.
Fig. 2 is a schematic diagram of the Ward triangle of the hip, which lies in the region of the greater trochanter, the lesser trochanter and the intertrochanteric crest; it has a high cancellous bone content, is highly sensitive to bone density change, and is a good predictor of fracture risk.
The model building module is used for building an image segmentation neural network model and an image classification neural network model;
In this embodiment, a U-Net convolutional neural network extracts and maps image features of the input hip joint X-ray image to obtain the annotated image; the annotated image is then processed to obtain a segmented image, which serves as the input data of the subsequent image classification neural network model. In this embodiment the segmented image is obtained by cropping and scaling.
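The cropping-and-scaling step could be sketched as below. The patent only states that the segmented image is obtained by cropping and scaling; the helper names (`mask_bbox`, `crop_and_scale`), the nearest-neighbour resampling and the output size are illustrative assumptions, and real pipelines would use an image library rather than nested lists.

```python
def mask_bbox(mask):
    """Bounding box (top, left, bottom, right) of the nonzero region of a 2-D mask."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    return rows[0], cols[0], rows[-1] + 1, cols[-1] + 1

def crop_and_scale(image, mask, out_size=64):
    """Crop the annotated region out of `image` and nearest-neighbour-resize it
    to out_size x out_size (out_size=64 is an assumed value)."""
    top, left, bottom, right = mask_bbox(mask)
    h, w = bottom - top, right - left
    return [[image[top + (r * h) // out_size][left + (c * w) // out_size]
             for c in range(out_size)]
            for r in range(out_size)]
```

Given the predicted annotation mask from the segmentation model, `crop_and_scale(xray, mask)` yields a fixed-size patch of the femoral neck region for the classifier.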
In this embodiment, the U-Net convolutional neural network comprises an encoder and a decoder connected to each other; the encoder has convolutional layers for extracting the image features of the medical image, and the decoder has deconvolution layers for acquiring the annotated image from those features.
In this embodiment, to capture image detail better, cross-layer connections carrying hip annotation image information are provided between corresponding layers of the encoder and the decoder.
The U-Net network consists of two main parts: a contracting path and an expanding path. The contracting path captures context in the picture, while the symmetric expanding path precisely locates the part of the picture to be segmented. In this embodiment the image segmentation neural network model uses a U-Net convolutional neural network to extract and map features of the hip joint X-ray image: the X-ray image is input and the corresponding annotated image is output, so the region of interest in the image is located accurately.
In this embodiment, as shown in FIG. 3, the encoder of the U-Net convolutional neural network extracts image features from the input data with nine convolutional layers. The original image size is i × 512 × 512 × 1, where i is the batch size of the training data. The first three convolutional layers have 64, 128 and 256 channels respectively, with a 4 × 4 kernel and a stride of 2. The fourth through ninth convolutional layers each have 512 channels, again with a 4 × 4 kernel and a stride of 2. The activation function of the convolutional layers is LeakyReLU with a slope of 0.2. The purpose of the decoder is to recover the annotated image of the femoral neck region and the Ward triangle inside it from the extracted feature information. The decoder consists of the corresponding eight deconvolution layers: the first six have 512 channels, and the layers from the seventh to the last have 256, 128 and 64 channels respectively, all with a 4 × 4 kernel and a stride of 2. In this embodiment, to help the decoder recover the details of the annotated image, cross-layer connections link the corresponding encoder and decoder layers. The output image of the network has the same size as the input image.
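The encoder geometry above can be checked with a short shape trace. The patent does not state the padding, so pad=1 is an assumption, chosen so that each 4 × 4, stride-2 layer exactly halves the feature map, which is the only padding consistent with nine layers reducing 512 × 512 down to a point.

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    """Spatial output size of one convolution layer (pad=1 is an assumed value)."""
    return (size + 2 * pad - kernel) // stride + 1

# Nine encoder layers as described in the text: 64, 128, 256, then six of 512 channels.
encoder_channels = [64, 128, 256] + [512] * 6
size, sizes = 512, []
for _ in encoder_channels:
    size = conv_out(size)     # each layer halves the spatial resolution
    sizes.append(size)
print(sizes)  # 512 halved nine times
```

Under this assumption the feature map shrinks 512 → 256 → … → 1, which matches an encoder that compresses the whole radiograph into a single spatial position before the decoder expands it back.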
In this embodiment, the classification neural network model uses a DenseNet dense convolutional neural network, which classifies the segmented image according to its bone condition to obtain a bone condition information value;
in this embodiment, the DenseNet dense convolutional neural network comprises a number of dense blocks and a number of transition layers, arranged alternately: each transition layer lies between two dense blocks, so there is one more dense block than there are transition layers.
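The alternating dense-block/transition structure can be illustrated by tracing channel counts. The patent gives no growth rate or compression factor, so the values below (growth rate 32, compression 0.5, DenseNet-121-style block sizes) are assumptions borrowed from the standard DenseNet design, not from the disclosure.

```python
def densenet_channels(in_ch, block_layers, growth_rate=32, compression=0.5):
    """Trace channel counts through alternating dense blocks and transition layers."""
    ch, trace = in_ch, []
    for i, n_layers in enumerate(block_layers):
        ch += n_layers * growth_rate          # every layer in a dense block adds growth_rate maps
        trace.append(("dense_block", ch))
        if i < len(block_layers) - 1:         # one fewer transition than dense blocks
            ch = int(ch * compression)        # a transition layer compresses the channels
            trace.append(("transition", ch))
    return trace

print(densenet_channels(64, [6, 12, 24, 16]))
```

The trace shows why the transitions matter: without the compression step the channel count would grow unboundedly through the concatenations inside the dense blocks.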
In this embodiment, to reduce the number of input feature maps of the segmented images, the DenseNet dense convolutional neural network is also provided with a bottleneck layer, consisting of a convolutional layer and an average pooling layer; the bottleneck layer sits at the input of the image classification neural network model and its output is connected to the first dense block.
In this embodiment, the last dense block of the DenseNet dense convolutional neural network is also connected to a global average pooling layer and a classifier.
In this embodiment, the bottleneck layer, the DenseNet dense convolutional neural network, and the global average pooling layer with classifier together reduce the number of input feature maps more effectively and improve computational efficiency.
The model training module trains and optimises the image segmentation neural network model and the image classification neural network model on sample data, improving diagnostic accuracy.
The image detection module inputs the medical image under examination, on which the femoral neck region and Ward triangle are not annotated, into the trained image segmentation neural network model to obtain an annotated image of the hip joint X-ray; it then crops and scales the X-ray image and the annotated image to obtain a segmented image, classifies that segmented image with the trained image classification neural network model to obtain the corresponding bone condition information, and evaluates the degree of osteoporosis from the value output by the model.
Fig. 4 is a schematic flow chart of the osteoporosis diagnosis using the device of the embodiment, which includes the following steps:
S1, obtain hip joint X-ray images and bone density DXA examination data, manually annotate the femoral neck region and the Ward triangle inside it on each hip joint X-ray image, and assign each image a label from the corresponding DXA bone density result;
S2, construct an image segmentation neural network model and an image classification neural network model. Through training and learning, the segmentation model produces an annotated image from a hip joint X-ray image; the X-ray image is then cropped and scaled to obtain a segmented image, which serves as the input data of the classification model. Through training and learning, the classification model classifies the input segmented images by bone condition to obtain a bone condition information value;
s3, training the image segmentation neural network model and the image classification neural network model by using sample data;
S4, take the hip joint X-ray image under examination as input to the trained image segmentation neural network model to obtain its annotated image, then crop and scale the X-ray image and the annotated image to obtain a segmented image;
and S5, classify the obtained segmented images with the trained image classification neural network model to obtain the bone condition information corresponding to the image.
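The inference half of the flow (S4 onward) amounts to three calls in sequence. The sketch below is glue code only: the model callables are stand-ins for the trained networks, and the stub values in the usage example are invented for illustration.

```python
def diagnose(xray, seg_model, crop_and_scale, cls_model):
    """Steps S4-S5: segment, crop/scale, then classify into a bone-condition value."""
    annotation = seg_model(xray)              # S4: predict the annotated image
    patch = crop_and_scale(xray, annotation)  # S4: crop and scale to a segmented image
    return cls_model(patch)                   # S5: bone condition information value

# Stub callables standing in for the trained networks and preprocessing:
seg_model = lambda img: [[1 for _ in row] for row in img]   # mark everything
crop_fn = lambda img, ann: img                              # identity crop
cls_model = lambda patch: -2.7                              # invented bone-condition value
print(diagnose([[0.5]], seg_model, crop_fn, cls_model))
```

Keeping the three stages behind plain callables mirrors the module split in the device: either network can be retrained or swapped without touching the detection logic.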
The above-described embodiments are merely illustrative of the preferred embodiments of the present invention, and do not limit the scope of the present invention, and various modifications and improvements of the technical solutions of the present invention can be made by those skilled in the art without departing from the spirit of the present invention, and the technical solutions of the present invention are within the scope of the present invention defined by the claims.

Claims (9)

1. An osteoporosis artificial intelligence diagnostic device for identifying hip joint X-ray, which is characterized by comprising: the system comprises a data acquisition module, a model construction module, a model training module and an image detection module;
the data acquisition module is used for acquiring a medical image marked on the femoral neck area and the Ward triangular area inside the femoral neck area;
the model construction module is used for constructing an image segmentation neural network model and an image classification neural network model; the image segmentation neural network model is used for acquiring a segmentation image of the medical image; the classification neural network model is used for classifying and identifying the segmented image to obtain a bone condition information numerical value;
the model training module is used for training the image segmentation neural network model and the image classification neural network model;
the image detection module is used for processing the medical image to be detected without marking the femoral neck region and the Ward triangular region through the trained image segmentation neural network model to obtain a segmented image, and classifying and identifying the segmented image by using the trained image classification neural network model to obtain a bone condition information value.
2. The artificial intelligence diagnosis apparatus for osteoporosis identifying hip joint X-ray according to claim 1, wherein the medical image is a hip joint X-ray image.
3. The artificial intelligence diagnosis apparatus for osteoporosis identifying the X-ray of the hip joint according to claim 1,
the image segmentation neural network model obtains the segmentation image by acquiring an annotation image of the medical image and by processing the annotation image and the medical image.
4. The artificial intelligence diagnosis apparatus for osteoporosis identifying the X-ray of the hip joint according to claim 3,
the image segmentation neural network model adopts an image segmentation neural network to extract and map the image characteristics of the input medical image to obtain the labeled image; the image segmentation neural network includes, but is not limited to, a U-Net convolutional neural network;
the image features include annotated images of the femoral neck region and the Ward triangle inside it.
5. The artificial intelligence diagnosis apparatus for osteoporosis identifying the X-ray of the hip joint according to claim 4,
the image segmentation neural network comprises an encoder and a decoder which are connected with each other;
the encoder is provided with a convolution layer for extracting the image features of the medical image;
the decoder is provided with a deconvolution layer and is used for acquiring the annotated images of the femoral neck region and the Ward triangle inside it from the image features.
6. The artificial intelligence diagnosis apparatus for osteoporosis identifying the X-ray of the hip joint according to claim 1,
the image classification neural network model adopts an image classification neural network to classify and judge the segmentation images; the image classification neural network includes, but is not limited to, a DenseNet dense convolutional neural network.
7. The osteoporosis artificial intelligence diagnostic device capable of identifying hip joint X-rays according to claim 6, wherein
the DenseNet dense convolutional neural network comprises a plurality of dense blocks and a plurality of transition layers;
the transition layers alternate with the dense blocks.
8. The osteoporosis artificial intelligence diagnostic device capable of identifying hip joint X-rays according to claim 7, wherein
the DenseNet dense convolutional neural network further comprises a bottleneck layer;
the bottleneck layer is used for reducing the number of input feature maps of the segmented images;
the bottleneck layer is arranged at the input of the image classification neural network model, and its output is connected to the first dense block of the DenseNet dense convolutional neural network.
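For illustration only (not part of the claims): in a DenseNet-style network as in claims 7 and 8, each layer inside a dense block concatenates a fixed number of new feature maps (the growth rate), and each transition layer between blocks compresses the channel count, which is why the transition layers alternate with the dense blocks. A hedged arithmetic sketch with illustrative values (growth rate 32, compression factor 0.5 are common DenseNet defaults, not figures specified in this patent):

```python
def densenet_channels(c_in, num_blocks, layers_per_block, growth_rate, compression=0.5):
    """Channel count after each dense block and each intervening transition layer."""
    c = c_in
    trace = []
    for b in range(num_blocks):
        c += layers_per_block * growth_rate   # each layer concatenates growth_rate maps
        trace.append(c)
        if b < num_blocks - 1:                # transition layers sit between blocks
            c = int(c * compression)          # 1x1 conv + pooling compresses channels
            trace.append(c)
    return trace

# e.g. 64 channels leaving the input bottleneck, 3 dense blocks of 6 layers each:
trace = densenet_channels(64, 3, 6, 32)       # [256, 128, 320, 160, 352]
```

Without the transition layers' compression, the concatenation in each dense block would make the channel count grow without bound.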
9. The osteoporosis artificial intelligence diagnostic device capable of identifying hip joint X-rays according to claim 8, wherein
the DenseNet dense convolutional neural network further comprises a global average pooling layer and a classifier;
the global average pooling layer and the classifier are connected to the last dense block of the DenseNet dense convolutional neural network.
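For illustration only (not part of the claims): the global average pooling layer of claim 9 collapses each final feature map to a single mean value, and the classifier maps the resulting channel vector to a bone-condition class. A minimal sketch; the two-class linear classifier and its weights below are purely hypothetical:

```python
def global_average_pool(feature_maps):
    """Reduce each HxW feature map to its mean: one value per channel."""
    return [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
            for fm in feature_maps]

def classify(pooled, weights, biases):
    """Linear classifier over the pooled vector; returns the argmax class index."""
    scores = [b + sum(w * x for w, x in zip(ws, pooled))
              for ws, b in zip(weights, biases)]
    return scores.index(max(scores))

# Two tiny 2x2 feature maps stand in for the last dense block's output:
pooled = global_average_pool([[[1, 3], [5, 7]], [[0, 2], [2, 0]]])  # [4.0, 1.0]
label = classify(pooled, weights=[[1.0, -1.0], [-1.0, 1.0]], biases=[0.0, 0.0])
```

Pooling to one value per channel makes the classifier's input size independent of the radiograph's spatial resolution.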
CN202110549854.5A 2021-05-20 2021-05-20 Osteoporosis artificial intelligent diagnosis device for recognizing hip joint X-ray Active CN113222951B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110549854.5A CN113222951B (en) 2021-05-20 2021-05-20 Osteoporosis artificial intelligent diagnosis device for recognizing hip joint X-ray

Publications (2)

Publication Number Publication Date
CN113222951A true CN113222951A (en) 2021-08-06
CN113222951B CN113222951B (en) 2023-05-02

Family

ID=77093771

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110549854.5A Active CN113222951B (en) 2021-05-20 2021-05-20 Osteoporosis artificial intelligent diagnosis device for recognizing hip joint X-ray

Country Status (1)

Country Link
CN (1) CN113222951B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1633594A (en) * 2002-02-27 2005-06-29 成像治疗仪股份有限公司 Methods and devices for quantitative analysis of X-ray images
CN1682236A (en) * 2002-08-20 2005-10-12 成像治疗仪股份有限公司 Methods and devices for analysis of X-ray images
US20070047794A1 (en) * 2000-10-11 2007-03-01 Philipp Lang Methods and devices for analysis of x-ray images
CN110033438A (en) * 2019-03-14 2019-07-19 上海市第六人民医院 Hip joint tagging system and its labeling method
CN110648337A (en) * 2019-09-23 2020-01-03 武汉联影医疗科技有限公司 Hip joint segmentation method, hip joint segmentation device, electronic apparatus, and storage medium
CN110796636A (en) * 2019-09-25 2020-02-14 中国人民解放军战略支援部队信息工程大学 CT image bone condition detection method and device based on convolutional neural network
CN112396591A (en) * 2020-11-25 2021-02-23 暨南大学附属第一医院(广州华侨医院) Osteoporosis intelligent evaluation method based on lumbar X-ray image

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
S. M. Nazia Fathima et al.: "Estimation of t-score and BMD values from X-ray images for detection of osteoporosis" *
Wang Lipeng et al.: "Deep learning and automated processing of skeletal images", Radiologic Practice *
Yuan Weijian et al.: "Stem cells combined with three-dimensional scaffolds for treating osteoporotic bone defects", Chinese Journal of Tissue Engineering Research *
Gao Xiaoling et al.: "Imaging features and misdiagnosis analysis of primary bone lymphoma" *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114049315A (en) * 2021-10-29 2022-02-15 北京长木谷医疗科技有限公司 Joint recognition method, electronic device, storage medium, and computer program product
CN114049315B (en) * 2021-10-29 2023-04-18 北京长木谷医疗科技有限公司 Joint recognition method, electronic device, storage medium, and computer program product
CN113822231A (en) * 2021-11-08 2021-12-21 中国人民解放军陆军特色医学中心 Inter-trochanter fracture operation auxiliary system based on deep learning image recognition
CN114723763A (en) * 2022-05-24 2022-07-08 博志生物科技(深圳)有限公司 Medical image segmentation method, device, equipment and storage medium
CN117635951A (en) * 2024-01-24 2024-03-01 苏州大学附属第二医院 Determination method and system for automatically identifying hip osteoporosis based on X-ray image
CN117635951B (en) * 2024-01-24 2024-05-03 苏州大学附属第二医院 Determination method and system for automatically identifying hip osteoporosis based on X-ray image

Also Published As

Publication number Publication date
CN113222951B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN113222951B (en) Osteoporosis artificial intelligent diagnosis device for recognizing hip joint X-ray
CN1663530B (en) Methods and apparatus for processing image data to aid in detecting disease
Areeckal et al. Current and emerging diagnostic imaging-based techniques for assessment of osteoporosis and fracture risk
Tang et al. CNN-based qualitative detection of bone mineral density via diagnostic CT slices for osteoporosis screening
Yang et al. Feasibility of automatic measurements of hip joints based on pelvic radiography and a deep learning algorithm
Jodeiri et al. Fully automatic estimation of pelvic sagittal inclination from anterior-posterior radiography image using deep learning framework
Al-Fahoum et al. Automated detection of lung cancer using statistical and morphological image processing techniques
Nazia Fathima et al. A Survey on Osteoporosis Detection Methods with a Focus on X-ray and DEXA Images
Ou et al. A sub-pixel accurate quantification of joint space narrowing progression in rheumatoid arthritis
US20210212647A1 (en) Estimating bone mineral density from plain radiograph by assessing bone texture with deep learning
Moeskops et al. Automatic quantification of body composition at L3 vertebra level with convolutional neural networks
CN116433620A (en) CT image-based bone mineral density prediction and osteoporosis intelligent screening method and system
Michopoulou Image analysis for the diagnosis of MR images of the lumbar spine
Gu et al. Bone mineral density estimation from a plain X-ray image by learning decomposition into projections of bone-segmented computed tomography
Sha et al. The improved faster-RCNN for spinal fracture lesions detection
KR102258070B1 (en) Method for evaluating foot type and device evaluating foot type using the same
Sevestre-Ghalila et al. Texture image analysis for osteoporosis detection with morphological tools
Döhnert et al. Validity of computed photogrammetry for detecting idiopathic scoliosis in adolescents
Korstjens et al. Reliability of an image analysis system for quantifying the radiographic trabecular pattern
CN112907507A (en) Graf method hip joint ultrasonic image measuring method, device, equipment and storage medium
Taleb-Ahmed et al. On information fusion to improve segmentation of MRI sequences
Shakya et al. Benchmarking Encoder-Decoder Architectures for Biplanar X-ray to 3D Shape Reconstruction
Mesanovic et al. Application of lung segmentation algorithm to disease quantification from CT images
Mezlini et al. High resolution volume quantification of the knee joint space based on a semi-automatic segmentation of computed tomography images
Zrimec et al. 3D modelling and visualization of the human lung

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Qin Yanguo; He Meng; Guo Deming; Tang Xiongfeng; Shen Xianyue; Chen Bo
Inventor before: Qin Yanguo; He Meng; Tang Xiongfeng; Guo Deming; Shen Xianyue; Chen Bo
GR01 Patent grant