CN113222951B - Osteoporosis artificial intelligent diagnosis device for recognizing hip joint X-ray - Google Patents


Info

Publication number
CN113222951B
CN113222951B
Authority
CN
China
Prior art keywords
image
neural network
network model
hip joint
ray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110549854.5A
Other languages
Chinese (zh)
Other versions
CN113222951A (en)
Inventor
秦彦国 (Qin Yanguo)
何猛 (He Meng)
郭德明 (Guo Deming)
唐雄风 (Tang Xiongfeng)
沈先月 (Shen Xianyue)
陈博 (Chen Bo)
Current Assignee
Jilin University
Original Assignee
Jilin University
Priority date
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202110549854.5A priority Critical patent/CN113222951B/en
Publication of CN113222951A publication Critical patent/CN113222951A/en
Application granted granted Critical
Publication of CN113222951B publication Critical patent/CN113222951B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/50 Clinical applications
    • A61B6/505 Clinical applications involving diagnosis of bone
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5217 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10116 X-ray image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30008 Bone

Abstract

The invention discloses an artificial intelligence diagnosis device for identifying osteoporosis from hip joint X-ray films, comprising a data acquisition module, a model construction module, a model training module, and an image detection module. The model training module trains the two neural network models; the image detection module processes a hip joint X-ray film to be examined through the trained image segmentation neural network model and image classification neural network model to obtain a bone condition information value. The device enables preliminary screening of osteoporosis from a hip joint X-ray film alone, simply and effectively improving the diagnosis rate of osteoporosis.

Description

Osteoporosis artificial intelligent diagnosis device for recognizing hip joint X-ray
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to an artificial intelligence diagnosis device that identifies osteoporosis from hip joint X-rays.
Background
The risk of fracture increases exponentially with age: after age 50, roughly 1 in 2 women and 1 in 5 men will suffer a fragility fracture, and osteoporosis is a major cause of this fracture susceptibility.
Currently, early diagnosis of osteoporosis relies mainly on dual-energy X-ray absorptiometry (DXA) for bone mineral density (BMD) measurement; the main measurement sites are the anteroposterior views of the hip and lumbar spine, which can be regarded as an important standard for assessing osteoporosis and fracture risk. Quantitative computed tomography (QCT) is a more advanced examination method that can measure the bone density not only of cortical bone but also of cancellous bone. Numerous studies have shown that BMD measured by QCT is highly consistent with DXA; QCT has the advantage of no overlap effects, but the patient receives a larger radiation dose and the examination is relatively expensive, so it sees less clinical use. Bone density is generally expressed as a T-value (T-score): T-value = (measured BMD − mean peak BMD of normal young adults of the same ethnicity and sex) / standard deviation of that peak BMD. Based on DXA measurement of the central axial skeleton (lumbar vertebrae 1-4, femoral neck, or total hip) or of the distal 1/3 radius, the diagnostic criterion for osteoporosis is a T-value ≤ −2.5. For postmenopausal women and men aged 50 and older, the diagnostic criteria recommended by the WHO are suggested: a bone density within 1 standard deviation of the peak bone value of healthy young adults of the same ethnicity and sex is normal; a reduction of 1 to 2.5 standard deviations is low bone mass (osteopenia); a reduction of 2.5 standard deviations or more is osteoporosis; and a bone density reduction meeting the osteoporosis criterion accompanied by one or more fragility fractures is severe osteoporosis.
For children, premenopausal women, and men under 50, it is suggested that bone density be expressed as a Z-value relative to the matched population: Z-value = (measured BMD − mean BMD of age-, sex-, and ethnicity-matched peers) / standard deviation of BMD in that peer group. A Z-value ≤ −2.0 is regarded as "below the expected range for age", i.e. low bone mass.
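The T- and Z-score criteria quoted above can be captured in a short helper. This is an illustrative sketch of the stated WHO thresholds only; the function names are ours and are not part of the patented device:

```python
def t_score(measured_bmd, young_adult_mean, young_adult_sd):
    """T-score = (measured BMD - mean peak BMD of young adults of the
    same sex and ethnicity) / SD of that peak BMD."""
    return (measured_bmd - young_adult_mean) / young_adult_sd

def classify_t_score(t, fragility_fracture=False):
    """WHO diagnostic bands for postmenopausal women and men aged 50+."""
    if t <= -2.5:
        return "severe osteoporosis" if fragility_fracture else "osteoporosis"
    if t < -1.0:
        return "low bone mass"   # osteopenia: -2.5 < T < -1.0
    return "normal"              # T >= -1.0

def classify_z_score(z):
    """Z-score criterion for children, premenopausal women, men under 50."""
    return "below expected range for age" if z <= -2.0 else "within expected range"
```

For example, a femoral-neck BMD 2.5 standard deviations below the young-adult peak classifies as osteoporosis, or as severe osteoporosis if a fragility fracture is also present.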
Clinical routine imaging examinations are of limited value for assessing trabecular bone structure and lack relevant quantitative parameters, so their diagnostic performance for reduced bone mass is not ideal. With theoretical developments and advances in signal processing technology, computer-aided diagnosis (CAD) has been increasingly applied to the study of medical images. In recent years, more and more researchers have focused on texture analysis of bone images, and results show that texture analysis has practical potential for bone disease diagnosis. Recently, deep learning models, represented by convolutional neural networks, have achieved great success in medical image processing. A review of osteoporosis classification across a large number of femoral neck texture-analysis studies found that convolutional neural networks classify well, and that classification improves further when deep features are fused with low-level features.
The Ward triangle of the hip is a weak area of trabecular bone with relatively the lowest bone density, lying between the compressive and tensile trabecular bundles of the femoral neck; it is a common site of femoral neck fracture and can be identified on an anteroposterior hip X-ray film. Although the Ward triangle has a high cancellous bone content and high sensitivity to bone density changes, its small projected area leads to sampling and repeatability errors and therefore poor accuracy, limiting its clinical application. Nevertheless, the Ward triangle region contains more than 80% cancellous bone, serves as a calcium reserve, and is highly sensitive to bone density changes, so some scholars consider it better for predicting fracture risk.
Because DXA produces a two-dimensional projected image, it cannot distinguish cancellous from cortical bone and cannot handle artifacts caused by tissues overlying the spine, such as aortic or abdominal calcification, which affects the accuracy of BMD measurement. There are also limitations in diagnosing osteoporosis from reduced bone density in the hip and lumbar vertebrae: elderly osteoporosis patients have a long disease course, and chronic inflammation or injury repair causes varying degrees of compensatory bone hyperplasia, so the bone density of the corresponding sites is not obviously reduced. Relying only on conventional DXA detection of hip and lumbar bone density therefore carries a significant missed-diagnosis rate.
Meanwhile, in the existing diagnosis and treatment system, patients often need both X-ray and DXA examinations during a bone check-up, which increases the burden on patients; the excess image data also increases the workload of doctors and makes missed diagnosis and misdiagnosis more likely.
Disclosure of Invention
The invention provides an artificial intelligence diagnosis device for identifying osteoporosis from hip joint X-rays. By training deep learning models on hip joint X-ray images in which the femoral neck region and the Ward triangle region inside it are labeled, the device predicts the shape and position of these regions, so that osteoporosis can be preliminarily screened from a hip joint X-ray alone. This simply and effectively improves the diagnostic accuracy of osteoporosis, reduces missed diagnosis and misdiagnosis, improves the efficiency of clinical osteoporosis diagnosis, and reduces the economic burden on patients and the workload of doctors.
In order to achieve the above object, the present invention provides the following solutions:
an artificial intelligent diagnosis device for identifying osteoporosis of hip joint X-rays, comprising: the system comprises a data acquisition module, a model construction module, a model training module and an image detection module;
the data acquisition module is used for acquiring a medical image marked by the femoral neck region and the Ward triangle region inside the femoral neck region;
the model construction module is used for constructing an image segmentation neural network model and an image classification neural network model; the image segmentation neural network model is used for acquiring a segmentation image of the medical image; the classification neural network model is used for classifying and identifying the segmented image to obtain a bone condition information value;
the model training module is used for training the image segmentation neural network model and the image classification neural network model;
the image detection module is used for obtaining a segmented image by processing the medical image to be examined, in which the femoral neck region and Ward triangle region are not labeled, with the trained image segmentation neural network model, and for obtaining a bone condition information value by classifying and identifying the segmented image with the trained image classification neural network model.
Preferably, the medical image is a hip X-ray image.
Preferably, the image segmentation neural network model obtains the segmented image by acquiring a labeling image of the medical image and processing the labeling image and the medical image.
Preferably, the image segmentation neural network model adopts an image segmentation neural network to extract and map the image features of the input medical image to obtain the labeling image; the image segmentation neural network includes, but is not limited to, a U-Net convolutional neural network;
the image features comprise a femoral neck region and a Ward triangle labeling image inside the femoral neck region.
Preferably, the image segmentation neural network includes an encoder and a decoder connected to each other;
the encoder is provided with a convolution layer for extracting the image features of the medical image;
the decoder is provided with a deconvolution layer for acquiring the labeling image of the femoral neck region and the Ward triangular region inside the femoral neck region from the image characteristics;
Preferably, the image classification neural network model classifies and judges the segmented image by adopting an image classification neural network; the image classification neural network includes, but is not limited to, a DenseNet dense convolutional neural network.
preferably, the DenseNet dense convolutional neural network comprises a plurality of dense blocks and a plurality of transition layers;
the transition layer is connected with the dense blocks alternately.
Preferably, the DenseNet dense convolutional neural network further comprises a bottleneck layer;
the bottleneck layer is used for reducing the number of inputs of the segmented images;
the bottleneck layer is arranged at an input port of the image classification neural network model, and the output of the bottleneck layer is connected with the first dense block in the DenseNet dense convolutional neural network.
Preferably, the DenseNet dense convolutional neural network further comprises a global average pooling layer and a classifier;
the global averaging pooling layer and the classifier are connected with the last of the dense blocks in the DenseNet dense convolutional neural network.
The beneficial effects of the invention are as follows:
The invention discloses an artificial intelligence osteoporosis diagnosis device for recognizing hip joint X-rays. Built on existing DXA examination results, it realizes intelligent preliminary osteoporosis screening and auxiliary diagnosis on hip joint X-ray images by recognizing the femoral neck region and the Ward triangle region inside it. The recognition performance of the device is continuously optimized, so the diagnostic accuracy of osteoporosis is simply and effectively improved, missed diagnosis and misdiagnosis are reduced, and the efficiency of clinical osteoporosis diagnosis is improved. Furthermore, a patient can obtain relevant bone health information from an ordinary hip joint X-ray examination alone, which reduces the patient's economic pressure, lightens the workload of doctors, improves the diagnosis and treatment efficiency of hospitals, and avoids secondary examination and radiation exposure.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are needed in the embodiments are briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an embodiment of the present invention;
FIG. 2 is a triangular schematic view of a hip Ward in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a U-Net convolutional neural network model in an embodiment of the present invention;
Fig. 4 is a schematic flow chart of diagnosing osteoporosis using the device of this embodiment.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
As shown in fig. 1, a schematic structural diagram of an artificial intelligent diagnosis device for identifying the osteoporosis of the hip joint X-ray provided by the invention comprises: the system comprises a data acquisition module, a model construction module, a model training module and an image detection module;
The data acquisition module is used for acquiring medical images in which the femoral neck region and the Ward triangle region inside the femoral neck region have been labeled in the hip joint X-ray image.
Fig. 2 is a schematic view of the hip Ward triangle, which lies in the femoral neck region near the greater and lesser trochanters and the intertrochanteric ridge; the region has a high cancellous bone content, is highly sensitive to bone density changes, and predicts fracture risk well.
The model construction module is used for constructing an image segmentation neural network model and an image classification neural network model;
In this embodiment, a U-Net convolutional neural network extracts and maps image features of the input hip joint X-ray image to obtain the labeled image, which is then processed to produce a segmented image used as input to the subsequent image classification neural network model. In this embodiment, the segmented image is obtained by cropping and scaling.
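The patent does not specify how the cropping and scaling are performed. A minimal sketch of one plausible implementation follows, assuming the segmentation output is a binary mask over the X-ray and using nearest-neighbour scaling; all names here are hypothetical, not from the patent:

```python
def bounding_box(mask):
    """Smallest (row0, row1, col0, col1) half-open window containing all
    non-zero mask pixels; mask is a list of lists of 0/1."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1

def crop_and_scale(image, mask, out_size):
    """Crop the image to the mask's bounding box, then nearest-neighbour
    scale the crop to an out_size x out_size grid."""
    r0, r1, c0, c1 = bounding_box(mask)
    crop = [row[c0:c1] for row in image[r0:r1]]
    h, w = len(crop), len(crop[0])
    return [[crop[i * h // out_size][j * w // out_size]
             for j in range(out_size)]
            for i in range(out_size)]
```

In practice an image library would handle the interpolation; the sketch only shows how the labeled region determines the crop window fed to the classifier.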
In this embodiment, the U-Net convolutional neural network includes an encoder and a decoder connected to each other; the encoder is provided with a convolution layer and is used for extracting image features of the medical image; the decoder is provided with a deconvolution layer for acquiring the annotation image from the image characteristics;
in this embodiment, to better obtain the image detail information, a cross-layer connection for obtaining the hip annotation image information is provided between the layers of the encoder and decoder.
The U-Net neural network consists mainly of two parts: a contracting path and an expanding path. The contracting path captures context information in the image, while the symmetric expanding path precisely localizes the parts to be segmented. In this embodiment, the image segmentation neural network model uses a U-Net convolutional neural network for feature extraction and mapping of the hip joint X-ray image: the model's input is the hip joint X-ray image and its output is the corresponding labeled image, allowing the region of interest to be located accurately.
In this embodiment, as shown in FIG. 3, the encoder of the U-Net convolutional neural network uses nine convolution layers to extract image features from the input data. The original image size is i×512×512×1, where i is the batch size of the training data. The first three convolution layers have 64, 128, and 256 channels respectively, with a 4×4 kernel and a stride of 2. The fourth through ninth convolution layers each have 512 channels, with a 4×4 kernel and a stride of 2. The activation function of the convolution layers is LeakyReLU with a slope of 0.2. The decoder's purpose is to obtain the labeled image of the femoral neck region and the Ward triangle inside it from the extracted features of the hip joint X-ray image. The decoder consists of eight corresponding deconvolution layers. The first six deconvolution layers have 512 channels each, with a 4×4 kernel and a stride of 2. The remaining deconvolution layers have 256, 128, and 64 channels respectively, with a 4×4 kernel and a stride of 2. In this embodiment, to help the decoder better capture the details of the labeled image, cross-layer connections link corresponding encoder and decoder layers. The output image size of the convolutional neural network is the same as the input image size.
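The spatial sizes implied by a 4×4 kernel with stride 2 can be checked with the standard convolution output-size formulas. The padding of 1 is our assumption (the patent does not state it), chosen because it makes each layer exactly halve the spatial size:

```python
def conv_out(n, k=4, s=2, p=1):
    """Output spatial size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

def deconv_out(n, k=4, s=2, p=1):
    """Output spatial size of a transposed convolution: (n - 1)*s - 2p + k."""
    return (n - 1) * s - 2 * p + k

sizes = [512]
for _ in range(9):              # the nine encoder convolution layers
    sizes.append(conv_out(sizes[-1]))
print(sizes)  # [512, 256, 128, 64, 32, 16, 8, 4, 2, 1]
```

Under these assumptions the nine encoder layers reduce a 512×512 input to a 1×1 feature map, and each transposed convolution with the same parameters exactly doubles the size (deconv_out(n) == 2*n), mirroring the encoder on the way back up.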
In this embodiment, the classification neural network model adopts a DenseNet dense convolutional neural network, which classifies and identifies the segmented image according to the bone condition it shows, yielding a bone condition information value;
in this embodiment, the DenseNet dense convolutional neural network includes a plurality of dense blocks and a plurality of transition layers; the dense blocks and transition layers alternate, with each transition layer located between two dense blocks, so the number of dense blocks is one more than the number of transition layers.
In this embodiment, in order to reduce the number of inputs from the segmented images, the DenseNet dense convolutional neural network is further provided with a bottleneck layer comprising a convolution layer and an average pooling layer; the bottleneck layer is located at the input of the image classification neural network model, and its output is connected with the first dense block.
In this embodiment, in the structure of the DenseNet dense convolutional neural network, the last dense block is further connected with a global average pooling layer and a classifier.
In this embodiment, the bottleneck layer → dense blocks with transition layers → global average pooling layer and classifier structure reduces the number of input feature maps more effectively and improves computational efficiency.
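The patent gives neither the growth rate nor the block depths. As a sketch of how channel counts evolve in a generic DenseNet, the values below (growth rate 32, block depths 6/12/24/16, compression 0.5, 64 input channels — the DenseNet-121 defaults) are illustrative assumptions, not values from the patent:

```python
def densenet_channels(c_in, block_layers, growth_rate=32, compression=0.5):
    """Track feature-map channel counts through dense blocks and
    transition layers.  Each layer inside a dense block concatenates
    growth_rate new channels; each transition layer compresses the
    count with a 1x1 convolution followed by average pooling."""
    c = c_in
    trace = []
    for i, layers in enumerate(block_layers):
        c += layers * growth_rate          # dense block: concatenation
        trace.append(("block", c))
        if i < len(block_layers) - 1:      # transitions only *between* blocks
            c = int(c * compression)
            trace.append(("transition", c))
    return trace

trace = densenet_channels(64, [6, 12, 24, 16])
print(trace)
```

Note that the trace contains one more dense block than transition layers, matching the alternating structure described above; the final block's output is what the global average pooling layer and classifier consume.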
The model training module trains and optimizes the image segmentation neural network model and the image classification neural network model with sample data to improve diagnostic accuracy.
The image detection module inputs a medical image to be examined, in which the femoral neck region and Ward triangle region are not labeled, into the trained image segmentation neural network model to obtain the labeled image of the hip joint X-ray to be examined; a segmented image is then obtained by cropping and scaling the hip joint X-ray and its labeled image; finally, the trained image classification neural network model classifies and identifies the segmented image to obtain the bone condition information corresponding to the image, and the degree of osteoporosis is evaluated from the value output by the model.
Fig. 4 is a schematic flow chart of the device for diagnosing osteoporosis according to the present embodiment, comprising the steps of:
S1, acquiring hip joint X-ray images and bone density DXA examination data, manually labeling the femoral neck region and the Ward triangle region inside the femoral neck region in each hip joint X-ray image, and assigning values to the images using the corresponding bone density DXA examination numerical results;
S2, constructing an image segmentation neural network model and an image classification neural network model, wherein the image segmentation neural network model learns, through training, to produce the labeled image from a hip joint X-ray image, which is then cropped and scaled into a segmented image serving as input data for the image classification neural network model; the classification neural network model learns, through training, to classify and identify the bone condition of the input segmented image to obtain a bone condition information value;
s3, training the image segmentation neural network model and the image classification neural network model by using sample data;
S4, taking the hip joint X-ray image to be examined as input to the trained image segmentation neural network model, acquiring the labeled image of that X-ray image, and obtaining a segmented image by cropping and scaling the X-ray image and its labeled image;
S5, classifying and judging the obtained segmented image with the trained image classification neural network model to obtain the bone condition information corresponding to the image.
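Steps S1–S5 amount to a two-stage inference pipeline once the models are trained. A structural sketch follows; the stub lambdas stand in for the trained U-Net, the cropping/scaling step, and the trained DenseNet, and the returned values (including the −2.7 figure) are purely illustrative placeholders:

```python
def diagnose(xray, segment, crop_and_scale, classify):
    """Two-stage inference: segmentation -> crop/scale -> classification.
    `segment`, `crop_and_scale`, and `classify` are injected so the
    trained models can be swapped in without changing the flow."""
    annotation = segment(xray)              # S4: femoral neck + Ward triangle mask
    roi = crop_and_scale(xray, annotation)  # S4: segmented image
    return classify(roi)                    # S5: bone condition information value

# Stub demonstration: every stage is a placeholder function.
result = diagnose(
    xray="hip_xray",
    segment=lambda img: f"mask({img})",
    crop_and_scale=lambda img, mask: f"roi({img},{mask})",
    classify=lambda roi: {"t_score_estimate": -2.7, "grade": "osteoporosis"},
)
print(result)
```

Wiring the stages through plain function arguments keeps the pipeline testable: each stub can be replaced independently by a real model without touching `diagnose`.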
The above embodiments merely illustrate preferred embodiments of the present invention, and the scope of the invention is not limited thereto; various modifications and improvements made by those skilled in the art without departing from the spirit of the invention fall within the scope of the invention as defined by the appended claims.

Claims (5)

1. An artificial intelligent diagnosis device for identifying the osteoporosis of the hip joint X-ray, which is characterized by comprising: the system comprises a data acquisition module, a model construction module, a model training module and an image detection module;
the data acquisition module is used for acquiring a medical image marked by the femoral neck region and the Ward triangle region inside the femoral neck region;
the model construction module is used for constructing an image segmentation neural network model and an image classification neural network model; the image segmentation neural network model is used for acquiring a segmentation image of the medical image; the classification neural network model is used for classifying and identifying the segmented image to obtain a bone condition information value;
the model training module is used for training the image segmentation neural network model and the image classification neural network model;
the image detection module is used for obtaining a segmented image by processing the medical image to be examined, in which the femoral neck region and Ward triangle region are not labeled, with the trained image segmentation neural network model, and for obtaining a bone condition information value by classifying and identifying the segmented image with the trained image classification neural network model;
the medical image is a hip joint X-ray image;
the image segmentation neural network model obtains the segmentation image by acquiring a labeling image of the medical image and processing the labeling image and the medical image;
the image segmentation neural network model adopts an image segmentation neural network to extract and map the image characteristics of the input medical image to obtain the annotation image; the image segmentation neural network includes, but is not limited to, a U-Net convolutional neural network;
the image features comprise a femoral neck region and a Ward triangular region labeling image inside the femoral neck region;
the image segmentation neural network comprises an encoder and a decoder which are connected with each other;
the encoder is provided with a convolution layer for extracting the image features of the medical image;
the decoder is provided with a deconvolution layer for acquiring the labeling image of the femoral neck region and the Ward triangular region inside the femoral neck region from the image characteristics;
the method for diagnosing osteoporosis using the diagnostic device comprises the steps of:
S1, acquiring hip joint X-ray images and bone density DXA examination data, manually labeling the femoral neck region and the Ward triangle region inside the femoral neck region in each hip joint X-ray image, and assigning values to the images using the corresponding bone density DXA examination numerical results;
S2, constructing an image segmentation neural network model and an image classification neural network model, wherein the image segmentation neural network model learns, through training, to produce the labeled image from a hip joint X-ray image, which is then cropped and scaled into a segmented image serving as input data for the image classification neural network model; the classification neural network model learns, through training, to classify and identify the bone condition of the input segmented image to obtain a bone condition information value;
s3, training the image segmentation neural network model and the image classification neural network model by using sample data;
s4, taking the X-ray image of the hip joint to be detected as the input of the trained image segmentation neural network model, acquiring a labeling image of the X-ray image of the hip joint to be detected, and obtaining a segmentation image by cutting and zooming the X-ray image of the hip joint to be detected and the labeling image;
s5, classifying and judging the obtained segmented image by using the trained image classification neural network model, and obtaining the bone condition information corresponding to the image.
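The segment → crop/scale → classify pipeline of steps S4–S5 can be sketched as follows. The model stubs below are illustrative stand-ins, not the patent's trained networks; the -2.5 / -1.0 cut-offs mirror the standard WHO DXA T-score criteria, while the mapping from pixel intensity to a T-score is a toy assumption.

```python
import numpy as np

def segment(xray):
    """S4 stub: a trained segmentation model would return the femoral neck
    mask; here a fixed central region stands in for it."""
    mask = np.zeros_like(xray, dtype=bool)
    mask[96:160, 96:160] = True
    return mask

def crop_and_scale(xray, mask, size=64):
    """S4: crop the bounding box of the mask, then scale to a fixed size
    (nearest-neighbour resize)."""
    rows, cols = np.where(mask)
    patch = xray[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    ri = np.arange(size) * patch.shape[0] // size
    ci = np.arange(size) * patch.shape[1] // size
    return patch[np.ix_(ri, ci)]

def classify(patch):
    """S5 stub: thresholds a toy T-score proxy using the WHO cut-offs."""
    t_score = (patch.mean() - 0.5) * 10.0   # illustrative mapping only
    if t_score <= -2.5:
        return "osteoporosis"
    if t_score < -1.0:
        return "osteopenia"
    return "normal"

xray = np.full((256, 256), 0.5)             # stand-in radiograph
patch = crop_and_scale(xray, segment(xray))
result = classify(patch)
```

In the patent's actual device, `segment` and `classify` would be the trained segmentation and DenseNet classification models; only the data flow between them is depicted here.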
2. The osteoporosis artificial intelligence diagnosis device for recognizing hip joint X-rays according to claim 1, characterized in that:
the image classification neural network model classifies the segmented image with an image classification neural network; the image classification neural network includes, but is not limited to, a DenseNet dense convolutional neural network.
3. The osteoporosis artificial intelligence diagnosis device for recognizing hip joint X-rays according to claim 2, characterized in that:
the DenseNet dense convolutional neural network further comprises a plurality of dense blocks and a plurality of transition layers;
the transition layers alternate with the dense blocks.
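The alternating dense-block/transition-layer structure can be sketched in NumPy (not part of the patent; the growth rate of 32 and the 56×56 input size follow the common published DenseNet configuration, which the claims do not fix). Each layer in a dense block receives the concatenation of all preceding feature maps, and each transition layer halves both channels and spatial resolution.

```python
import numpy as np

GROWTH_RATE = 32  # feature maps added per layer (DenseNet's "k")

def dense_block(x, num_layers):
    """Each layer sees the concatenation of every preceding feature map."""
    for _ in range(num_layers):
        # stand-in for BN-ReLU-Conv producing GROWTH_RATE new feature maps
        new_maps = np.zeros((GROWTH_RATE,) + x.shape[1:])
        x = np.concatenate([x, new_maps], axis=0)  # dense connectivity
    return x

def transition(x):
    """Transition layer: 1x1 conv halves channels, pooling halves H and W."""
    c, h, w = x.shape
    return np.zeros((c // 2, h // 2, w // 2))

x = np.zeros((64, 56, 56))      # channels x height x width after input conv
x = dense_block(x, 6)           # 64 + 6*32 = 256 channels
x = transition(x)               # 128 channels at 28 x 28
```

The concatenation (rather than addition) of earlier feature maps is what distinguishes DenseNet's connectivity from a residual network.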
4. The osteoporosis artificial intelligence diagnosis device for recognizing hip joint X-rays according to claim 3, characterized in that:
the DenseNet dense convolutional neural network further comprises a bottleneck layer;
the bottleneck layer is used for reducing the number of inputs derived from the segmented images;
the bottleneck layer is arranged at an input port of the image classification neural network model, and the output of the bottleneck layer is connected with the first dense block in the DenseNet dense convolutional neural network.
5. The osteoporosis artificial intelligence diagnosis device for recognizing hip joint X-rays according to claim 4, characterized in that:
the DenseNet dense convolutional neural network further comprises a global average pooling layer and a classifier;
the global average pooling layer and the classifier are connected to the last dense block in the DenseNet dense convolutional neural network.
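Taken together, claims 3–5 describe the standard DenseNet layout: dense blocks alternating with transition layers, ending in global average pooling and a classifier. The channel bookkeeping below uses the common DenseNet-121 configuration (growth rate k = 32, blocks of 6/12/24/16 layers, compression 0.5); the patent does not fix these numbers, so they are illustrative.

```python
def densenet121_channels(init=64, k=32, blocks=(6, 12, 24, 16), theta=0.5):
    """Track the channel count through the dense blocks and transitions."""
    c = init
    for i, layers in enumerate(blocks):
        c += layers * k                 # dense block: +k channels per layer
        if i < len(blocks) - 1:         # transition after all but the last block
            c = int(c * theta)          # compression halves the channels
    return c                            # channels entering global avg pooling

channels = densenet121_channels()       # 1024 for the DenseNet-121 config
# Global average pooling collapses each H x W feature map to a single value,
# so the classifier reduces to one fully connected layer from `channels`
# to the number of bone-condition classes.
```

Because global average pooling removes the spatial dimensions, the classifier's parameter count depends only on the final channel count and the number of classes, regardless of input image size.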
CN202110549854.5A 2021-05-20 2021-05-20 Osteoporosis artificial intelligent diagnosis device for recognizing hip joint X-ray Active CN113222951B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110549854.5A CN113222951B (en) 2021-05-20 2021-05-20 Osteoporosis artificial intelligent diagnosis device for recognizing hip joint X-ray

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110549854.5A CN113222951B (en) 2021-05-20 2021-05-20 Osteoporosis artificial intelligent diagnosis device for recognizing hip joint X-ray

Publications (2)

Publication Number Publication Date
CN113222951A CN113222951A (en) 2021-08-06
CN113222951B true CN113222951B (en) 2023-05-02

Family

ID=77093771

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110549854.5A Active CN113222951B (en) 2021-05-20 2021-05-20 Osteoporosis artificial intelligent diagnosis device for recognizing hip joint X-ray

Country Status (1)

Country Link
CN (1) CN113222951B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114049315B (en) * 2021-10-29 2023-04-18 北京长木谷医疗科技有限公司 Joint recognition method, electronic device, storage medium, and computer program product
CN113822231A (en) * 2021-11-08 2021-12-21 中国人民解放军陆军特色医学中心 Inter-trochanter fracture operation auxiliary system based on deep learning image recognition
CN114723763B (en) * 2022-05-24 2022-09-02 博志生物科技(深圳)有限公司 Medical image segmentation method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1633594A (en) * 2002-02-27 2005-06-29 成像治疗仪股份有限公司 Methods and devices for quantitative analysis of X-ray images
CN1682236A (en) * 2002-08-20 2005-10-12 成像治疗仪股份有限公司 Methods and devices for analysis of X-ray images
CN110033438A (en) * 2019-03-14 2019-07-19 上海市第六人民医院 Hip joint tagging system and its labeling method
CN112396591A (en) * 2020-11-25 2021-02-23 暨南大学附属第一医院(广州华侨医院) Osteoporosis intelligent evaluation method based on lumbar X-ray image

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070047794A1 (en) * 2000-10-11 2007-03-01 Philipp Lang Methods and devices for analysis of x-ray images
CN110648337A (en) * 2019-09-23 2020-01-03 武汉联影医疗科技有限公司 Hip joint segmentation method, hip joint segmentation device, electronic apparatus, and storage medium
CN110796636A (en) * 2019-09-25 2020-02-14 中国人民解放军战略支援部队信息工程大学 CT image bone condition detection method and device based on convolutional neural network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1633594A (en) * 2002-02-27 2005-06-29 成像治疗仪股份有限公司 Methods and devices for quantitative analysis of X-ray images
CN1682236A (en) * 2002-08-20 2005-10-12 成像治疗仪股份有限公司 Methods and devices for analysis of X-ray images
CN110033438A (en) * 2019-03-14 2019-07-19 上海市第六人民医院 Hip joint tagging system and its labeling method
CN112396591A (en) * 2020-11-25 2021-02-23 暨南大学附属第一医院(广州华侨医院) Osteoporosis intelligent evaluation method based on lumbar X-ray image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
S. M. Nazia Fathima et al. Estimation of T-score and BMD values from X-ray images for detection of osteoporosis. ICCSP '19: Proceedings of the 3rd International Conference on Cryptography, Security and Privacy. 2019, 220–224. *
Gao Xiaoling et al. Imaging features and misdiagnosis analysis of primary bone lymphoma. Military Medical Journal of South China. 2018, 26-30+57. *

Also Published As

Publication number Publication date
CN113222951A (en) 2021-08-06

Similar Documents

Publication Publication Date Title
CN113222951B (en) Osteoporosis artificial intelligent diagnosis device for recognizing hip joint X-ray
US9460506B2 (en) System and method for predicting future fractures
Pietka et al. Feature extraction in carpal-bone analysis
US8965087B2 (en) System and method of predicting future fractures
Areeckal et al. Current and emerging diagnostic imaging-based techniques for assessment of osteoporosis and fracture risk
JP2007524438A (en) Compensation method in radiological image processing technology
Wani et al. Computer-aided diagnosis systems for osteoporosis detection: a comprehensive survey
Liu et al. Diagnostic and gradation model of osteoporosis based on improved deep U-Net network
Hussain et al. Computer-aided osteoporosis detection from DXA imaging
Areeckal et al. Early diagnosis of osteoporosis using radiogrammetry and texture analysis from hand and wrist radiographs in Indian population
Tang et al. CNN-based qualitative detection of bone mineral density via diagnostic CT slices for osteoporosis screening
US7539332B1 (en) Method and system for automatically identifying regions of trabecular bone tissue and cortical bone tissue of a target bone from a digital radiograph image
Šajn et al. Computerized segmentation of whole-body bone scintigrams and its use in automated diagnostics
Seo et al. Deep focus approach for accurate bone age estimation from lateral cephalogram
KR20210054925A (en) System and Method for Extracting Region of Interest for Bone Mineral Density Calculation
Nazia Fathima et al. A Survey on Osteoporosis Detection Methods with a Focus on X-ray and DEXA Images
Xue et al. A dual-selective channel attention network for osteoporosis prediction in computed tomography images of lumbar spine
CN116433620A (en) CT image-based bone mineral density prediction and osteoporosis intelligent screening method and system
Sha et al. The improved faster-RCNN for spinal fracture lesions detection
KR102258070B1 (en) Method for evaluating foot type and device evaluating foot type using the same
Kawathekar et al. Use of textural and statistical features for analyzing severity of radio-graphic osteoarthritis of knee joint
Taleb-Ahmed et al. On information fusion to improve segmentation of MRI sequences
Fatema et al. Development of an automated optimal distance feature-based decision system for diagnosing knee osteoarthritis using segmented X-ray images
Su et al. A Hierarchical Opportunistic Screening Model for Osteoporosis Using Deep Learning
Shakya et al. Benchmarking Encoder-Decoder Architectures for Biplanar X-ray to 3D Shape Reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Qin Yanguo

Inventor after: He Meng

Inventor after: Guo Deming

Inventor after: Tang Xiongfeng

Inventor after: Shen Xianyue

Inventor after: Chen Bo

Inventor before: Qin Yanguo

Inventor before: He Meng

Inventor before: Tang Xiongfeng

Inventor before: Guo Deming

Inventor before: Shen Xianyue

Inventor before: Chen Bo

GR01 Patent grant