CN116862824A - X-ray image analysis method - Google Patents

X-ray image analysis method

Info

Publication number
CN116862824A
CN116862824A (application CN202210299218.6A)
Authority
CN
China
Prior art keywords
image
ray image
analysis method
analyzed
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210299218.6A
Other languages
Chinese (zh)
Inventor
张汉威 (Zhang Hanwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Kangrui Intelligent Co ltd
Original Assignee
Shenzhen Kangrui Intelligent Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Kangrui Intelligent Co., Ltd.
Priority to CN202210299218.6A
Publication of CN116862824A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

An X-ray image analysis method, executed by a computer, comprising: receiving an X-ray image; selecting at least one detection area in the X-ray image; performing image normalization processing on the target image in the detection area to obtain an image to be analyzed; and inputting the image to be analyzed into an image classification model to obtain a bone density analysis result.

Description

X-ray image analysis method
Technical Field
The present invention relates to an image analysis method, and more particularly to a method for performing X-ray image analysis using a neural network model.
Background
A dual-energy X-ray absorptiometer (DXA), commonly known as a bone densitometer, generates X-rays at two energy levels. X-rays of different energies are attenuated differently by different media, and the denser the medium, the greater the attenuation; the bone density of each part of the body can therefore be measured. However, a DXA measurement typically takes 15 to 20 minutes, and the instrument is more expensive than a general X-ray machine, which hinders widespread screening of the population.
Disclosure of Invention
An embodiment of the invention provides an X-ray image analysis method. The X-ray image analysis method is executed by a computer and comprises the following steps: receiving an X-ray image; confirming whether the image quality of the X-ray image meets the requirement; selecting at least one detection area in the X-ray image; performing image normalization processing on the target image in the detection area to obtain an image to be analyzed; and inputting the image to be analyzed into an image classification model to obtain a bone density analysis result.
According to the X-ray image analysis method provided by the embodiment of the invention, a bone density analysis result can be obtained automatically from the X-ray image. According to some embodiments, a risk value may further be analyzed.
Drawings
FIG. 1 is a flow chart of an X-ray image analysis method according to an embodiment of the invention.
FIG. 2 is a detailed flowchart of an image normalization process according to an embodiment of the present invention.
FIG. 3 is a detailed flowchart of an image normalization process according to another embodiment of the present invention.
FIG. 4 is a detailed flowchart of an image classification process according to an embodiment of the invention.
FIG. 5 is a detailed flowchart of an image classification process according to another embodiment of the invention.
FIG. 6 is a detailed flowchart of risk value prediction according to an embodiment of the present invention.
FIG. 7 is a detailed flowchart of risk value prediction according to another embodiment of the present invention.
FIG. 8 is a detailed flowchart of risk value prediction according to another embodiment of the present invention.
Fig. 9 is a detailed flowchart of risk value prediction according to another embodiment of the present invention.
FIG. 10A is a schematic diagram of an X-ray image that meets the quality requirement.
FIG. 10B is a schematic diagram of an X-ray image that does not meet the requirement.
FIG. 10C is a schematic diagram of another X-ray image that does not meet the requirement.
The reference numerals are explained as follows:
100: Receiving an X-ray image
101: Confirming whether the image quality of the X-ray image meets the requirement
200: Selecting a detection area
300: Image normalization process
301, 311: Image sharpness processing
302, 312: Minimum edge cropping
303, 315: Scaling
313: Computing the high-texture-feature region
314: Range sampling
400: Input to the image classification model
401: Input to the triplet loss model
402: Principal component analysis
403: Obtaining the analysis type according to the coordinate drop point
404, 413: Integrating all analysis types
411: Input to the convolutional neural network
412: Obtaining analysis types
500: Obtaining the bone density analysis result
600: Feature normalization
700: Inputting features to the risk value prediction model
800: Obtaining risk values
Detailed Description
Referring to fig. 1, a flowchart of an X-ray image analysis method according to an embodiment of the invention is shown. First, an X-ray image is received (step 100).
In some embodiments, the X-ray image is a spine X-ray image, a femur X-ray image, a clavicle X-ray image, or a metacarpal X-ray image. By analyzing the features of such an image, it can be judged whether the corresponding part shows the fine changes in texture structure caused by bone loss, and thus whether osteoporosis is likely to have occurred. The X-ray image can be acquired with a diagnostic X-ray machine, a mobile X-ray machine, or an X-ray inspection vehicle, whose equipment cost and measurement time are both lower than those of a conventional dual-energy X-ray absorptiometer.
In step 101, it is determined whether the image quality of the X-ray image meets the requirement. If it does, the subsequent steps continue; if not, the process ends. Specifically, the image quality of the X-ray image can be checked with an operator function such as Canny, focus, Sobel, or Laplacian. For example, a threshold may be set, and the requirement is met if the result of applying one of the operator functions to the X-ray image is below the threshold. The Sobel operator, for instance, computes horizontal and vertical gradients; gradient values that are too high indicate that the image contains excessive noise. FIG. 10A is a schematic diagram of an X-ray image that meets the requirement. FIG. 10B shows an X-ray image that does not meet the requirement: it contains excessive noise points. FIG. 10C shows another X-ray image that does not meet the requirement: it contains a plurality of horizontal lines. In this way, it can be detected whether the textures in the X-ray image are sufficiently clear, so that only sufficiently clear images are kept and errors in the subsequent judgment are avoided.
In some embodiments, a plurality of operator functions may be used, each with its own threshold; the image quality of the X-ray image is judged to meet the requirement only when the calculation results of all the operator functions are below their corresponding thresholds.
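The multi-operator quality gate described above can be sketched as follows; the Sobel response function and the threshold values are illustrative assumptions, not values taken from this disclosure:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def mean_sobel_response(img):
    # mean gradient magnitude over the image, via 3x3 Sobel kernels
    h, w = img.shape
    total, count = 0.0, 0
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = float(np.sum(patch * SOBEL_X))    # horizontal gradient
            gy = float(np.sum(patch * SOBEL_X.T))  # vertical gradient
            total += (gx * gx + gy * gy) ** 0.5
            count += 1
    return total / count

def quality_ok(img, checks):
    # the image passes only when every operator result is below its threshold
    return all(fn(img) < threshold for fn, threshold in checks)
```

A uniform image yields a zero response and passes, while an image with a sharp intensity step (or heavy noise) produces large gradients and is rejected.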
In step 200, at least one detection area in the X-ray image is selected. For example, for a femoral X-ray image, a femoral neck region is used as the detection region.
In some embodiments, step 200 provides a user interface through which the user can outline the detection area.
In some embodiments, step 200 is implemented by an object detection model, for example Mask R-CNN or YOLO. The object detection model must be trained in advance: by feeding it multiple sample images together with labeled regions containing the detection target (such as the femoral neck), the model is trained to detect the femoral neck in a femoral X-ray image.
In step 300, an image normalization process is performed on the target image in the detection area to obtain an image to be analyzed. For the sake of smooth explanation, the detailed flow of the image normalization process will be left to be described later. Through image standardization processing, images with proper sizes and clear required details can be obtained and are suitable for being input into an image classification model.
In step 400, the processed image to be analyzed is input into an image classification model; next, in step 500, a bone mineral density analysis result is obtained according to the output of the image classification model. The image classification model is a neural network model, and implementation will be described in detail later. The bone mineral density analysis result may be, for example, whether or not osteoporosis is present, bone mineral density value, or the like.
In some embodiments, the size of the detection area is determined according to the input specification of the neural network model. For example, if the image size suitable for input to the neural network model is 224 pixels square, the size of the detection area is similarly 224 pixels square.
Referring to fig. 2, a detailed flowchart of an image normalization process according to an embodiment of the present invention is shown. The image normalization process 300 includes image sharpening (step 301), minimum edge cropping (step 302), and scaling (step 303).
In step 301, sharpening or an equalization process (such as histogram equalization) may be used to make the image details clearer. If the target image is in color, a grayscale conversion is performed first; if the target image is already grayscale, no conversion is needed.
In step 302, a cropping process is performed on the target image. If the size of the target image does not match the size required by the neural network model, the target image is cropped to a corresponding size. For example, if the target image is rectangular, the long sides are cropped, using the short side as the reference, to obtain a square image.
In step 303, if the image size processed in step 302 does not conform to the size of the neural network model, scaling (scaling or enlarging) is performed to obtain the required size of the neural network model. After preprocessing the target image in steps 301 to 303, an image to be analyzed can be obtained.
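Steps 302 and 303 (minimum edge cropping and scaling) can be sketched as follows; the nearest-neighbour scaling and the 224-pixel target size are illustrative assumptions:

```python
import numpy as np

def min_edge_crop(img):
    # step 302: center-crop the longer side so the result is square,
    # using the shorter side as the reference
    h, w = img.shape
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    return img[top:top + s, left:left + s]

def nearest_resize(img, size):
    # step 303: nearest-neighbour scaling to the model's input size
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def normalize_for_model(img, size=224):
    # crop then scale, producing the image to be analyzed
    return nearest_resize(min_edge_crop(img), size)
```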
Referring to fig. 3, a detailed flowchart of an image normalization process according to another embodiment of the present invention is shown. In comparison with fig. 2, the image normalization process of the present embodiment further includes a step of calculating a high texture feature region (step 313) and a step of range sampling (step 314). Steps 311, 312 and 315 are the same as steps 301, 302 and 303, respectively, and will not be repeated here.
In step 313, an edge detection algorithm is used to detect texture in the image. The edge detection algorithm may be, for example, a Canny algorithm, a Sobel algorithm, or the like. In particular for the identification of osteoporosis, the region with the most bone texture can be found by step 313.
In step 314, a specific range is expanded around the center of the region with the most bone texture found in step 313, and a plurality of region images with the same size as the detection area are randomly sampled within this range, to be input into the image classification model in step 400. Since the sampled region images already match the size required by the neural network model, step 315 may be omitted here.
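The range-sampling step can be sketched as follows, assuming a 224-pixel detection area and an illustrative sampling spread; both values are assumptions:

```python
import numpy as np

def sample_region_images(img, center, patch=224, spread=64, n=3, seed=0):
    # randomly sample n patch-sized crops whose centres lie within
    # `spread` pixels of the detected high-texture centre (steps 313-314)
    rng = np.random.default_rng(seed)
    h, w = img.shape
    half = patch // 2
    cy, cx = center
    crops = []
    for _ in range(n):
        # jitter the centre, then clamp so the crop stays inside the image
        y = int(np.clip(cy + rng.integers(-spread, spread + 1), half, h - half))
        x = int(np.clip(cx + rng.integers(-spread, spread + 1), half, w - half))
        crops.append(img[y - half:y + half, x - half:x + half])
    return crops
```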
Referring to fig. 4, a detailed flowchart of an image classification process according to an embodiment of the invention is shown. In step 401, the image to be analyzed or the region image is input into an image classification model; here, the image classification model is a triplet loss (Triplet Loss) model. The triplet loss model is well suited to training on datasets with limited diversity. Its input consists of an anchor (Anchor) example, a positive (Positive) example, and a negative (Negative) example. The model is optimized so that the distance between the anchor example and the positive example is smaller than the distance between the anchor example and the negative example, thereby realizing similarity computation between samples. The anchor example is randomly selected from the sample set; the positive example belongs to the same class as the anchor example, and the negative example belongs to a different class. In this way, image features can be clustered by the triplet loss model, for example into a cluster with osteoporosis and a cluster without osteoporosis.
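The triplet objective the model is optimized with can be written out as a small function; the squared-Euclidean distance and the margin value are conventional choices, not specified by this disclosure:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # hinge on (d(anchor, positive) - d(anchor, negative) + margin),
    # using squared Euclidean distance between embedding vectors;
    # the loss is zero once the negative is farther than the positive by margin
    d_ap = float(np.sum((anchor - positive) ** 2))
    d_an = float(np.sum((anchor - negative) ** 2))
    return max(d_ap - d_an + margin, 0.0)
```

Minimizing this loss over many (anchor, positive, negative) triples pulls same-class embeddings together and pushes different-class embeddings apart, which is what produces the clusters used in the following steps.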
In step 402, the output of the triplet loss model is reduced in dimension by principal component analysis (Principal Component Analysis, PCA). Principal component analysis finds projection axes in the feature space such that the projected data retains the maximum variance. The dimensionality can thus be reduced substantially while most of the overall variation is preserved, and the clustering result can be projected down to obtain the distribution coordinate information of each cluster. Through steps 401 and 402, the image to be analyzed or the region image input into the triplet loss model is converted into a coordinate drop point.
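The PCA projection of step 402 can be sketched with an SVD-based implementation; treating the triplet-model embeddings as the rows of X is an assumption for illustration:

```python
import numpy as np

def pca_project(X, n_components=2):
    # center the data, then project onto the top principal axes;
    # the right singular vectors of the centered matrix are the axes
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

Each row of the result is the 2-D "coordinate drop point" of one embedding, with the first column carrying at least as much variance as the second.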
In step 403, according to the distribution coordinate information of each cluster obtained during training, it is determined in which cluster range the coordinate drop point lies, and hence to which group (also called the analysis type) the image belongs.
Step 404 integrates all the analysis types, that is, the analysis types obtained for each image to be analyzed or each region image captured from the same X-ray image. For example, if three region images are captured from the same X-ray image, each of them yields an analysis type after steps 401 to 403; the three analysis types are integrated in step 404, and the bone density analysis result is obtained in step 500 according to the integrated result. Specifically, the bone density analysis result is based on the plurality of analysis types: if two of the three analysis types indicate osteoporosis and one does not, the bone density analysis result is determined by the majority, namely osteoporosis.
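The majority-vote integration of steps 404 and 500 can be sketched as:

```python
from collections import Counter

def integrate_analysis_types(types):
    # steps 404/413: majority vote over the per-region analysis types;
    # the most common type becomes the bone density analysis result
    return Counter(types).most_common(1)[0][0]
```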
Referring to fig. 5, a detailed flowchart of an image classification process according to another embodiment of the invention is shown. The difference from fig. 4 is that this embodiment uses a convolutional neural network (Convolutional Neural Network, CNN) as the image classification model, such as a deep residual network (ResNet), GoogLeNet, or DenseNet. When training the model, the X-ray images used as training samples are processed as described above to obtain the images to be analyzed or the region images, which are labeled with their analysis types and input into the model. The last layer of the convolutional neural network is a weight classifier (e.g., XGBoost) that predicts the class probabilities based on the extracted features. Accordingly, during prediction, the X-ray image to be identified is processed as described above, the image to be analyzed or the region image is input into the model (step 411), and the predicted analysis type is obtained (step 412). Step 413 is the same as step 404 described above and is not repeated here.
Referring to fig. 6, a detailed flowchart of risk value prediction according to an embodiment of the present invention is shown. Following the embodiment of fig. 5, in some embodiments the features extracted by the convolutional neural network may be reused: they are input into another neural network model, referred to here as a risk value prediction model (step 700). The risk value prediction model may be a multilayer perceptron (Multilayer Perceptron, MLP). During training, the extracted features of the training samples and the corresponding risk values are input into the risk value prediction model, so that during prediction a risk value can be predicted from the extracted features of the sample to be identified (step 800). In osteoporosis identification applications, the risk value may be, for example, a T-score parameter or a fracture risk assessment (Fracture Risk Assessment, FRAX) parameter. In some embodiments, besides the features extracted by the convolutional neural network, other features may be input into the risk value prediction model, such as personal data (e.g., gender, age), body data (e.g., body mass index (BMI), height, weight), and medical information (e.g., disease history such as whether the patient has diabetes or hypertension). These features may be entered by the user through a user interface or obtained by reading a medical record database.
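A minimal forward pass of such a multilayer-perceptron risk model might look as follows; the single-hidden-layer ReLU architecture and the scalar output are illustrative assumptions:

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    # one hidden ReLU layer followed by a linear output: the smallest
    # multilayer-perceptron regressor mapping a feature vector to a risk value
    h = np.maximum(x @ W1 + b1, 0.0)  # hidden activations
    return h @ W2 + b2                 # predicted risk value(s)
```

In practice the weights would be fitted on (features, risk value) pairs; here they are just placeholders to show the data flow.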
Referring to fig. 7, a detailed flowchart of risk value prediction according to another embodiment of the present invention is shown. The difference from fig. 6 is that, before step 700, step 600 is performed to normalize the extracted features to values between 0 and 1.
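The feature normalization of step 600 can be sketched as a per-column min-max scaling; mapping constant columns to 0 is an assumption made here to avoid division by zero:

```python
import numpy as np

def minmax_normalize(features):
    # step 600: scale each feature column into the range [0, 1]
    f = np.asarray(features, dtype=float)
    lo, hi = f.min(axis=0), f.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against constant columns
    return (f - lo) / span
```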
Referring to fig. 8, a detailed flowchart of risk value prediction according to still another embodiment of the present invention is shown. Similar to the above-described FIG. 6, the features extracted by the triplet loss model may also be reused and input into the risk value prediction model (step 700). Step 800 is as described above and is not repeated here.
In some embodiments, besides the features extracted by the triplet loss model, other features may be input into the risk value prediction model, such as personal data (e.g., gender, age), body data (e.g., body mass index (BMI), height, weight), and medical information (e.g., disease history such as whether the patient has diabetes or hypertension). These features may be entered by the user through a user interface or obtained by reading a medical record database.
Referring to fig. 9, a detailed flowchart of risk value prediction according to still another embodiment of the present invention is shown. Similar to fig. 7 described above, step 600 is also performed prior to step 700, normalizing the extracted features to a range of values between 0 and 1.
The X-ray image analysis method is realized by loading and executing a computer program product through a computer. The computer program product is comprised of a plurality of program instructions stored on a non-transitory computer readable medium. The computer may be, for example, a personal computer, a server, or the like, having computing capabilities. The computer generally has a processing unit (e.g., a central processing unit, a graphics processor), a memory, a storage medium (e.g., a hard disk), an input/output interface, a network interface, and other hardware resources.
In some embodiments, the computer may be linked to a medical image storage system (e.g., picture archiving and communication system, PACS) or medical examination instrument to acquire X-ray images.
In summary, according to the X-ray image analysis method of the embodiment of the invention, the bone density analysis result can be automatically analyzed according to the X-ray image. According to some embodiments, the risk value may be further analyzed.

Claims (10)

1. An X-ray image analysis method, performed by a computer, the X-ray image analysis method comprising:
receiving an X-ray image;
confirming that the image quality of the X-ray image meets the requirement;
selecting at least one detection area in the X-ray image;
performing image normalization processing on the target image in the detection area to obtain an image to be analyzed; and
Inputting the image to be analyzed to an image classification model to obtain a bone density analysis result.
2. The X-ray image analysis method of claim 1, wherein the image classification model is a triplet loss model.
3. The X-ray image analysis method according to claim 2, further comprising:
performing dimension reduction on the output result of the triplet loss model through principal component analysis, so as to obtain a coordinate drop point by conversion; and
And obtaining the analysis type of the image to be analyzed according to the cluster range of the coordinate falling point.
4. The X-ray image analysis method according to claim 3, further comprising:
and integrating all the analysis types of the images to be analyzed to obtain the bone density analysis result.
5. The X-ray image analysis method of claim 1, wherein the image classification model is a convolutional neural network or a triplet loss model.
6. The X-ray image analysis method according to claim 5, further comprising:
inputting a plurality of features extracted via the convolutional neural network or the triplet loss model to a risk value prediction model to obtain a risk value.
7. The X-ray image analysis method of claim 6, further comprising, prior to inputting the plurality of features into the risk value prediction model: normalizing the plurality of features.
8. The X-ray image analysis method according to claim 6, wherein the risk value prediction model is a multilayer perceptron.
9. The X-ray image analysis method according to claim 1, wherein the step of selecting the detection region is performed by an object detection model.
10. The X-ray image analysis method according to claim 1, wherein the step of inputting the image to be analyzed to the image classification model comprises:
inputting each image to be analyzed into the image classification model respectively, so as to classify each image to be analyzed into an analysis type; and
Taking a plurality of analysis types corresponding to the images to be analyzed as the bone density analysis results.
CN202210299218.6A (filed 2022-03-25, priority 2022-03-25), X-ray image analysis method, CN116862824A, pending.

Priority Applications (1)

Application Number: CN202210299218.6A; Priority/Filing Date: 2022-03-25; Title: X-ray image analysis method


Publications (1)

Publication Number: CN116862824A; Publication Date: 2023-10-10

Family

ID=88220311




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination