Disclosure of Invention
The main purpose of the present application is to provide a method and a device for processing a total hip joint image based on deep learning and X-ray, so as to provide a more convenient and accurate preoperative planning mode and better preoperative support for total hip replacement surgery.
In order to achieve the above object, according to a first aspect of the present application, a total hip image processing method based on deep learning and X-ray is provided.
The method for processing the total hip joint image based on deep learning and X-ray comprises the following steps:
acquiring an X-ray image of a hip joint, wherein the X-ray image of the hip joint comprises an image of a reference object, and the reference object is a reference object with a known size;
restoring the real size of the X-ray image of the hip joint according to the ratio of the image size of the reference object to the actual size of the reference object;
identifying the restored X-ray image of the hip joint based on a deep learning model, and determining the leg length difference, the position of the acetabular cup, and the specification and model of the femoral stem prosthesis;
determining the osteotomy line position according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during the X-ray image recognition of the hip joint.
Optionally, the identifying the restored X-ray image of the hip joint based on the deep learning model to determine the leg length difference includes:
converting the X-ray image of the hip joint into a gray scale image;
predicting each pixel value of the gray scale image based on a first neural network model, and determining a tear drop key point and a femoral lesser trochanter key point;
and determining the leg length difference according to the positions of the tear drop key points and the femoral lesser trochanter key points.
Optionally, the identifying the restored X-ray image of the hip joint based on the deep learning model to determine the position of the acetabular cup includes:
converting the X-ray image of the hip joint into a gray scale image;
predicting each pixel value of the gray scale image based on the second neural network model, and determining the position of the femoral head;
calculating the rotation center of the femoral head according to the mass center formula of the plane image;
calculating the diameter of the acetabular cup according to the diameter of the femoral head;
determining the acetabular cup position according to the femoral head rotation center and the acetabular cup diameter.
Optionally, the identifying the restored X-ray image of the hip joint based on the deep learning model to determine the specification and the model of the femoral stem prosthesis includes:
converting the X-ray image of the hip joint into a gray scale image;
identifying the gray scale image based on the third neural network model, and determining the medullary cavity anatomical axis;
identifying the gray scale map based on a fourth neural network model, and determining the central axis of the femoral neck;
determining a femoral neck shaft angle according to the medullary cavity anatomical axis and the femoral neck central axis;
and determining the specification and model of the femoral stem prosthesis according to the femoral neck shaft angle, the medullary cavity area determined in the process of determining the medullary cavity anatomical axis and the femoral head rotation center.
Optionally, the determining the osteotomy line position according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined in the X-ray image recognition process of the hip joint comprises:
coinciding the rotation center of the femoral stem prosthesis with the rotation center of the acetabular cup, and determining the actual position of the femoral stem prosthesis;
determining the position of the osteotomy line along the coating position of the femoral stem prosthesis.
Optionally, the identifying the gray scale image based on the third neural network model and determining the medullary cavity anatomical axis includes:
predicting each pixel value of the gray scale image based on a third neural network model, and determining a femoral head region and a cortical bone region;
determining a medullary cavity region according to a femoral head region and a cortical bone region;
and performing straight line fitting on the coordinates of the plurality of central points of the medullary cavity region to determine the medullary cavity anatomical axis.
Optionally, the identifying the gray scale image based on the fourth neural network model and determining the central axis of the femoral neck includes:
predicting each pixel value of the gray scale image based on a fourth neural network model, and determining a femoral head region and a femoral neck base region;
calculating the femoral head central coordinates and the femoral neck base central coordinates corresponding to the femoral head area and the femoral neck base area according to a mass center formula of the plane image;
and determining the central axis of the femoral neck according to the central coordinates of the femoral head and the central coordinates of the femoral neck base.
Optionally, a post-operative leg length difference and an offset are calculated according to the osteotomy line position.
In order to achieve the above object, according to a second aspect of the present application, there is provided a total hip image processing apparatus based on deep learning and X-ray.
The device for processing the total hip joint image based on deep learning and X-ray comprises:
the proportion calibration unit is used for restoring the real size of the X-ray image of the hip joint according to the ratio of the image size of the reference object to the actual size of the reference object;
the leg length difference determining unit is used for identifying the restored X-ray image of the hip joint based on the first neural network model and determining the leg length difference;
the acetabular cup position determining unit is used for identifying the restored X-ray image of the hip joint based on the second neural network model and determining the position of the acetabular cup;
the femoral stem prosthesis specification determining unit is used for identifying the restored X-ray image of the hip joint based on the third neural network model and the fourth neural network model and determining the specification and model of the femoral stem prosthesis;
and the osteotomy line determining unit is used for determining the position of the osteotomy line according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined in the X-ray image recognition process of the hip joint.
In order to achieve the above object, according to a third aspect of the present application, there is provided a computer-readable storage medium storing computer instructions for causing a computer to execute the deep learning and X-ray based total hip image processing method according to any one of the first aspect.
In order to achieve the above object, according to a fourth aspect of the present application, there is provided an electronic apparatus comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to cause the at least one processor to perform the method for deep learning and X-ray based total hip image processing according to any of the first aspect.
In the embodiment of the application, in the method and the device for processing the total hip joint image based on deep learning and X-ray, an X-ray image of the hip joint is acquired, wherein the X-ray image of the hip joint comprises an image of a reference object, and the reference object is a reference object with a known size; the real size of the X-ray image of the hip joint is restored according to the ratio of the image size of the reference object to the actual size of the reference object; the restored X-ray image of the hip joint is identified based on a deep learning model, and the leg length difference, the acetabular cup position, and the specification and model of the femoral stem prosthesis are determined; the osteotomy line position is determined according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during the X-ray image recognition of the hip joint. It can be seen that, in the preoperative planning mode of total hip replacement of this embodiment, the X-ray image of the hip joint is restored to its real size, so that subsequent position identification based on the actual size is more accurate; in addition, the identification of the X-ray image is carried out based on deep learning models, which further ensures the accuracy and rapidity of the leg length difference, the acetabular cup position, the specification and model of the femoral stem prosthesis, and the osteotomy line position determined from the identification results, thereby providing better preoperative support for total hip replacement surgery.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
According to an embodiment of the present application, there is provided a total hip image processing method based on deep learning and X-ray, as shown in fig. 1, the method includes the following steps:
s101, an X-ray image of the hip joint is obtained, wherein the X-ray image of the hip joint comprises an image of a reference object.
The X-ray image of the hip joint is obtained by taking an X-ray picture of the hip joint while an object of known size, the reference object, is captured in the same picture; the X-ray image of the hip joint thus obtained includes an image of the reference object. Fig. 2 shows such an X-ray image of the hip joint, in which the standard-size marker at the bottom center of the image is the image of the reference object. In practical applications, the selection of the reference object and its placement position during shooting can be adaptively adjusted, which is not limited in this embodiment.
And S102, restoring the real size of the X-ray image of the hip joint according to the ratio of the image size of the reference object to the actual size of the reference object.
The actual size of the reference object is known, and the image size of the reference object can be obtained by measurement; the scale of the X-ray image of the hip joint relative to the actual size of the hip joint can therefore be determined from the ratio of the image size of the reference object to its actual size, and the X-ray image of the hip joint is then restored to its real size according to this scale. Restoring the real size of the X-ray image of the hip joint gives the subsequent image recognition a real-size basis, so that the leg length difference, the acetabular cup position, and the specification and model of the femoral stem prosthesis determined from the recognition results differ less from their actual counterparts, ensuring recognition accuracy.
The specific restoration operation may be to select a key site of the object of known size, determine the scale by calculating the distance between two points in the image and converting it proportionally against the actual size of the object, and then correct the scale of the X-ray image of the hip joint according to this ratio.
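The calibration step above can be sketched as follows. This is a minimal Python/NumPy illustration, not part of the original disclosure; the function names and the two-point marker measurement are assumptions:

```python
import numpy as np

def scale_factor(ref_pts_px, ref_size_mm):
    """Millimetres per pixel, from two marked points on the reference
    object (of known physical size) in the X-ray image."""
    (x1, y1), (x2, y2) = ref_pts_px
    return ref_size_mm / np.hypot(x2 - x1, y2 - y1)

def to_real_size(distance_px, mm_per_px):
    """Restore an in-image pixel distance to its real-world length."""
    return distance_px * mm_per_px

# A 30 mm reference marker imaged 60 px wide gives 0.5 mm per pixel,
# so a 200 px anatomical distance corresponds to 100 mm.
factor = scale_factor(((10.0, 10.0), (70.0, 10.0)), 30.0)
length_mm = to_real_size(200.0, factor)
```

Any subsequent distance measured in the image can then be multiplied by the same factor, which is what makes later measurements real-size.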
S103, identifying the restored X-ray image of the hip joint based on the deep learning model, and determining the leg length difference, the position of the acetabular cup and the specification and model of the femoral stem prosthesis.
The deep learning model is a neural network model; the inputs and outputs of the models used for determining the leg length difference, the acetabular cup position, and the specification and model of the femoral stem prosthesis may differ, but the principle of model training is the same. Specifically, the principle of neural network model training is as follows: the X-ray image of the hip joint is converted into a 0-255 gray scale image, the image is then manually selected and labeled, each pixel label of the image is assigned one of a plurality of attribute values (the number of attribute values differs according to actual requirements, and may be, for example, two or three) which are named respectively, and the labeled data are then input into the neural network model for convolution, pooling and sampling with iterative learning and training until the neural network model is obtained.
The neural network model in this step is a classification neural network which classifies different regions in the image. For example, when the leg length difference is determined, the neural network model is mainly applied to identify the tear drop and femoral lesser trochanter key points; when the acetabular cup position is determined, the neural network model is mainly applied to identify the femoral head region; and when the specification and model of the femoral stem prosthesis are determined, the neural network model is mainly applied to identify the femoral head and cortical bone regions and the femoral head and femoral neck base regions.
The neural network in this embodiment may be the convolutional neural network LeNet, AlexNet, ZF-Net, GoogLeNet, VGG, Inception, ResNet, DenseNet, Inception-ResNet, or the like.
The leg length difference, the position of the acetabular cup and the specification and the model of the femoral stem prosthesis are determined by calculating coordinates, fitting and the like according to the recognition result of the image.
S104, determining the position of an osteotomy line according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined in the X-ray image identification process of the hip joint.
Specifically, "determining the osteotomy line position according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined in the X-ray image recognition process of the hip joint" means moving the femoral stem prosthesis so that its rotation center coincides with the previously calculated acetabular cup rotation center position, thereby obtaining the actual position of the femoral stem prosthesis. The actual clinical osteotomy line position may then be determined along the coating position of the femoral stem prosthesis, as shown in figures 3-4. Fig. 3 shows the femoral stem prosthesis moved to a predetermined position such that its rotation center coincides with the previously calculated acetabular cup rotation center position, and fig. 4 shows the determination of the osteotomy line position based on the outer shape of the femoral stem prosthesis.
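Coinciding the two rotation centers amounts to a rigid translation of the femoral stem template. The following is a minimal Python/NumPy sketch under our own assumptions (a 2-D point-list outline and illustrative function name; the actual planning software may represent the template differently):

```python
import numpy as np

def align_stem_to_cup(stem_outline, stem_center, cup_center):
    """Translate the femoral stem prosthesis template so that its rotation
    center coincides with the previously calculated acetabular cup rotation
    center; returns the moved outline and the applied shift."""
    shift = np.asarray(cup_center, float) - np.asarray(stem_center, float)
    return np.asarray(stem_outline, float) + shift, shift

outline, shift = align_stem_to_cup(
    stem_outline=[(0.0, 0.0), (10.0, 40.0)],  # template contour points
    stem_center=(5.0, 20.0),                  # stem rotation center
    cup_center=(8.0, 25.0))                   # cup rotation center
```

The osteotomy line is then read off along the coating boundary of the translated template.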
From the above description, it can be seen that in the deep learning and X-ray based total hip joint image processing method according to the embodiment of the present application, an X-ray image of the hip joint is acquired, wherein the X-ray image of the hip joint comprises an image of a reference object, and the reference object is a reference object with a known size; the real size of the X-ray image of the hip joint is restored according to the ratio of the image size of the reference object to the actual size of the reference object; the restored X-ray image of the hip joint is identified based on a deep learning model, and the leg length difference, the acetabular cup position, and the specification and model of the femoral stem prosthesis are determined; the osteotomy line position is determined according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during the X-ray image recognition of the hip joint. It can be seen that, in the preoperative planning mode of total hip replacement of this embodiment, the X-ray image of the hip joint is restored to its real size, so that subsequent position identification based on the actual size is more accurate; in addition, the identification of the X-ray image is carried out based on deep learning models, which further ensures the accuracy and rapidity of the leg length difference, the acetabular cup position, the specification and model of the femoral stem prosthesis, and the osteotomy line position determined from the identification results, thereby providing better preoperative support for total hip replacement surgery.
Further, as a refinement of the above embodiment, the detailed steps of step S103 for determining the leg length difference, the acetabular cup position, and the specification and model of the femoral stem prosthesis are described respectively below.
As shown in fig. 5, the flowchart for determining the leg length difference specifically includes the following steps:
s201, converting the X-ray image of the hip joint into a gray-scale image.
The X-ray image of the hip joint was converted to a 0-255 gray scale image.
S202, predicting each pixel value of the gray scale image based on the first neural network model, and determining a tear drop key point and a femoral lesser trochanter key point.
Before prediction is performed, the first neural network model is obtained through sample training. Specifically, the unmarked original image (the gray scale image corresponding to an X-ray sample image of the hip joint) and the labels manually marking the positions of the tear drop key points and femoral lesser trochanter key points are fed into a convolutional neural network; the key point positions of the input original image are fitted with a Gaussian distribution function, and convolution, pooling and sampling are performed with iterative learning and training until the first neural network model is obtained. It should be noted that the convolutional neural network in this step may be LeNet, AlexNet, ZF-Net, GoogLeNet, VGG, Inception, ResNet, DenseNet, Inception-ResNet, or the like.
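The Gaussian-distribution fitting of key point positions described above is commonly implemented by regressing a heatmap target. The sketch below is a minimal illustration of how such a target can be built and how a key point is read back from it; the function name and sigma value are our assumptions, not the disclosure's:

```python
import numpy as np

def gaussian_heatmap(shape, keypoint, sigma=2.0):
    """Training target for a key point: a 2-D Gaussian centred on the
    manually labelled (row, col) position."""
    h, w = shape
    rows, cols = np.mgrid[0:h, 0:w]
    kr, kc = keypoint
    return np.exp(-((cols - kc) ** 2 + (rows - kr) ** 2) / (2.0 * sigma ** 2))

hm = gaussian_heatmap((64, 64), (20, 30))
# The predicted key point is recovered as the arg-max of the heatmap.
pred = np.unravel_index(hm.argmax(), hm.shape)
```

The network is trained to reproduce such heatmaps, and the arg-max of its output gives the automatically identified key point position.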
After the first neural network model is obtained, the gray scale image corresponding to the X-ray image of the hip joint is input into the first neural network model, and the positions of the tear drop key points and the femoral lesser trochanter key points can be automatically identified, as shown in fig. 6, which is a schematic diagram of the automatic identification of the tear drop and femoral lesser trochanter key point positions.
S203, determining the leg length difference according to the positions of the tear drop key points and the femoral lesser trochanter key points.
Specifically, as shown in fig. 7, the horizontal straight line is the connecting line of the two tear drop key points; from each femoral lesser trochanter key point, a vertical line segment is dropped to this horizontal straight line, and the lengths of the two vertical segments are designated A and B respectively. The difference between A and B is the leg length difference.
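The geometry above can be sketched directly. This is a minimal Python/NumPy illustration under our own naming; it computes the perpendicular distances A and B from the trochanter key points to the inter-teardrop line and returns their difference:

```python
import numpy as np

def leg_length_difference(teardrop_left, teardrop_right, troch_left, troch_right):
    """Perpendicular distances A and B from the two femoral lesser trochanter
    key points to the inter-teardrop line; A - B is the leg length difference."""
    p1 = np.asarray(teardrop_left, float)
    p2 = np.asarray(teardrop_right, float)
    d = p2 - p1
    n = np.array([-d[1], d[0]]) / np.hypot(*d)  # unit normal of the teardrop line
    a = abs(np.dot(np.asarray(troch_left, float) - p1, n))
    b = abs(np.dot(np.asarray(troch_right, float) - p1, n))
    return a - b

# Horizontal teardrop line; trochanter key points 50 px and 46 px below it.
diff = leg_length_difference((0, 0), (100, 0), (20, 50), (80, 46))  # 4.0
```

Multiplying the pixel result by the calibration factor from S102 yields the difference in millimetres.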
As shown in fig. 8, the flow chart for determining the position of the acetabular cup specifically includes the following steps:
and S301, converting the X-ray image of the hip joint into a gray scale image.
The X-ray image of the hip joint was converted to a 0-255 gray scale image.
S302, predicting each pixel value of the gray scale image based on the second neural network model, and determining the position of the femoral head.
Before prediction is carried out, the second neural network model is obtained through sample training. Specifically, the unmarked original image (the gray scale image corresponding to an X-ray sample image of the hip joint) and the manually labeled image marking the attribute value of each pixel are fed into a convolutional neural network, wherein the labeled image comprises two attribute values, named 0 and 1 respectively: the value 0 represents a background pixel and 1 represents a femoral head pixel. The data are fed into the convolutional neural network, and convolution, pooling and sampling with iterative learning and training are performed to obtain the second neural network model. It should be noted that the convolutional neural network in this step may be LeNet, AlexNet, ZF-Net, GoogLeNet, VGG, Inception, ResNet, DenseNet, Inception-ResNet, or the like.
After the second neural network model is obtained, the gray scale map corresponding to the X-ray image of the hip joint is input into the second neural network model, and each pixel value can be predicted. Automatically attributing each pixel value of the X-ray image to an attribute: 0-background, 1-femoral head, completing the automatic identification of the femoral head region (i.e., femoral head position), as shown in fig. 9. Fig. 9 is a schematic view of identifying a femoral head.
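The per-pixel attribution can be read off the network's class score maps with an arg-max. A minimal sketch follows; the (num_classes, H, W) score layout is our assumption about the model output, not stated in the disclosure:

```python
import numpy as np

def predict_mask(scores):
    """Per-pixel attribute map (0 = background, 1 = femoral head) from
    per-class score maps of shape (num_classes, H, W)."""
    return np.argmax(scores, axis=0).astype(np.uint8)

scores = np.zeros((2, 2, 3))
scores[1, 0, :2] = 5.0          # strong femoral-head evidence at two pixels
mask = predict_mask(scores)
```

The resulting binary mask is the femoral head region used in the following centroid step.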
And S303, calculating the rotation center of the femoral head according to the centroid formula of the plane image.
Because the obtained image of the femoral head region is a binary image with uniform mass distribution, the center of mass coincides with the centroid, and the coordinate of the center point of the femoral head, namely the femoral head rotation center, can be calculated according to the centroid formula of the planar image. Assuming that the binary image is B[i, j], the femoral head center point coordinates (x̄, ȳ) can be obtained according to the following formula:

x̄ = (1/A) Σᵢ Σⱼ j·B[i, j],  ȳ = (1/A) Σᵢ Σⱼ i·B[i, j]

wherein A = Σᵢ Σⱼ B[i, j] is the number of pixels in the femoral head region.

What is obtained here is the pixel coordinate of the femoral head center point, which needs to be converted into image coordinates. Taking (c_x, c_y) as the coordinate center of the image plane, the transformation formula from the pixel coordinate (x̄, ȳ) to the image coordinates (x', y') is:

x' = (x̄ − c_x)·S_x,  y' = (ȳ − c_y)·S_y

wherein S_x and S_y are respectively the row and column pitch of the image array. Finally, the position of the femoral head rotation center is obtained through an output display module, as shown in fig. 10; the center point of the circle in fig. 10 is the femoral head rotation center.
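The centroid computation and the pixel-to-image conversion can be sketched as follows. This is a minimal Python/NumPy illustration; the conversion convention (image-plane origin given in pixel coordinates, separate row/column pitch) is our reconstruction and may differ from the original implementation:

```python
import numpy as np

def centroid(mask):
    """Centre of mass of a binary region; for a uniform binary image this
    coincides with the geometric centroid (femoral head rotation center)."""
    ii, jj = np.nonzero(mask)
    return ii.mean(), jj.mean()

def pixel_to_image(pt, pitch, origin):
    """Convert a (row, col) pixel coordinate to image coordinates (x', y'),
    given the row/column pitch (S_y, S_x) and the pixel position of the
    image-plane origin (assumed convention)."""
    i, j = pt
    ci, cj = origin
    s_y, s_x = pitch
    return (j - cj) * s_x, (i - ci) * s_y

mask = np.zeros((5, 5))
mask[1:4, 1:4] = 1              # 3x3 femoral-head block
c = centroid(mask)              # (2.0, 2.0)
xy = pixel_to_image(c, pitch=(1.0, 0.5), origin=(2.0, 2.0))
```

The same centroid helper is reused later for the femoral neck base center (step S403).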
S304, calculating the diameter of the acetabular cup according to the diameter of the femoral head.
The diameter of the femoral head is determined according to the femoral head region and the femoral head rotation center, and the diameter of the acetabular cup is then calculated from the diameter of the femoral head. The acetabular cup diameter may be estimated from the femoral head diameter with reference to any existing estimation method.
S305, determining the position of the acetabular cup according to the femoral head rotation center and the diameter of the acetabular cup.
The acetabular cup position is automatically determined from the femoral head diameter and the femoral head center of rotation position, as shown in fig. 11. The area delineated by the lines in FIG. 11 is the acetabular cup location.
As shown in fig. 12, a flowchart for determining the specification and model of the femoral stem prosthesis specifically includes the following steps:
s401, converting the X-ray image of the hip joint into a gray-scale image.
The X-ray image of the hip joint was converted to a 0-255 gray scale image.
S402, identifying the gray level map based on the third neural network model, and determining the medullary cavity anatomical axis.
Specifically, the method for determining the medullary cavity anatomical axis comprises the following steps:
firstly, predicting each pixel value of a gray scale image based on a third neural network model, and determining a femoral head region and a cortical bone region;
Before prediction is carried out, the third neural network model is obtained through sample training. Specifically, the unmarked original image (the gray scale image corresponding to an X-ray sample image of the hip joint) and the manually labeled image marking the attribute value of each pixel are fed into a convolutional neural network, wherein the labeled image comprises three attribute values, named 0, 1 and 2 respectively: the value 0 represents a background pixel, 1 represents a femoral head pixel, and 2 represents a cortical bone pixel. The data are fed into the convolutional neural network, and convolution, pooling and sampling with iterative learning and training are performed to obtain the third neural network model. It should be noted that the convolutional neural network in this step may be LeNet, AlexNet, ZF-Net, GoogLeNet, VGG, Inception, ResNet, DenseNet, Inception-ResNet, or the like.
After the third neural network model is obtained, the gray scale map corresponding to the X-ray image of the hip joint is input into the third neural network model, and each pixel value can be predicted. Automatically attributing each pixel value of the X-ray image to an attribute: 0-background, 1-femoral head, 2-cortical bone, completing the automatic identification of femoral head region, cortical bone region, as shown in fig. 13. Fig. 13 is a schematic diagram of identifying femoral head region and cortical bone region.
Secondly, determining a medullary cavity region according to a femoral head region and a cortical bone region;
Specifically, the femoral region from the lesser trochanter ending position down to the distal femur is taken, and the cortical bone region is subtracted from the femoral region by image subtraction to obtain the medullary cavity region, as shown in fig. 14.
And finally, performing straight line fitting on the coordinates of a plurality of central points of the medullary cavity region to determine the medullary cavity anatomical axis.
Specifically, as shown in fig. 15, below the lesser trochanter ending position, each transverse row intersects the medullary cavity at four coordinates, which are respectively named A1, A2, B1 and B2 from left to right. The midpoint can be determined from the two points A1(X1, Y1) and A2(X2, Y2); its coordinates are:

((X1 + X2)/2, (Y1 + Y2)/2)

The midpoint of B1 and B2 can be calculated in the same way. The midpoint coordinates of the medullary cavity are calculated in this way row by row, and the points are fitted into a straight line, namely the medullary cavity anatomical axis (also the femoral anatomical axis).
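The region subtraction and the straight-line fitting described above can be sketched in Python/NumPy as follows. This is a minimal illustration under our own assumptions (per-row midpoints taken between the leftmost and rightmost cavity pixels, fitted by least squares; the function names are illustrative):

```python
import numpy as np

def medullary_cavity(femur_mask, cortical_mask):
    """Medullary cavity region = femoral region minus cortical bone region."""
    return np.logical_and(femur_mask, np.logical_not(cortical_mask))

def canal_axis(cavity_mask):
    """Fit the medullary cavity anatomical axis: per transverse row, take the
    midpoint of the leftmost/rightmost cavity pixels, then least-squares fit
    a line x = m*y + c through the midpoints."""
    pts = []
    for y in range(cavity_mask.shape[0]):
        xs = np.nonzero(cavity_mask[y])[0]
        if xs.size:
            pts.append((y, (xs[0] + xs[-1]) / 2.0))
    ys, mids = np.array(pts, float).T
    m, c = np.polyfit(ys, mids, 1)
    return m, c

# Synthetic femur: columns 5-14 are bone, cortical walls at 5-7 and 12-14,
# leaving a vertical cavity in columns 8-11 whose midline is x = 9.5.
femur = np.zeros((10, 20), bool); femur[:, 5:15] = True
cort = np.zeros((10, 20), bool); cort[:, 5:8] = True; cort[:, 12:15] = True
m, c = canal_axis(medullary_cavity(femur, cort))
```

For this synthetic strip the fitted line is essentially vertical (slope ≈ 0 in the x = m·y + c parameterisation) through x = 9.5.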
And S403, identifying the gray scale map based on a fourth neural network model, and determining the central axis of the femoral neck.
Specifically, the step of determining the central axis of the femoral neck comprises the following steps:
firstly, predicting each pixel value of the gray scale image based on a fourth neural network model, and determining a femoral head region and a femoral neck base region;
before prediction is performed, a fourth neural network model is obtained according to sample training. Specifically, an unmarked original image (a gray scale image corresponding to an X-ray sample image of a hip joint) and a marked image for manually identifying the attribute values of the marked pixels are transmitted into a convolutional neural network, wherein the marked image comprises three attribute values which are named 0, 1 and 2 respectively. The value 0 represents the background pixel, 1 represents the femoral head pixel, and 2 represents the femoral neck base pixel; and transmitting the data to a convolutional neural network, and performing convolutional pooling sampling and iterative learning training to obtain a fourth neural network model. It should be noted that the convolutional neural network in this step may be a convolutional neural network LeNet, a convolutional neural network AlexNet, a visualized convolutional neural network ZF-Net, a convolutional neural network GoogleNet, a convolutional neural network VGG, a convolutional neural network inclusion, a convolutional neural network ResNet, a convolutional neural network DensNet, a convolutional neural network inclusion ResNet, or the like.
After the fourth neural network model is obtained, the gray scale map corresponding to the X-ray image of the hip joint is input into the fourth neural network model, and each pixel value can be predicted. Automatically attributing each pixel value of the X-ray image to an attribute: 0-background, 1-femoral head, 2-femoral neck base pixel, completing the automatic identification of femoral head region, femoral neck base region, as shown in fig. 16. Fig. 16 is a schematic view of identifying a femoral head region and a femoral neck base region.
Secondly, calculating the femoral head center coordinates and the femoral neck base center coordinates corresponding to the femoral head region and the femoral neck base region according to the centroid formula for a plane image.
The femoral head center coordinates and the femoral neck base center coordinates are calculated in the same way; both may refer to the calculation of the femoral head center coordinates in step S303, and details are not repeated here.
And finally, determining the central axis of the femoral neck according to the central coordinates of the femoral head and the central coordinates of the base of the femoral neck.
Specifically, the line connecting the femoral head center coordinate and the femoral neck base center coordinate is the femoral neck central axis, as shown in fig. 17. The two obliquely downward line segments in fig. 17 are femoral neck central axes.
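The centroid calculation and the construction of the femoral neck central axis can be sketched as follows. The centroid of a plane region is the mean of the coordinates of its pixels, and the axis is the line through the two centers; the small label map is a made-up example:

```python
import numpy as np

def centroid(mask):
    """Centroid (x, y) of a binary region, per the plane-image centroid
    formula: the mean of the coordinates of all pixels in the region."""
    ys, xs = np.nonzero(mask)
    return (float(xs.mean()), float(ys.mean()))

# Hypothetical label map: 1 = femoral head, 2 = femoral neck base.
labels = np.array([
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 2, 2, 0, 0],
    [0, 0, 2, 2, 0, 0],
])

head_center = centroid(labels == 1)
neck_base_center = centroid(labels == 2)

# The femoral neck central axis is the line through the two centers,
# represented here by a point and a unit direction vector.
d = np.array(neck_base_center) - np.array(head_center)
axis_dir = d / np.linalg.norm(d)
print(head_center, neck_base_center)
```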
S404, determining a femoral neck shaft angle according to the medullary cavity anatomical axis and the femoral neck central axis.
Specifically, the included angle formed by the medullary cavity anatomical axis and the femoral neck central axis is the femoral neck shaft angle.
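The included angle between the two axes can be computed from their direction vectors. The vectors below are assumed values for illustration (image coordinates, y growing downward), chosen to fall in the typical 125–135° range of the neck shaft angle:

```python
import numpy as np

def angle_between(u, v):
    """Included angle in degrees between two direction vectors."""
    u = np.asarray(u, float)
    v = np.asarray(v, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against rounding pushing the cosine outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Hypothetical directions in image coordinates (y grows downward):
shaft_axis = (0.0, 1.0)        # medullary cavity anatomical axis, pointing distally
neck_axis = (-0.766, -0.643)   # femoral neck central axis, pointing toward the head

neck_shaft_angle = angle_between(shaft_axis, neck_axis)
print(round(neck_shaft_angle, 1))  # close to 130 degrees for these directions
```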
S405, determining the specification and model of the femoral stem prosthesis according to the femoral neck shaft angle, the medullary cavity area determined in the process of determining the medullary cavity anatomical axis, and the femoral head rotation center.
Specifically, a recommendation for the femoral stem prosthesis model can be given according to the angle value of the femoral neck shaft angle, the medullary cavity morphology, and the femoral head rotation center position. Femoral stem prosthesis models are distinguished by characteristics of the femoral stem prosthesis such as shape and size.
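One way such a recommendation could be organized is a lookup keyed on the measured parameters. Everything below is hypothetical: the angle bands, canal categories, and stem names are invented placeholders, and a real system would draw on vendor prosthesis templates:

```python
# Hypothetical mapping from (angle class, canal shape) to a stem family.
STEM_TABLE = {
    ("standard", "normal"): "standard-offset stem",
    ("varus", "normal"): "high-offset stem",
    ("valgus", "narrow"): "reduced lateral-flare stem",
}

def classify_angle(neck_shaft_angle):
    """Assumed bands: below 125 deg varus, above 135 deg valgus."""
    if neck_shaft_angle < 125.0:
        return "varus"
    if neck_shaft_angle > 135.0:
        return "valgus"
    return "standard"

def recommend_stem(neck_shaft_angle, canal_shape):
    key = (classify_angle(neck_shaft_angle), canal_shape)
    return STEM_TABLE.get(key, "manual templating required")

print(recommend_stem(130.0, "normal"))
```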
Further, as a supplementary illustration of the embodiment of fig. 1, after the osteotomy line position is determined, a post-operative leg length difference and offset distances are calculated from the osteotomy line position. Specifically, the offset distances include the femoral offset, which is the perpendicular distance from the femoral head rotation center to the long axis of the femoral shaft, and the combined offset, which is the cumulative sum of the femoral offset and the acetabular offset.
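The femoral offset, as the perpendicular distance from the femoral head rotation center to the femoral shaft long axis, reduces to a point-to-line distance. The coordinates and acetabular offset below are assumed values in millimetres after scale calibration:

```python
import numpy as np

def perpendicular_distance(point, line_point, line_dir):
    """Perpendicular distance from `point` to the line through `line_point`
    with direction `line_dir` (here, the femoral shaft long axis)."""
    p = np.asarray(point, float) - np.asarray(line_point, float)
    d = np.asarray(line_dir, float)
    d = d / np.linalg.norm(d)
    # Magnitude of the 2-D cross product gives the perpendicular distance.
    return float(abs(p[0] * d[1] - p[1] * d[0]))

# Hypothetical measurements in millimetres after scale calibration:
head_center = (40.0, 20.0)   # femoral head rotation center
shaft_point = (80.0, 20.0)   # a point on the femoral shaft long axis
shaft_dir = (0.0, 1.0)       # shaft axis direction (vertical)

femoral_offset = perpendicular_distance(head_center, shaft_point, shaft_dir)
acetabular_offset = 5.0      # assumed acetabular offset, mm
combined_offset = femoral_offset + acetabular_offset
print(femoral_offset, combined_offset)
```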
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from that presented herein.
According to an embodiment of the present application, there is also provided a deep learning and X-ray based total hip image processing apparatus for implementing the method described in fig. 1 to 17, as shown in fig. 18, the apparatus including:
a scale calibration unit 51, configured to perform true-size reduction of the X-ray image of the hip joint according to the ratio of the image size of the reference object to its actual size;
a leg length difference determination unit 52, configured to identify the reduced X-ray image of the hip joint based on the first neural network model, and determine a leg length difference;
the acetabular cup position determining unit 53 is used for identifying the reduced X-ray image of the hip joint based on the second neural network model and determining the position of the acetabular cup;
a femoral stem prosthesis specification determining unit 54, configured to identify the reduced X-ray image of the hip joint based on the third neural network model and the fourth neural network model, and determine a femoral stem prosthesis specification;
an osteotomy line determining unit 55 for determining an osteotomy line position based on the center of rotation of the femoral stem prosthesis and the center of rotation of the acetabular cup determined during the X-ray image recognition of the hip joint.
Specifically, the specific process of implementing the functions of each unit and module in the device in the embodiment of the present application may refer to the related description in the method embodiment, and is not described herein again.
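The true-size reduction performed by the scale calibration unit 51 amounts to deriving a millimetres-per-pixel factor from the reference object of known size. The reference dimensions below are assumed for illustration:

```python
# The reference object has a known true size; its measured size in the
# image yields a millimetres-per-pixel factor that restores true scale.
REFERENCE_TRUE_MM = 25.0    # assumed true diameter of the reference object, mm
reference_pixels = 100.0    # measured diameter of its image, in pixels

mm_per_pixel = REFERENCE_TRUE_MM / reference_pixels

def to_true_size(distance_pixels):
    """Convert a distance measured on the X-ray image to millimetres."""
    return distance_pixels * mm_per_pixel

print(to_true_size(180.0))  # 180 px at 0.25 mm/px is 45.0 mm
```

All subsequent measurements (leg length difference, offsets, prosthesis sizing) can then be expressed in true millimetres.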
From the above description, it can be seen that, in the deep learning and X-ray based total hip joint image processing apparatus according to the embodiment of the present application, an X-ray image of a hip joint is acquired, the X-ray image of the hip joint including an image of a reference object, the reference object being a reference object of known size; the size of the X-ray image of the hip joint is reduced according to the ratio of the image size of the reference object to the actual size of the reference object; the reduced X-ray image of the hip joint is identified based on a deep learning model, and the leg length difference, the position of the acetabular cup, and the specification and model of the femoral stem prosthesis are determined; and the osteotomy line position is determined from the center of rotation of the femoral stem prosthesis and the center of rotation of the acetabular cup determined during recognition of the X-ray image of the hip joint. It can be seen that, in the preoperative planning mode for total hip replacement of this embodiment, the X-ray image of the hip joint is reduced to its true size, so that subsequent position identification based on actual size is more accurate; in addition, the identification of the X-ray image is performed by a deep learning model, which further ensures the accuracy and speed of the leg length difference, the acetabular cup position, the femoral stem prosthesis specification and model, and the osteotomy line position determined from the identification results, thereby providing better preoperative support for total hip replacement surgery.
According to an embodiment of the present application, there is further provided a computer-readable storage medium, wherein the computer-readable storage medium stores computer instructions for causing the computer to execute the method for processing a total hip image based on deep learning and X-ray in the above method embodiment.
According to an embodiment of the present application, there is also provided an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to cause the at least one processor to perform the method of deep learning and X-ray based total hip image processing in the above method embodiments.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices, and they may alternatively be implemented in program code executable by a computing device, such that they may be stored in a storage device and executed by the computing device, or separately fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.