WO2022007972A1 - Total hip joint image processing method and apparatus - Google Patents


Info

Publication number
WO2022007972A1
WO2022007972A1 (PCT/CN2021/107720)
Authority
WO
WIPO (PCT)
Prior art keywords
hip joint
image
neural network
femoral
ray image
Application number
PCT/CN2021/107720
Other languages
French (fr)
Chinese (zh)
Inventor
张逸凌
刘星宇
Original Assignee
北京长木谷医疗科技有限公司
Application filed by 北京长木谷医疗科技有限公司
Publication of WO2022007972A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F2/00Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
    • A61F2/02Prostheses implantable into the body
    • A61F2/30Joints
    • A61F2/46Special tools or methods for implanting or extracting artificial joints, accessories, bone grafts or substitutes, or particular adaptations therefor
    • A61F2002/4632Special tools or methods for implanting or extracting artificial joints, accessories, bone grafts or substitutes, or particular adaptations therefor using computer-controlled surgery, e.g. robotic surgery
    • A61F2002/4633Special tools or methods for implanting or extracting artificial joints, accessories, bone grafts or substitutes, or particular adaptations therefor using computer-controlled surgery, e.g. robotic surgery for selection of endoprosthetic joints or for pre-operative planning

Definitions

  • the present application relates to the technical field of data processing, and in particular, to a method and device for processing a total hip joint image.
  • the preoperative planning of total hip replacement surgery mainly includes calculating the required prosthesis size and the position of the osteotomy line.
  • the preoperative planning of total hip replacement surgery plays a very important role in the success rate of the surgery. Therefore, it is very important to provide accurate preoperative planning.
  • at present, the main preoperative planning method is manual measurement with various tools, which is inefficient and cannot guarantee accuracy. Therefore, there is an urgent need for a more convenient and accurate preoperative planning method to provide better preoperative support for total hip replacement surgery.
  • the present application proposes a method and device for processing a total hip joint image, so as to provide a more convenient and more accurate preoperative planning manner to provide better preoperative support for total hip replacement surgery.
  • a total hip joint image processing method based on deep learning and X-ray is provided.
  • the total hip image processing method based on deep learning and X-ray according to the present application includes:
  • the X-ray image of the hip joint includes an image of a reference object of known size. According to the ratio of the reference object's image size to its actual size, the X-ray image of the hip joint is restored to its real size. Based on the deep learning model, the restored X-ray image of the hip joint is identified to obtain a recognition result, which includes the key point positions used to determine the leg length difference, the femoral head position used to determine the position of the acetabular cup, and the femoral head area, cortical bone area, and femoral neck base area used to determine the size of the femoral stem prosthesis. The position of the osteotomy line is then determined according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during recognition of the hip joint X-ray image.
  • identifying the restored X-ray image of the hip joint based on the deep learning model to determine the leg length difference includes: converting the X-ray image of the hip joint into a grayscale image; predicting each pixel value of the grayscale image based on the first neural network model to determine the teardrop key points and the lesser trochanter key points; and determining the leg length difference according to the teardrop key points and the lesser trochanter key points.
  • identifying the restored X-ray image of the hip joint based on the deep learning model to determine the position of the acetabular cup includes: converting the X-ray image of the hip joint into a grayscale image; predicting each pixel value of the grayscale image based on the second neural network model to determine the position of the femoral head; calculating the rotation center of the femoral head according to the centroid formula of the plane image; calculating the diameter of the acetabular cup according to the diameter of the femoral head; and determining the position of the acetabular cup according to the rotation center of the femoral head and the diameter of the acetabular cup.
  • identifying the restored X-ray image of the hip joint based on the deep learning model to determine the specification and model of the femoral stem prosthesis includes: converting the X-ray image of the hip joint into a grayscale image; identifying the grayscale image based on the third neural network model to determine the anatomical axis of the medullary canal; identifying the grayscale image based on the fourth neural network model to determine the central axis of the femoral neck; determining the femoral neck shaft angle according to the anatomical axis of the medullary canal and the central axis of the femoral neck; and determining the size of the femoral stem prosthesis according to the femoral neck shaft angle, the medullary canal area determined while determining the anatomical axis of the medullary canal, and the rotation center of the femoral head.
  • determining the position of the osteotomy line according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during recognition of the hip joint X-ray image includes: making the rotation center of the femoral stem prosthesis coincide with the rotation center of the acetabular cup to determine the actual position of the femoral stem prosthesis; and determining the position of the osteotomy line along the coating position of the femoral stem prosthesis.
  • identifying the grayscale image based on the third neural network model to determine the anatomical axis of the medullary canal includes: predicting each pixel value of the grayscale image based on the third neural network model to determine the femoral head region and the cortical bone region; determining the medullary canal region according to the femoral head region and the cortical bone region; and performing a linear fit on the coordinates of multiple center points of the medullary canal region to determine the medullary canal anatomical axis.
  • identifying the grayscale image based on the fourth neural network model to determine the central axis of the femoral neck includes: predicting each pixel value of the grayscale image based on the fourth neural network model to determine the femoral head region and the femoral neck base region; calculating the center coordinates of the femoral head and of the femoral neck base corresponding to those regions according to the centroid formula of the plane image; and determining the central axis of the femoral neck according to the femoral head center coordinates and the femoral neck base center coordinates.
  • the deep learning and X-ray-based total hip image processing method of the present application further includes: calculating the postoperative leg length difference and offset distance according to the position of the osteotomy line.
  • a method for processing a total hip joint image includes: inputting the original X-ray image of the target hip joint, whose information is to be determined, into a pre-trained first neural network model to identify at least one teardrop key point position and at least one lesser trochanter key point position in the X-ray image of the target hip joint; and determining the leg length difference corresponding to the X-ray image of the target hip joint according to the straight line determined by the teardrop key point positions and the lesser trochanter key point positions. The training process of the first neural network model includes: inputting original hip joint X-ray images together with the labeled teardrop key point positions and lesser trochanter key point positions into a convolutional neural network as a sample set, fitting the input original images with a Gaussian distribution function at the feature points, and performing convolution and pooling sampling with iterative learning and training to obtain the first neural network model.
  • a method for processing a total hip joint image includes: inputting the original X-ray image of the target hip joint, whose information is to be determined, into a pre-trained second neural network model to identify the position of the femoral head; calculating the rotation center of the femoral head based on the centroid formula of the plane image; determining the diameter of the acetabular cup according to the diameter of the femoral head; and determining the position of the acetabular cup according to the rotation center of the femoral head and the diameter of the acetabular cup. The training process of the second neural network model includes: inputting original hip joint X-ray images and labeled pixel attribute values into a convolutional neural network, and performing convolution and pooling sampling with iterative learning and training to obtain the second neural network model, where the pixel attribute values include 0 representing background pixels and 1 representing femoral head pixels.
  • a method for processing a total hip joint image includes: inputting the original X-ray image of the target hip joint, whose information is to be determined, into a pre-trained neural network model to identify the femoral head area and the cortical bone area; determining the medullary canal area according to the femoral head area and the cortical bone area; and performing a linear fit on the coordinates of multiple center points of the medullary canal area to determine the anatomical axis of the medullary canal.
  • the original X-ray image is input into the pre-trained neural network model to identify the femoral head area and the femoral neck base area; the center coordinates of the femoral head and of the femoral neck base are determined based on those areas; and the central axis of the femoral neck is determined according to the center coordinates of the femoral head and of the femoral neck base.
  • a total hip image processing device based on deep learning and X-ray is provided.
  • the total hip image processing device based on deep learning and X-ray includes: a scale calibration unit configured to restore the X-ray image of the hip joint to its real size according to the ratio of the reference object's image size to its actual size; a leg length difference determination unit configured to identify the restored X-ray image of the hip joint based on the first neural network model and determine the leg length difference; an acetabular cup position determination unit configured to identify the restored X-ray image based on the second neural network model and determine the position of the acetabular cup; a femoral stem prosthesis specification determination unit configured to identify the restored X-ray image based on the third and fourth neural network models and determine the size of the femoral stem prosthesis; and an osteotomy line determination unit configured to determine the position of the osteotomy line according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined while identifying the X-ray image of the hip joint.
  • a computer-readable storage medium stores computer instructions configured to cause a computer to execute the deep learning and X-ray-based total hip image processing method according to any one of the above aspects.
  • an electronic device includes: at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores a computer program executable by the at least one processor, and the computer program, when executed by the at least one processor, causes the at least one processor to execute the deep learning and X-ray-based total hip image processing method according to any one of the above aspects.
  • an X-ray image of the hip joint is acquired, where the X-ray image includes an image of a reference object of known size. According to the ratio of the reference object's image size to its actual size, the X-ray image of the hip joint is restored to its real size; the restored X-ray image is identified based on the deep learning model to determine the leg length difference, the position of the acetabular cup, and the specification and model of the femoral stem prosthesis; and the position of the osteotomy line is determined according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during recognition of the X-ray image of the hip joint.
  • the X-ray image of the hip joint is restored to its real size, so subsequent position recognition better matches the actual dimensions;
  • the recognition is based on the deep learning model, which ensures the accuracy and speed of the leg length difference, acetabular cup position, femoral stem prosthesis size, and osteotomy line position determined from the recognition results. This provides better preoperative support for total hip replacement surgery.
  • FIG. 1 is a flowchart of a method for processing a total hip joint image based on deep learning and X-ray provided according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of an X-ray image of a hip joint provided according to an embodiment of the present application
  • FIGS. 3-4 are schematic diagrams of determining the actual position of the osteotomy line in the clinic according to an embodiment of the present application.
  • FIG. 5 is a flowchart of a method for determining a leg length difference provided according to an embodiment of the present application
  • FIG. 6 is a schematic diagram of automatically identifying the key points of teardrops and the key points of the lesser trochanter according to an embodiment of the present application
  • FIG. 7 is a schematic diagram of a leg length difference determination provided according to an embodiment of the present application.
  • FIG. 8 is a flowchart of a method for determining the position of an acetabular cup provided according to an embodiment of the present application
  • FIG. 9 is a schematic diagram of identifying a femoral head according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a center of rotation of a femoral head provided according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of the position of an acetabular cup provided according to an embodiment of the present application.
  • FIG. 12 is a flowchart of a method for determining the specification and model of a femoral stem prosthesis provided according to an embodiment of the present application
  • FIG. 13 is a schematic diagram of identifying a femoral head region and a cortical bone region provided according to an embodiment of the present application
  • FIG. 14 is a schematic diagram of a medullary cavity region provided according to an embodiment of the present application.
  • Fig. 15 is a schematic diagram of determining the anatomical axis of the medullary cavity according to an embodiment of the present application.
  • FIG. 16 is a schematic diagram of identifying a femoral head region and a femoral neck base region according to an embodiment of the present application
  • FIG. 17 is a schematic diagram of a central axis of a femoral neck provided according to an embodiment of the present application.
  • FIG. 18 is a block diagram of a total hip joint image processing apparatus based on deep learning and X-ray provided according to an embodiment of the present application.
  • a method for processing a total hip joint image based on deep learning and X-ray includes the following steps:
  • An X-ray image of the hip joint is obtained by taking an X-ray of the hip joint while taking an object of known size, the reference object, in the same photo.
  • the X-ray image of the hip joint thus obtained includes the image of the reference object.
  • FIG. 2 shows an X-ray image of the hip joint, in which the standard-size object at the bottom center of the image is the image of the reference object.
  • the choice of reference object and its placement position during imaging can be adjusted as appropriate, which is not limited in this embodiment.
  • the size of the reference object is known, and its image size can be obtained by measurement. From the ratio of the reference object's image size to its actual size, the scale between the X-ray image of the hip joint and the actual hip joint can be determined (the two share the same scale), and the true size of the X-ray image of the hip joint can then be restored according to that scale. Restoring the real size of the X-ray image is the basis for subsequent image recognition: it reduces the gap between the determined leg length difference, acetabular cup position, femoral stem prosthesis size, and osteotomy line position and their actual counterparts, ensuring the accuracy of recognition.
  • the restoration operation may select a key part of an object of known size, determine the ratio from it, and then correct the scale of the X-ray image of the hip joint according to that ratio.
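The ratio correction described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and the assumption that the reference object's image size is measured in pixels are my own.

```python
import numpy as np

def restore_true_size(image, ref_image_size_px, ref_actual_size_mm):
    """Restore a hip-joint X-ray image to its true physical scale.

    The scale factor (mm per pixel) is the ratio of the reference object's
    actual size to its measured image size; because the reference object and
    the hip joint were captured in the same exposure, the same ratio applies
    to the whole X-ray image.
    """
    mm_per_px = ref_actual_size_mm / ref_image_size_px
    h, w = image.shape[:2]
    # Physical extent of the image in millimetres (height, width).
    true_size_mm = (h * mm_per_px, w * mm_per_px)
    return mm_per_px, true_size_mm
```

For example, a 30 mm calibration object imaged across 60 pixels gives 0.5 mm per pixel, so every pixel distance measured later can be converted to millimetres before planning.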
  • S103: Identify the restored X-ray image of the hip joint based on the deep learning model to obtain a recognition result, where the recognition result includes the key point positions used to determine the leg length difference, the femoral head position used to determine the position of the acetabular cup, and the femoral head area, cortical bone area, and femoral neck base area used to determine the size of the femoral stem prosthesis.
  • the deep learning model is a neural network model.
  • the inputs and outputs of the models used to determine the leg length difference, the position of the acetabular cup, and the size of the femoral stem prosthesis may differ, but the principle of model training is the same.
  • the principle of neural network model training is: convert the X-ray image of the hip joint into a 0-255 grayscale image, then manually select and label the image, assigning each pixel one of several attribute values (the number of attribute values, e.g. two or three, depends on the actual requirements) with respective names, and then input the labeled data into the neural network model for convolution and pooling sampling and iterative learning and training to obtain the neural network model.
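A minimal sketch of this data-preparation step, assuming standard luminance-weighted grayscale conversion; the names `prepare_sample` and `regions` are illustrative and not from the patent:

```python
import numpy as np

def prepare_sample(xray_rgb, regions):
    """Convert an X-ray to a 0-255 grayscale image and build a label mask.

    `regions` maps an attribute value (e.g. 1 = femoral head) to a boolean
    mask of the same height/width; unlabeled pixels keep attribute 0
    (background), matching the pixel-attribute scheme described above.
    """
    # Luminance-weighted grayscale conversion, rounded and clipped to 0-255.
    gray = np.clip(np.round(0.299 * xray_rgb[..., 0]
                            + 0.587 * xray_rgb[..., 1]
                            + 0.114 * xray_rgb[..., 2]), 0, 255).astype(np.uint8)
    label = np.zeros(gray.shape, dtype=np.uint8)
    for value, mask in regions.items():
        label[mask] = value  # write each manually annotated region's attribute
    return gray, label
```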
  • the neural network model in this step is a classification neural network, which is to classify different areas in the image.
  • for example, when determining the leg length difference, the neural network model is mainly used to identify teardrop key points and lesser trochanter key points; when determining the position of the acetabular cup, it is mainly used to identify the femoral head area; and when determining the size of the femoral stem prosthesis, it is mainly used to identify the femoral head and cortical bone areas as well as the femoral head and femoral neck base areas.
  • the neural network in this embodiment may be a convolutional neural network such as LeNet, AlexNet, ZF-Net, GoogLeNet, VGG, Inception, ResNet, DenseNet, Inception-ResNet, etc.
  • by moving the femoral stem prosthesis so that its rotation center coincides with the previously calculated rotation center of the acetabular cup, the actual position of the femoral stem prosthesis is obtained.
  • the actual clinical position of the osteotomy line can then be determined along the coating position of the femoral stem prosthesis, as shown in FIGS. 3-4.
  • Figure 3 shows moving the femoral stem prosthesis to a predetermined position so that the rotation center of the femoral stem prosthesis coincides with the previously calculated acetabular cup rotation center position
  • Figure 4 shows the position of the osteotomy line determined according to the shape of the femoral stem prosthesis.
  • an X-ray image of the hip joint is obtained, and the X-ray image includes the image of a reference object of known size; the X-ray image of the hip joint is restored to its real size according to the ratio of the reference object's image size to its actual size; the restored X-ray image is identified based on the deep learning model to determine the leg length difference, the position of the acetabular cup, and the size of the femoral stem prosthesis; and the position of the osteotomy line is determined according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during the recognition.
  • the X-ray image of the hip joint is restored to its real size, so subsequent position recognition better matches the actual dimensions;
  • the recognition is based on the deep learning model, which ensures the accuracy and speed of the leg length difference, acetabular cup position, femoral stem prosthesis size, and osteotomy line position determined from the recognition results. This provides better preoperative support for total hip replacement surgery.
  • the following describes in detail the steps in step S103 of determining the leg length difference, the position of the acetabular cup, and the specification and model of the femoral stem prosthesis, respectively.
  • FIG. 5 shows a flowchart of an embodiment of determining the leg length difference, which may include the following steps:
  • S202 Predict the value of each pixel of the grayscale image based on the first neural network model, and determine the position of the key point of the teardrop and the key point of the lesser trochanter of the femur.
  • before the first neural network model makes predictions, it must be obtained by training on samples. The unlabeled original images (grayscale images corresponding to hip joint X-ray sample images), together with the manually identified and labeled teardrop key points and lesser trochanter key points, are passed into the convolutional neural network; the input original images are fitted with a Gaussian distribution function at the feature points, and convolution and pooling sampling with iterative learning and training yields the first neural network model.
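One common way to realize the Gaussian-distribution fitting of feature points is to train the network against a 2D Gaussian heatmap centered on each labeled key point. The sketch below assumes that interpretation; the function name and `sigma` default are illustrative, not from the patent:

```python
import numpy as np

def gaussian_heatmap(shape, keypoint, sigma=3.0):
    """Build a Gaussian regression target centered on a key point.

    `shape` is (height, width) of the grayscale image; `keypoint` is
    (row, col) of a labeled teardrop or lesser-trochanter key point.
    The network is trained to reproduce this heatmap for each key point.
    """
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    r0, c0 = keypoint
    # Squared distance of every pixel from the key point.
    d2 = (rows - r0) ** 2 + (cols - c0) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))
```

At inference time, the predicted key point is recovered as the location of the heatmap's maximum.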
  • the convolutional neural network in this step may be LeNet, AlexNet, ZF-Net, GoogLeNet, VGG, Inception, ResNet, DenseNet, Inception-ResNet, etc.
  • FIG. 6 is a schematic diagram of automatically identifying the key points of the tear drop and the key points of the lesser trochanter of the femur.
  • the horizontal straight line is determined by the two teardrop key points (it is the line connecting them); the two vertical segments are the perpendicular distances from the lesser trochanter key points to that horizontal line, denoted A and B respectively; and the difference between A and B is the leg length difference.
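The A/B construction can be sketched as follows, assuming the key points are given as (x, y) coordinates in the size-restored image; the function name is illustrative:

```python
import numpy as np

def leg_length_difference(teardrop_1, teardrop_2, trochanter_l, trochanter_r):
    """Leg length difference from the four identified key points.

    The reference line connects the two teardrop key points; A and B are
    the perpendicular distances from the two lesser-trochanter key points
    to that line, and the leg length difference is |A - B|.
    """
    p1 = np.asarray(teardrop_1, dtype=float)
    p2 = np.asarray(teardrop_2, dtype=float)
    line = p2 - p1

    def perp_dist(point):
        # Point-to-line distance via the magnitude of the 2D cross product.
        d = np.asarray(point, dtype=float) - p1
        return abs(line[0] * d[1] - line[1] * d[0]) / np.linalg.norm(line)

    a, b = perp_dist(trochanter_l), perp_dist(trochanter_r)
    return abs(a - b)
```

With the image already restored to true scale, the returned value is directly in millimetres.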
  • Figure 8 shows an embodiment flow chart of determining the position of the acetabular cup, which may include the following steps:
  • before the second neural network model makes predictions, it must first be obtained by training on samples. The unlabeled original images (grayscale images corresponding to hip joint X-ray sample images) and labeled images in which each pixel is manually assigned one of two attribute values, named 0 and 1 (0 representing background pixels and 1 representing femoral head pixels), are passed into the convolutional neural network, and convolution and pooling sampling with iterative learning and training yields the second neural network model.
  • the convolutional neural network in this step may be LeNet, AlexNet, ZF-Net, GoogLeNet, VGG, Inception, ResNet, DenseNet, Inception-ResNet, etc.
  • the grayscale image corresponding to the X-ray image of the hip joint is input into the second neural network model, and each pixel value can be predicted.
  • each pixel of the X-ray image is automatically classified into an attribute (0 for background, 1 for femoral head), completing the automatic identification of the femoral head area (i.e., the position of the femoral head), as shown in FIG. 9.
  • Figure 9 is a schematic diagram of identifying the femoral head.
  • the obtained image of the femoral head area is a binary image; its mass distribution is uniform, so the center of mass coincides with the geometric centroid, and the center point coordinates of the femoral head, i.e. the rotation center of the femoral head, can be calculated according to the centroid formula of the plane image.
  • assuming the binary image is B[i, j], the coordinates of the center point of the femoral head can be obtained according to the centroid formula: i_c = Σ_i Σ_j i·B[i, j] / A and j_c = Σ_i Σ_j j·B[i, j] / A, where A = Σ_i Σ_j B[i, j] is the number of femoral head pixels.
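Applied to a binary image, the centroid formula reduces to the mean of the foreground pixel coordinates. A brief sketch (function name illustrative):

```python
import numpy as np

def femoral_head_center(B):
    """Rotation center of the femoral head from its binary image B[i, j].

    Because the binary image has uniform mass distribution, its centroid
    is simply the mean (i, j) position of the pixels with value 1.
    """
    B = np.asarray(B, dtype=float)
    i, j = np.nonzero(B)  # coordinates of femoral head pixels
    return i.mean(), j.mean()
```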
  • the diameter of the femoral head was determined from the area of the femoral head and the center of rotation of the femoral head, and the diameter of the acetabular cup was calculated from the diameter of the femoral head.
  • the diameter of the acetabular cup can be determined by referring to any of the existing calculation methods.
  • the position of the acetabular cup is automatically determined based on the diameter of the femoral head and the position of the center of rotation of the femoral head, as shown in Figure 11.
  • the area delineated by the lines in Figure 11 is the position of the acetabular cup.
  • Figure 12 shows an embodiment flow chart of determining the specification and model of a femoral stem prosthesis, which may include the following steps:
  • Determining the anatomical axis of the medullary canal can include the following steps:
  • before the third neural network model makes predictions, it must be obtained by training on samples. The unlabeled original images (grayscale images corresponding to hip joint X-ray sample images) and labeled images in which each pixel is manually assigned one of three attribute values, named 0, 1, and 2 (0 representing background pixels, 1 representing femoral head pixels, and 2 representing cortical bone pixels), are passed into the convolutional neural network, and convolution and pooling sampling with iterative learning and training yields the third neural network model.
  • the convolutional neural network in this step may be LeNet, AlexNet, ZF-Net, GoogLeNet, VGG, Inception, ResNet, DenseNet, Inception-ResNet, etc.
  • the grayscale image corresponding to the X-ray image of the hip joint is input into the third neural network model, and each pixel value can be predicted.
  • Each pixel value of the X-ray image is automatically classified into an attribute: 0-background, 1-femoral head, 2-cortical bone, to complete the automatic identification of femoral head area and cortical bone area, as shown in Figure 13.
  • FIG. 13 is a schematic diagram of identifying the femoral head region and the cortical bone region.
  • the region from the lesser trochanter to the distal end of the femur can be intercepted, and the medullary canal region is obtained by subtracting the cortical bone region from the femoral region in the image, as shown in FIG. 14.
  • the anatomical axis of the medullary canal is determined by linear fitting on the coordinates of multiple center points in the medullary canal region.
  • each horizontal row intersects the medullary cavity boundary at four coordinates, named A1, A2, B1, B2 from left to right; the midpoint of the two points A1 (X1, Y1) and A2 (X2, Y2) is ((X1+X2)/2, (Y1+Y2)/2), and the midpoint of B1 and B2 can be calculated in the same way.
  • the midpoint coordinates of the medullary canal are calculated row by row; fitting these points to a straight line yields the anatomical axis of the medullary canal (which is also the femoral anatomical axis).
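The row-wise midpoint computation and straight-line fit can be sketched as follows. This is a simplified illustration on a toy binary medullary-canal mask: each row's midpoint is taken between the leftmost and rightmost canal pixels, a simplification of the four-coordinate A1/A2/B1/B2 scheme described above.

```python
import numpy as np

# Hypothetical binary mask of the medullary canal (True = canal pixel); rows = y, cols = x.
canal = np.zeros((6, 9), dtype=bool)
canal[0, 3:5] = True   # the canal drifts one column to the right per row,
canal[1, 4:6] = True   # so the true axis is a straight oblique line
canal[2, 5:7] = True
canal[3, 6:8] = True

ys, xs = [], []
for y in range(canal.shape[0]):
    cols = np.flatnonzero(canal[y])
    if cols.size == 0:
        continue
    ys.append(y)
    xs.append((cols[0] + cols[-1]) / 2.0)   # midpoint of the canal in this row

# Least-squares fit x = a*y + b; (a, b) define the anatomical axis of the canal.
a, b = np.polyfit(ys, xs, 1)
```

Because the axis is near-vertical in a standing X-ray, x is fitted as a function of y; the slope `a` and intercept `b` then define the anatomical axis of the medullary canal.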
  • Determining the central axis of the femoral neck may include the following steps:
  • before making predictions, a fourth neural network model must be obtained by training on samples.
  • the unlabeled original image (the grayscale image corresponding to the X-ray sample image of the hip joint) and a labeled image, in which each pixel carries one of three attribute values, 0, 1, or 2, can be passed into the convolutional neural network:
  • the value 0 represents a background pixel
  • 1 represents a femoral head pixel
  • 2 represents a femoral neck base pixel; convolution and pooling sampling are then performed with iterative learning and training to obtain the fourth neural network model.
  • the convolutional neural network in this step can be LeNet, AlexNet, ZF-Net, GoogLeNet, VGG, Inception, ResNet, DenseNet, Inception-ResNet, etc.
  • the grayscale image corresponding to the X-ray image of the hip joint is input into the fourth neural network model, and each pixel value can be predicted.
  • Figure 16 is a schematic diagram of identifying the femoral head region and the femoral neck base region.
  • the center coordinates of the femoral head and the base of the femoral neck corresponding to the femoral head area and the femoral neck base area are calculated according to the centroid formula of the plane image;
  • the center axis of the femoral neck is determined according to the center coordinates of the femoral head and the center coordinates of the base of the femoral neck.
  • the line connecting the center coordinates of the femoral head and the center coordinates of the base of the femoral neck is the center axis of the femoral neck, as shown in Figure 17.
  • the two diagonally downward line segments in Figure 17 are the central axis of the femoral neck.
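The centroid (center-of-mass) computation of the two regions can be sketched as follows. This is a minimal illustration; the masks are hypothetical stand-ins for the fourth neural network model's predictions.

```python
import numpy as np

def region_centroid(mask):
    """Centroid of a binary region, per the plane-image centroid formula:
    the mean of the member pixels' coordinates."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

# Hypothetical masks predicted by the fourth neural network model.
head = np.zeros((10, 10), dtype=bool)
head[1:4, 1:4] = True          # femoral head region; centroid is (2, 2)
neck_base = np.zeros((10, 10), dtype=bool)
neck_base[6:9, 5:8] = True     # femoral neck base region; centroid is (6, 7)

head_center = region_centroid(head)
base_center = region_centroid(neck_base)
# The line through head_center and base_center is the central axis of the femoral neck.
```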
  • the angle formed by the anatomical axis of the medullary cavity and the central axis of the femoral neck is the femoral neck shaft angle.
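The neck-shaft angle can then be computed as the angle between the two axis directions. The direction vectors below are hypothetical; in practice they would come from the fitted medullary-canal axis and the head-center-to-neck-base line, and the clinically reported angle depends on which of the two supplementary angles is chosen.

```python
import math

def axis_angle_deg(u, v):
    """Angle in degrees between two axis direction vectors."""
    dot = u[0] * v[0] + u[1] * v[1]
    return math.degrees(math.acos(dot / (math.hypot(*u) * math.hypot(*v))))

# Hypothetical direction vectors for the medullary-canal anatomical axis and
# the femoral neck central axis; the angle between them is the neck-shaft angle.
canal_axis = (0.0, 1.0)    # a roughly vertical shaft axis
neck_axis = (1.0, 1.0)     # an oblique neck axis
neck_shaft_angle = axis_angle_deg(canal_axis, neck_axis)
```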
  • S405. Determine the specification and model of the femoral stem prosthesis according to the femoral neck shaft angle, the medullary canal area determined in the process of determining the anatomical axis of the medullary canal, and the center of rotation of the femoral head.
  • based on the femoral neck shaft angle, the medullary canal region, and the position of the center of rotation of the femoral head, a recommendation for the selection of the femoral stem prosthesis model can be given.
  • Femoral stem prosthesis models are distinguished by the shape and size of the femoral stem prosthesis.
  • the offset includes the femoral offset, which refers to the perpendicular distance from the center of rotation of the femoral head to the long axis of the femoral shaft, and the joint offset, which can be the cumulative sum of the femoral offset and the acetabular offset.
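The femoral offset defined above, the perpendicular distance from the femoral head rotation center to the long axis of the femoral shaft, can be computed with the standard point-to-line distance formula. All coordinates below are hypothetical.

```python
import math

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through points a and b,
    via the cross-product form |(b-a) x (p-a)| / |b-a|."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    return num / math.hypot(bx - ax, by - ay)

# Hypothetical values: the shaft axis is given by two points on the fitted line.
head_center = (40.0, 20.0)
shaft_p1, shaft_p2 = (10.0, 0.0), (10.0, 100.0)   # a vertical shaft axis at x = 10
femoral_offset = point_to_line_distance(head_center, shaft_p1, shaft_p2)
# femoral_offset is in millimetres if the coordinates were restored to true size.
```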
  • a total hip joint image processing method based on deep learning and X-ray comprises: inputting the original X-ray image of the target hip joint, for which information is to be determined, into the pre-trained first neural network model, and identifying at least one teardrop key point position and at least one lesser trochanter key point position in the X-ray image of the target hip joint; and determining the leg length difference corresponding to the X-ray image of the target hip joint based on the line determined by the at least one teardrop key point position and the straight line determined by the at least one lesser trochanter key point position.
  • the training process of the first neural network model includes: inputting the original X-ray image of the hip joint together with the marked teardrop key point and lesser trochanter key point positions into the convolutional neural network as a sample set, fitting the input original image with the Gaussian distribution function of the feature points, and performing convolution and pooling sampling with iterative learning and training to obtain the first neural network model.
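The "Gaussian distribution function of the feature points" used during training is commonly realized as a 2D Gaussian heatmap centered on each labeled key point, which the network regresses. The sketch below illustrates this; the crop size, key point location, and sigma are illustrative assumptions, not values from the application.

```python
import numpy as np

def gaussian_heatmap(h, w, cx, cy, sigma=2.0):
    """2D Gaussian training target centered at key point (cx, cy), peak value 1."""
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Hypothetical teardrop key point at (x=12, y=7) in a 32x32 training crop.
target = gaussian_heatmap(32, 32, 12, 7)
# target peaks at exactly the labeled key point and decays smoothly around it.
```

At prediction time, the key point position is read back as the location of the maximum of the predicted heatmap.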
  • the original image is a grayscale image corresponding to an unlabeled X-ray sample image of the hip joint.
  • the optional implementation manner of determining the leg length difference is the same as the implementation manner of the first embodiment, and details are not described herein again.
  • a total hip joint image processing method based on deep learning and X-ray comprises: inputting the original X-ray image of the target hip joint, for which information is to be determined, into a pre-trained second neural network model, and identifying the position of the femoral head; calculating the rotation center of the femoral head based on the centroid formula of the plane image; determining the diameter of the acetabular cup according to the diameter of the femoral head; and determining the position of the acetabular cup according to the rotation center of the femoral head and the diameter of the acetabular cup. The training process of the second neural network model includes: inputting the original X-ray image of the hip joint and the marked pixel attribute values into the convolutional neural network, and performing convolution and pooling sampling with iterative learning and training to obtain the second neural network model, where the pixel attribute values include 0 representing a background pixel and 1 representing a femoral head pixel.
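One plausible way to obtain the femoral head diameter from the segmented region, before deriving the cup diameter from it, is the equivalent-circle formula d = 2·sqrt(A/π). The size catalogue and nearest-size selection below are illustrative assumptions, not the application's rule.

```python
import math

def equivalent_diameter(area_mm2):
    """Diameter of a circle with the given area: d = 2 * sqrt(A / pi)."""
    return 2.0 * math.sqrt(area_mm2 / math.pi)

# Hypothetical segmented femoral head area after scale restoration (mm^2),
# chosen to correspond to a head roughly 50 mm in diameter.
head_area = 1963.5
head_diameter = equivalent_diameter(head_area)

# Hypothetical catalogue of available acetabular cup diameters (mm);
# pick the size closest to the measured head diameter.
cup_sizes = [44, 46, 48, 50, 52, 54, 56]
cup_diameter = min(cup_sizes, key=lambda s: abs(s - head_diameter))
```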
  • the optional implementation manner of determining the position of the acetabular cup is the same as the implementation manner in the first embodiment, which will not be repeated here.
  • a total hip joint image processing method based on deep learning and X-ray comprises: inputting the original X-ray image of the target hip joint, for which information is to be determined, into a pre-trained neural network model, and identifying the femoral head region and the cortical bone region; determining the medullary canal region according to the femoral head region and the cortical bone region; performing a straight-line fit on the coordinates of multiple center points of the medullary canal region to determine the anatomical axis of the medullary canal; inputting the original X-ray image of the target hip joint into a pre-trained neural network model, and identifying the femoral head region and the femoral neck base region; determining the center coordinates of the femoral head and the center coordinates of the femoral neck base based on those regions; determining the central axis of the femoral neck according to the center coordinates of the femoral head and the center coordinates of the femoral neck base; determining the femoral neck shaft angle based on the anatomical axis of the medullary canal and the central axis of the femoral neck; and determining the femoral stem prosthesis model according to the femoral neck shaft angle.
  • the optional implementation manner of determining the model of the femoral stem prosthesis is the same as the implementation manner in the first embodiment, which will not be repeated here.
  • a deep learning and X-ray-based total hip image processing device for implementing the methods described in Figures 1-17 is also provided.
  • the device includes:
  • the scale calibration unit 51 is configured to restore the X-ray image of the hip joint to its true size according to the ratio of the image size of the reference object to its actual size;
  • the leg length difference determining unit 52 is configured to identify the restored X-ray image of the hip joint based on the first neural network model, and determine the leg length difference;
  • the acetabular cup position determining unit 53 is configured to identify the restored X-ray image of the hip joint based on the second neural network model, and determine the acetabular cup position;
  • the femoral stem prosthesis specification determining unit 54 is configured to recognize the restored X-ray image of the hip joint based on the third neural network model and the fourth neural network model, and determine the femoral stem prosthesis specification;
  • the osteotomy line determining unit 55 is configured to determine the position of the osteotomy line according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during the X-ray image recognition process of the hip joint.
  • an X-ray image of the hip joint is obtained, and the X-ray image of the hip joint includes an image of a reference object of known size; according to the ratio of the image size of the reference object to its actual size, the X-ray image of the hip joint is restored in size; based on the deep learning model, the restored X-ray image of the hip joint is identified to determine the leg length difference, the position of the acetabular cup, and the specification and model of the femoral stem prosthesis; and the position of the osteotomy line is determined according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during the image recognition process.
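The size restoration from the reference object can be sketched with a simple scale factor. The reference dimensions below are hypothetical.

```python
# Hypothetical reference object: a marker of known 25.0 mm diameter that
# measures 100 pixels across in the hip X-ray image.
ref_actual_mm = 25.0
ref_image_px = 100.0
mm_per_px = ref_actual_mm / ref_image_px     # scale factor: 0.25 mm per pixel

# Any distance measured on the image can now be restored to its true size,
# e.g. a leg-length distance measured in pixels.
measured_px = 180.0
measured_mm = measured_px * mm_per_px        # 45.0 mm
```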
  • since the X-ray image of the hip joint is restored to its real size, subsequent position recognition based on the actual size is more accurate;
  • the recognition of the X-ray image is based on deep learning models, which ensures the accuracy and speed of the leg length difference, acetabular cup position, femoral stem prosthesis specification, and osteotomy line position determined from the recognition results, thereby providing better preoperative support for total hip replacement surgery.
  • a computer-readable storage medium stores computer instructions, and the computer instructions are configured to cause the computer to execute the method of the above method embodiments.
  • an electronic device includes: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores a computer program executable by the at least one processor.
  • the computer program is executed by the at least one processor, so that the at least one processor executes the deep learning and X-ray-based total hip joint image processing method in the above method embodiments.
  • the modules or steps of the present application can be implemented by a general-purpose computing device; they can be centralized on a single computing device or distributed over a network composed of multiple computing devices; optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; alternatively, they can each be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module.
  • the present application is not limited to any particular combination of hardware and software.

Abstract

A total hip joint image processing method, comprising: acquiring an X-ray image of a hip joint, the X-ray image of the hip joint containing an image of a reference object, and the reference object being a reference object of a known size (S101); according to the ratio of the image size of the reference object to the actual size thereof, restoring the X-ray image of the hip joint in size (S102); on the basis of a deep learning model, recognizing the restored X-ray image of the hip joint, and determining a leg length difference, an acetabular cup position and the size and model of a femoral stem prosthesis (S103); and according to a rotation center of the femoral stem prosthesis and a rotation center of the acetabular cup determined in the process of recognizing the X-ray image of the hip joint, determining an osteotomy line position (S104). The total hip joint image processing method provides better preoperative support for total hip replacement surgeries.

Description

A method and device for processing a total hip joint image
This application claims the priority of the Chinese patent application No. CN202010643713.5, titled "Total Hip Joint Image Processing Method and Device Based on Deep Learning and X-ray", filed with the Chinese Patent Office on July 6, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of data processing, and in particular, to a method and device for processing a total hip joint image.
Background Art
In the medical field, the preoperative planning of total hip replacement surgery mainly includes calculating the required prosthesis model and the position of the osteotomy line. Since preoperative planning plays a very important role in the success rate of total hip replacement surgery, providing accurate preoperative planning is very important. At present, the main preoperative planning method is manual measurement with various tools, which is inefficient and whose accuracy cannot be guaranteed. Therefore, there is an urgent need for a more convenient and accurate preoperative planning method that provides better preoperative support for total hip replacement surgery.
Summary of the Invention
The present application proposes a method and device for processing a total hip joint image, so as to provide a more convenient and accurate way of preoperative planning and better preoperative support for total hip replacement surgery.
In order to achieve the above object, according to the first aspect of the present application, a total hip joint image processing method based on deep learning and X-ray is provided.
The total hip joint image processing method based on deep learning and X-ray according to the present application includes:
obtaining an X-ray image of the hip joint, where the X-ray image of the hip joint includes an image of a reference object of known size; restoring the size of the X-ray image of the hip joint according to the ratio of the image size of the reference object to its actual size; identifying the restored X-ray image of the hip joint based on a deep learning model to obtain an identification result, where the identification result includes the key point positions used to determine the leg length difference, the femoral head position used to determine the acetabular cup position, and the femoral head region, cortical bone region, and femoral neck base region used to determine the specification and model of the femoral stem prosthesis; and determining the position of the osteotomy line according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during the identification of the X-ray image of the hip joint.
Optionally, identifying the restored X-ray image of the hip joint based on the deep learning model to determine the leg length difference includes: converting the X-ray image of the hip joint into a grayscale image; predicting each pixel value of the grayscale image based on the first neural network model to determine the teardrop key points and the lesser trochanter key points; and determining the leg length difference according to the teardrop key points and the lesser trochanter key points.
Optionally, identifying the restored X-ray image of the hip joint based on the deep learning model to determine the acetabular cup position includes: converting the X-ray image of the hip joint into a grayscale image; predicting each pixel value of the grayscale image based on the second neural network model to determine the femoral head position; calculating the rotation center of the femoral head according to the centroid formula of the plane image; estimating the diameter of the acetabular cup according to the diameter of the femoral head; and determining the acetabular cup position according to the rotation center of the femoral head and the diameter of the acetabular cup.
Optionally, identifying the restored X-ray image of the hip joint based on the deep learning model to determine the specification and model of the femoral stem prosthesis includes: converting the X-ray image of the hip joint into a grayscale image; identifying the grayscale image based on the third neural network model to determine the anatomical axis of the medullary canal; identifying the grayscale image based on the fourth neural network model to determine the central axis of the femoral neck; determining the femoral neck shaft angle according to the anatomical axis of the medullary canal and the central axis of the femoral neck; and determining the specification and model of the femoral stem prosthesis according to the femoral neck shaft angle, the medullary canal region determined in the process of determining the anatomical axis of the medullary canal, and the rotation center of the femoral head.
Optionally, determining the position of the osteotomy line according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during the X-ray image recognition of the hip joint includes: aligning the rotation center of the femoral stem prosthesis with the rotation center of the acetabular cup to determine the actual position of the femoral stem prosthesis; and determining the position of the osteotomy line along the coating position of the femoral stem prosthesis.
Optionally, identifying the grayscale image based on the third neural network model to determine the anatomical axis of the medullary canal includes: predicting each pixel value of the grayscale image based on the third neural network model to determine the femoral head region and the cortical bone region; determining the medullary canal region according to the femoral head region and the cortical bone region; and performing a straight-line fit on the coordinates of multiple center points of the medullary canal region to determine the anatomical axis of the medullary canal.
Optionally, identifying the grayscale image based on the fourth neural network model to determine the central axis of the femoral neck includes: predicting each pixel value of the grayscale image based on the fourth neural network model to determine the femoral head region and the femoral neck base region; calculating the center coordinates of the femoral head and of the femoral neck base corresponding to the femoral head region and the femoral neck base region according to the centroid formula of the plane image; and determining the central axis of the femoral neck according to the center coordinates of the femoral head and the center coordinates of the femoral neck base.
Optionally, the deep learning and X-ray-based total hip joint image processing method of the present application further includes: calculating the postoperative leg length difference and the offset according to the position of the osteotomy line.
In order to achieve the above object, according to the second aspect of the present application, a method for processing a total hip joint image is provided, including: inputting the original X-ray image of the target hip joint, for which information is to be determined, into a pre-trained first neural network model, and identifying at least one teardrop key point position and at least one lesser trochanter key point position in the X-ray image of the target hip joint; and determining the leg length difference corresponding to the X-ray image of the target hip joint based on the line determined by the at least one teardrop key point position and the straight line determined by the at least one lesser trochanter key point position; wherein the training process of the first neural network model includes: inputting the original X-ray image of the hip joint together with the marked teardrop key point and lesser trochanter key point positions into the convolutional neural network as a sample set, fitting the input original image with the Gaussian distribution function of the feature points, and performing convolution and pooling sampling with iterative learning and training to obtain the first neural network model, where the original image is a grayscale image corresponding to an unlabeled X-ray sample image of the hip joint.
In order to achieve the above object, according to the third aspect of the present application, a method for processing a total hip joint image is provided, including: inputting the original X-ray image of the target hip joint, for which information is to be determined, into a pre-trained second neural network model, and identifying the position of the femoral head; calculating the rotation center of the femoral head based on the centroid formula of the plane image; determining the diameter of the acetabular cup according to the diameter of the femoral head; and determining the position of the acetabular cup according to the rotation center of the femoral head and the diameter of the acetabular cup; wherein the training process of the second neural network model includes: inputting the original X-ray image of the hip joint and the marked pixel attribute values into the convolutional neural network, and performing convolution and pooling sampling with iterative learning and training to obtain the second neural network model, where the pixel attribute values include 0 representing a background pixel and 1 representing a femoral head pixel.
In order to achieve the above object, according to the fourth aspect of the present application, a method for processing a total hip joint image is provided, including: inputting the original X-ray image of the target hip joint, for which information is to be determined, into a pre-trained neural network model, and identifying the femoral head region and the cortical bone region; determining the medullary canal region according to the femoral head region and the cortical bone region; performing a straight-line fit on the coordinates of multiple center points of the medullary canal region to determine the anatomical axis of the medullary canal; inputting the original X-ray image of the target hip joint into a pre-trained neural network model, and identifying the femoral head region and the femoral neck base region; determining the center coordinates of the femoral head and the center coordinates of the femoral neck base based on the femoral head region and the femoral neck base region; determining the central axis of the femoral neck according to the center coordinates of the femoral head and the center coordinates of the femoral neck base; determining the femoral neck shaft angle based on the anatomical axis of the medullary canal and the central axis of the femoral neck; and determining the femoral stem prosthesis model according to the femoral neck shaft angle; wherein the training process of the neural network model includes: inputting the original X-ray image of the hip joint and the marked pixel attribute values into the convolutional neural network, and performing convolution and pooling sampling with iterative learning and training to obtain a third neural network model, where the pixel attribute values include the value 0 representing a background pixel, the value 1 representing a femoral head pixel, and the value 2 representing a cortical bone pixel; the training process of the neural network model further includes: inputting the original X-ray image of the hip joint and the marked pixel attribute values into the convolutional neural network, and performing convolution and pooling sampling with iterative learning and training to obtain a fourth neural network model, where the pixel attribute values include the value 0 representing a background pixel, the value 1 representing a femoral head pixel, and the value 2 representing a femoral neck base pixel.
In order to achieve the above object, according to the fifth aspect of the present application, a total hip joint image processing device based on deep learning and X-ray is provided.
The total hip joint image processing device based on deep learning and X-ray according to the present application includes: a scale calibration unit, configured to restore the X-ray image of the hip joint to its true size according to the ratio of the image size of the reference object to its actual size; a leg length difference determination unit, configured to identify the restored X-ray image of the hip joint based on the first neural network model and determine the leg length difference; an acetabular cup position determination unit, configured to identify the restored X-ray image of the hip joint based on the second neural network model and determine the acetabular cup position; a femoral stem prosthesis specification determination unit, configured to identify the restored X-ray image of the hip joint based on the third neural network model and the fourth neural network model and determine the femoral stem prosthesis specification; and an osteotomy line determination unit, configured to determine the position of the osteotomy line according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during the X-ray image recognition of the hip joint.
In order to achieve the above object, according to the sixth aspect of the present application, a computer-readable storage medium is provided, where the computer-readable storage medium stores computer instructions configured to cause the computer to execute the deep learning and X-ray-based total hip joint image processing method according to any one of the first aspect.
In order to achieve the above object, according to the seventh aspect of the present application, an electronic device is provided, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor so that the at least one processor executes the deep learning and X-ray-based total hip joint image processing method according to any one of the first aspect.
In the embodiments of the present application, in the method and device for processing a total hip joint image, an X-ray image of the hip joint is obtained, where the X-ray image of the hip joint includes an image of a reference object of known size; the size of the X-ray image of the hip joint is restored according to the ratio of the image size of the reference object to its actual size; the restored X-ray image of the hip joint is identified based on a deep learning model to determine the leg length difference, the acetabular cup position, and the specification and model of the femoral stem prosthesis; and the position of the osteotomy line is determined according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during the X-ray image recognition of the hip joint. It can be seen that, in the preoperative planning method for total hip arthroplasty of this embodiment, the X-ray image of the hip joint is restored to its real size, so subsequent position recognition based on the actual size is more accurate; in addition, the X-ray image recognition is based on deep learning models, which ensures the accuracy and speed of the leg length difference, acetabular cup position, femoral stem prosthesis specification, and osteotomy line position determined from the recognition results, thereby providing better preoperative support for total hip replacement surgery.
Brief Description of the Drawings
The accompanying drawings, which form a part of this application, are provided to facilitate understanding of the application and make its other features, objects, and advantages more apparent. The drawings of the exemplary embodiments of the present application and their descriptions are used to explain the present application and do not constitute an improper limitation of the present application. In the accompanying drawings:
Figure 1 is a flowchart of a total hip joint image processing method based on deep learning and X-ray according to an embodiment of the present application;
Figure 2 is a schematic diagram of an X-ray image of a hip joint according to an embodiment of the present application;
Figures 3-4 are schematic diagrams of determining the actual clinical position of the osteotomy line according to an embodiment of the present application;
Figure 5 is a flowchart of a method for determining the leg length difference according to an embodiment of the present application;
Figure 6 is a schematic diagram of automatically identifying the teardrop key points and the lesser trochanter key points according to an embodiment of the present application;
Figure 7 is a schematic diagram of determining the leg length difference according to an embodiment of the present application;
Figure 8 is a flowchart of a method for determining the acetabular cup position according to an embodiment of the present application;
Figure 9 is a schematic diagram of identifying the femoral head according to an embodiment of the present application;
Figure 10 is a schematic diagram of the rotation center of the femoral head according to an embodiment of the present application;
Figure 11 is a schematic diagram of the acetabular cup position according to an embodiment of the present application;
Figure 12 is a flowchart of a method for determining the specification and model of the femoral stem prosthesis according to an embodiment of the present application;
Figure 13 is a schematic diagram of identifying the femoral head region and the cortical bone region according to an embodiment of the present application;
Figure 14 is a schematic diagram of the medullary canal region according to an embodiment of the present application;
图15是根据本申请实施例提供的一种确定髓腔解剖轴线的示意图;Fig. 15 is a schematic diagram of determining the anatomical axis of the medullary cavity according to an embodiment of the present application;
图16是根据本申请实施例提供的一种识别股骨头区域、股骨颈基底区域的示意图;16 is a schematic diagram of identifying a femoral head region and a femoral neck base region according to an embodiment of the present application;
图17是根据本申请实施例提供的一种股骨颈中心轴线的示意图;17 is a schematic diagram of a central axis of a femoral neck provided according to an embodiment of the present application;
图18是根据本申请实施例提供的一种基于深度学习与X线的全髋关节图像处理装置的组成框图。FIG. 18 is a block diagram of a total hip joint image processing apparatus based on deep learning and X-ray provided according to an embodiment of the present application.
具体实施方式Detailed Description
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分的实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本申请保护的范围。In order to enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some, but not all, of the embodiments of this application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the scope of protection of the present application.
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。The terms "first", "second" and the like in the description and claims of the present application and the above drawings are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It is to be understood that the data so used are interchangeable under appropriate circumstances for the embodiments of the application described herein. Furthermore, the terms "comprising" and "having" and any variations thereof, are intended to cover non-exclusive inclusion, for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to those expressly listed Rather, those steps or units may include other steps or units not expressly listed or inherent to these processes, methods, products or devices.
在不冲突的情况下,本申请中的实施例及实施例中的特征可以相互组合。下面将参考附图并结合实施例来详细说明本申请。The embodiments in this application and the features in the embodiments may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
根据本申请实施例,提供了一种基于深度学习与X线的全髋关节图像处理方法,如图1所示,该方法包括如下的步骤:According to an embodiment of the present application, a method for processing a total hip joint image based on deep learning and X-ray is provided. As shown in FIG. 1 , the method includes the following steps:
S101.获取髋关节的X线图像,髋关节的X线图像中包含参照物的图像。S101. Acquire an X-ray image of the hip joint, and the X-ray image of the hip joint includes an image of a reference object.
髋关节的X线图像是通过对髋关节进行X光片拍摄时获取的,同时在同张照片里拍摄一个已知尺寸的物体,即参照物。因此得到的髋关节的X线图像中包含参照物的图像。如图2所示,为髋关节的X线图像,其中图像的底部中心部位的标示标准尺寸的图像为参照物的图像。在实际应用中,参照物的选取和拍摄时的摆放位置可以适应性地调整,本实施例不作限制。An X-ray image of the hip joint is obtained by taking an X-ray of the hip joint while capturing an object of known size, i.e., the reference object, in the same photograph. The X-ray image of the hip joint thus obtained includes the image of the reference object. As shown in FIG. 2, which is an X-ray image of the hip joint, the image of standard size at the bottom center of the image is the image of the reference object. In practical applications, the selection of the reference object and its placement during imaging can be adjusted adaptively, which is not limited in this embodiment.
S102.根据参照物的图像尺寸及其实际尺寸的比例,将髋关节的X线图像进行尺寸的还原。S102. According to the ratio of the image size of the reference object and its actual size, restore the size of the X-ray image of the hip joint.
参照物的尺寸是已知的,参照物的图像尺寸也可以通过测量得到,根据参照物的图像尺寸及其实际尺寸的比例,可以确定出髋关节的X线图像相对于实际的髋关节尺寸的比例(两者比例相同),然后根据比例将髋关节的X线图像进行真实尺寸的还原。将髋关节的X线图像进行真实尺寸的还原是为和后续的图像识别做基础,使后续根据识别结果确定的腿长差、髋臼杯位置、股骨柄假体的规格型号、截骨线位置与实际的对应位置差距更小,保证识别的准确性。The size of the reference object is known, and the image size of the reference object can be obtained by measurement. According to the ratio between the image size of the reference object and its actual size, the ratio of the X-ray image of the hip joint to the actual hip joint size can be determined (the two ratios are the same), and the X-ray image of the hip joint is then restored to its real size according to this ratio. Restoring the X-ray image of the hip joint to its real size lays the foundation for the subsequent image recognition, so that the leg length difference, acetabular cup position, femoral stem prosthesis specification and model, and osteotomy line position subsequently determined from the recognition results deviate less from their actual counterparts, ensuring the accuracy of recognition.
可选地,还原操作可以为选取已知尺寸物体的关键部位尺寸。通过计算图像中像素间两点距离,和物体实际尺寸进行比例换算,确定比例,然后根据比例对髋关节的X线图像的比例进行修正。Optionally, the restoration operation may select the size of a key part of the object of known size: the distance between two pixel points in the image is calculated and converted against the actual size of the object to determine the ratio, and the scale of the X-ray image of the hip joint is then corrected according to this ratio.
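A minimal Python/NumPy sketch of the scale-restoration step described above. The function name, the nearest-neighbour resampling, and the 30 mm reference measured as 60 px are illustrative assumptions; the source does not specify an implementation:

```python
import numpy as np

def restore_scale(image, ref_pixel_len, ref_actual_mm):
    """Rescale an X-ray image so one pixel corresponds to one millimetre.

    ref_pixel_len: measured length (in pixels) of a key part of the
    reference object in the image; ref_actual_mm: its known real size.
    """
    # millimetres represented by one pixel in the original image
    mm_per_pixel = ref_actual_mm / ref_pixel_len
    h, w = image.shape[:2]
    # target size so that one pixel equals one millimetre
    new_h, new_w = round(h * mm_per_pixel), round(w * mm_per_pixel)
    # nearest-neighbour resampling via index mapping (no external deps)
    rows = np.clip((np.arange(new_h) / mm_per_pixel).astype(int), 0, h - 1)
    cols = np.clip((np.arange(new_w) / mm_per_pixel).astype(int), 0, w - 1)
    return image[np.ix_(rows, cols)], mm_per_pixel

# example: a 30 mm reference object measured as 60 px -> 0.5 mm/px
img = np.zeros((200, 100), dtype=np.uint8)
restored, scale = restore_scale(img, ref_pixel_len=60.0, ref_actual_mm=30.0)
```

Any resampling method (bilinear, etc.) could replace the nearest-neighbour mapping; only the measured ratio matters for the correction.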
S103.基于深度学习模型对还原后的髋关节的X线图像进行识别,得到识别结果,其中,所述识别结果包括用于确定腿长差的关键点位置、用于确定髋臼杯位置的股骨头位置、以及用于确定股骨柄假体的规格型号的股骨头区域、骨皮质区域及股骨颈基底区域。S103. Recognize the restored X-ray image of the hip joint based on deep learning models to obtain a recognition result, wherein the recognition result includes key point positions for determining the leg length difference, the femoral head position for determining the acetabular cup position, and the femoral head region, cortical bone region and femoral neck base region for determining the specification and model of the femoral stem prosthesis.
深度学习模型是神经网络模型,确定腿长差、髋臼杯位置以及股骨柄假体的规格型号可能会用到的模型的输入和输出可能是不同的,但是模型训练的原理是相同的。神经网络模型训练的原理为:将髋关节的X线图像转化为0-255灰度图,然后将图像进行人工选定标注,将图片的每个像素标注划分为几种属性值(根据实际的需求,属性值的种类数不同,比如可以为两种、三种等等)并分别命名,然后将其输入到神经网络模型中进行卷积池化采样一直迭代学习训练得到神经网络模型。The deep learning models are neural network models. The inputs and outputs of the models used to determine the leg length difference, the acetabular cup position and the specification and model of the femoral stem prosthesis may differ, but the principle of model training is the same. The principle of neural network model training is: convert the X-ray image of the hip joint into a 0-255 grayscale image, manually select and annotate the image, divide the label of each pixel of the image into several attribute values (the number of attribute value types varies with the actual requirements, for example two, three, and so on) and name them respectively, and then input them into the neural network model for convolution, pooling and sampling with iterative learning and training to obtain the neural network model.
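As an illustrative sketch of the 0-255 grayscale-conversion step above (Python/NumPy; the linear min-max mapping is an assumption, since the source does not specify how raw X-ray intensities are mapped to 0-255):

```python
import numpy as np

def to_grayscale_0_255(xray):
    """Linearly map raw X-ray intensities to the 0-255 range."""
    xray = xray.astype(np.float64)
    lo, hi = xray.min(), xray.max()
    if hi == lo:                      # flat image: avoid division by zero
        return np.zeros_like(xray, dtype=np.uint8)
    return ((xray - lo) / (hi - lo) * 255).astype(np.uint8)

# raw detector values (e.g. 12-bit) mapped into an 8-bit training image
raw = np.array([[0, 1024], [2048, 4095]], dtype=np.uint16)
gray = to_grayscale_0_255(raw)
```

The resulting 8-bit image, paired with a per-pixel label map (e.g. 0/1 or 0/1/2 attribute values as described above), forms one training sample.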
本步骤中的神经网络模型为分类神经网络,是将图像中的不同的区域进行分类,比如在确定腿长差时,应用神经网络模型主要是为了识别出泪滴和股骨小转子的关键点;再比如在确定髋臼杯位置时,应用神经网络模型主要是为了识别出股骨头区域;再比如在确定股骨柄假体的规格型号时,应用神经网络模型主要是为了识别出股骨头、骨皮质区域以及股骨头、股骨颈基底区域。The neural network model in this step is a classification neural network, which classifies different regions in the image. For example, when determining the leg length difference, the neural network model is mainly used to identify the key points of the teardrops and the lesser trochanter; when determining the acetabular cup position, the neural network model is mainly used to identify the femoral head region; and when determining the specification and model of the femoral stem prosthesis, the neural network model is mainly used to identify the femoral head region and cortical bone region, as well as the femoral head region and femoral neck base region.
本实施例中的神经网络可以为卷积神经网络LeNet、卷积神经网络AlexNet、可视化卷积神经网络ZF-Net、卷积神经网络GoogleNet、卷积神经网络VGG、卷积神经网络Inception、卷积神经网络ResNet、卷积神经网络DenseNet、卷积神经网络Inception ResNet等。The neural network in this embodiment may be the convolutional neural network LeNet, AlexNet, the visualization convolutional network ZF-Net, GoogleNet, VGG, Inception, ResNet, DenseNet, Inception ResNet, etc.
确定腿长差、髋臼杯位置以及股骨柄假体的规格型号是根据图像的识别结果再进行一些坐标、拟合等计算后确定的。The leg length difference, the acetabular cup position and the specification and model of the femoral stem prosthesis are determined after further calculations such as coordinate computation and fitting are performed on the image recognition results.
S104.根据股骨柄假体的旋转中心和在髋关节的X线图像识别过程中确定的髋臼杯的旋转中心确定截骨线位置。S104. Determine the position of the osteotomy line according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during the X-ray image recognition process of the hip joint.
“根据股骨柄假体的旋转中心和在髋关节的X线图像识别过程中确定的髋臼杯的旋转中心确定截骨线位置”可以通过移动股骨柄假体,将股骨柄假体的旋转中心与之前计算的髋臼杯旋转中心位置重合,得到股骨柄假体实际位置。沿股骨柄假体的涂层位置可确定临床中的实际截骨线位置,如图3-4所示。图3为移动股骨柄假体到预定位置,使股骨柄假体的旋转中心与之前计算的髋臼杯旋转中心位置重合,图4为根据股骨柄假体的外形确定截骨线位置。"Determining the osteotomy line position according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during recognition of the X-ray image of the hip joint" may be performed by moving the femoral stem prosthesis so that its rotation center coincides with the previously calculated rotation center position of the acetabular cup, thereby obtaining the actual position of the femoral stem prosthesis. The actual clinical osteotomy line position can then be determined along the coating position of the femoral stem prosthesis, as shown in Figures 3-4. Figure 3 shows moving the femoral stem prosthesis to a predetermined position so that its rotation center coincides with the previously calculated rotation center position of the acetabular cup, and Figure 4 shows determining the osteotomy line position according to the shape of the femoral stem prosthesis.
从以上的描述中,可以看出,本申请实施例的基于深度学习与X线的全髋关节图像处理方法中,获取髋关节的X线图像,所述髋关节的X线图像中包含参照物的图像,所述参照物为已知尺寸的参照物;根据参照物的图像尺寸及其实际尺寸的比例,将髋关节的X线图像进行尺寸的还原;基于深度学习模型对还原后的髋关节的X线图像进行识别,确定腿长差、髋臼杯位置以及股骨柄假体的规格型号;根据股骨柄假体的旋转中心和在髋关节的X线图像识别过程中确定的髋臼杯的旋转中心确定截骨线位置。可以看出,本实施例的全髋关节置换术前规划方式中,将髋关节的X线图像进行了真实尺寸的还原,以实际的尺寸进行后续的位置识别更准确;另外,在对X线图像识别的过程中都是基于深度学习模型进行识别的,保证了根据识别结果确定的腿长差、髋臼杯位置、股骨柄假体的规格型号、截骨线位置的准确性和快速性,从而为全髋关节置换手术提供了更好的术前支持。From the above description, it can be seen that in the deep learning and X-ray based total hip joint image processing method of the embodiments of the present application, an X-ray image of the hip joint is acquired, the X-ray image of the hip joint includes an image of a reference object, and the reference object is of known size; the X-ray image of the hip joint is restored in size according to the ratio between the image size of the reference object and its actual size; the restored X-ray image of the hip joint is recognized based on deep learning models to determine the leg length difference, the acetabular cup position and the specification and model of the femoral stem prosthesis; and the osteotomy line position is determined according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during recognition of the X-ray image of the hip joint. It can be seen that in the preoperative planning approach for total hip arthroplasty of this embodiment, the X-ray image of the hip joint is restored to its real size, so subsequent position recognition performed at the actual size is more accurate; in addition, recognition of the X-ray image is performed entirely with deep learning models, which ensures the accuracy and speed of the leg length difference, acetabular cup position, femoral stem prosthesis specification and model, and osteotomy line position determined from the recognition results, thereby providing better preoperative support for total hip replacement surgery.
作为上述实施例的细化,下面对步骤S103中确定腿长差、髋臼杯位置以及股骨柄假体的规格型号的详细步骤分别进行说明。As a refinement of the above embodiment, the detailed steps of step S103 for determining the leg length difference, the acetabular cup position and the specification and model of the femoral stem prosthesis are described separately below.
图5示出了一种确定腿长差实施例的流程图,可以包括如下步骤:FIG. 5 shows a flowchart of an embodiment of determining the leg length difference, which may include the following steps:
S201.将髋关节的X线图像转化为灰度图。S201. Convert the X-ray image of the hip joint into a grayscale image.
将髋关节的X线图像转化为0-255灰度图。Convert the X-ray image of the hip joint to a 0-255 grayscale image.
S202.基于第一神经网络模型对灰度图的每个像素值进行预测,确定泪滴关键点以及股骨小转子关键点位置。S202. Predict the value of each pixel of the grayscale image based on the first neural network model, and determine the position of the key point of the teardrop and the key point of the lesser trochanter of the femur.
在进行预测之前,首先要根据样本训练得到第一神经网络模型。可以将未标记的原始图像(髋关节的X线样本图像对应的灰度图)以及人工识别标记的泪滴关键点及股骨小转子关键点位置的标记传入到卷积神经网络中,将输入的原始图像与特征点的高斯分布函数进行拟合,进行卷积池化采样一直迭代学习训练得到第一神经网络模型。需要说明的是,本步骤中的卷积神经网络可以为卷积神经网络LeNet、卷积神经网络AlexNet、可视化卷积神经网络ZF-Net、卷积神经网络GoogleNet、卷积神经网络VGG、卷积神经网络Inception、卷积神经网络ResNet、卷积神经网络DenseNet、卷积神经网络Inception ResNet等。Before prediction, the first neural network model is first obtained through training on samples. The unlabeled original images (grayscale images corresponding to X-ray sample images of the hip joint), together with the manually identified and marked positions of the teardrop key points and the lesser trochanter key points, can be passed into a convolutional neural network; the input original images are fitted to the Gaussian distribution functions of the feature points, and convolution, pooling and sampling are performed with iterative learning and training to obtain the first neural network model. It should be noted that the convolutional neural network in this step may be LeNet, AlexNet, the visualization convolutional network ZF-Net, GoogleNet, VGG, Inception, ResNet, DenseNet, Inception ResNet, etc.
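The "fit the input original image to the Gaussian distribution function of the feature points" step is commonly implemented by training the network to regress a Gaussian heatmap centred on each landmark. A minimal sketch of generating such a label map (the function name and the σ value are illustrative assumptions, not specified by the source):

```python
import numpy as np

def gaussian_heatmap(shape, keypoint, sigma=3.0):
    """Gaussian label map for one landmark (teardrop / lesser trochanter).

    The network learns to output this map; the predicted key point is
    then recovered as the location of the map's maximum.
    """
    h, w = shape
    ky, kx = keypoint                      # (row, col) of the landmark
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (ys - ky) ** 2 + (xs - kx) ** 2   # squared distance to landmark
    return np.exp(-d2 / (2.0 * sigma ** 2))  # peak value 1.0 at keypoint

hm = gaussian_heatmap((64, 64), keypoint=(20, 40), sigma=3.0)
peak = np.unravel_index(np.argmax(hm), hm.shape)
```

At inference time, `np.argmax` over the predicted map recovers the key-point position, as in the last two lines.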
得到第一神经网络模型后,将髋关节的X线图像对应的灰度图输入到第一神经网络模型中,可以自动识别出泪滴关键点以及股骨小转子关键点位置。如图6所示,图6为自动识别出泪滴关键点以及股骨小转子关键点位置的示意图。After the first neural network model is obtained, the grayscale image corresponding to the X-ray image of the hip joint is input into the first neural network model, and the key points of the teardrop and the lesser trochanter can be automatically identified. As shown in FIG. 6 , FIG. 6 is a schematic diagram of automatically identifying the key points of the tear drop and the key points of the lesser trochanter of the femur.
S203.根据泪滴关键点以及股骨小转子关键点位置,确定腿长差。S203. Determine the leg length difference according to the key point of the tear drop and the position of the key point of the lesser trochanter.
如图7所示,其中水平的直线是由两个泪滴关键点确定的,是两个泪滴关键点的连线,其中两条垂直的线段是由股骨小转子关键点和水平直线确定的,两条垂直线段分别记作A和B,A和B的差值为腿长差。As shown in Figure 7, the horizontal straight line is the line connecting the two teardrop key points, and the two vertical line segments are determined by the lesser trochanter key points and the horizontal line. The two vertical segments are denoted A and B respectively, and the difference between A and B is the leg length difference.
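The A/B measurement above can be sketched as point-to-line distances (Python/NumPy; the function name and the sample coordinates are illustrative assumptions):

```python
import numpy as np

def leg_length_difference(teardrop_1, teardrop_2, trochanter_1, trochanter_2):
    """Perpendicular distances A and B from the two lesser-trochanter
    key points to the line through the two teardrop key points, then
    A - B. All points are (x, y) in the restored (millimetre) image."""
    p1 = np.asarray(teardrop_1, float)
    d = np.asarray(teardrop_2, float) - p1

    def dist(p):
        v = np.asarray(p, float) - p1
        # |2-D cross product| / line length = point-to-line distance
        return abs(d[0] * v[1] - d[1] * v[0]) / np.linalg.norm(d)

    a, b = dist(trochanter_1), dist(trochanter_2)
    return a - b

# teardrop line along y = 0; trochanters 35 mm and 30 mm below it
diff = leg_length_difference((0, 0), (100, 0), (20, 35), (80, 30))
```

Using the distance to the inter-teardrop line (rather than a raw y-difference) keeps the measurement valid even if the pelvis is slightly tilted in the radiograph.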
图8示出了一种确定髋臼杯位置的实施例流程图,可以包括如下步骤:Figure 8 shows an embodiment flow chart of determining the position of the acetabular cup, which may include the following steps:
S301.将髋关节的X线图像转化为灰度图。S301. Convert the X-ray image of the hip joint into a grayscale image.
将髋关节的X线图像转化为0-255灰度图。Convert the X-ray image of the hip joint to a 0-255 grayscale image.
S302.基于第二神经网络模型对灰度图的每个像素值进行预测,确定股骨头位置。S302. Predict the value of each pixel of the grayscale image based on the second neural network model, and determine the position of the femoral head.
在进行预测之前,首先要根据样本训练得到第二神经网络模型。可以将未标记的原始图像(髋关节的X线样本图像对应的灰度图)以及人工识别标记像素属性值的标记图像传入到卷积神经网络中,包括两种属性值,分别命名0、1。数值0代表背景像素,1代表股骨头像素;传入到卷积神经网络中,进行卷积池化采样一直迭代学习训练得到第二神经网络模型。需要说明的是,本步骤中的卷积神经网络可以为卷积神经网络LeNet、卷积神经网络AlexNet、可视化卷积神经网络ZF-Net、卷积神经网络GoogleNet、卷积神经网络VGG、卷积神经网络Inception、卷积神经网络ResNet、卷积神经网络DenseNet、卷积神经网络Inception ResNet等。Before prediction, the second neural network model is first obtained through training on samples. The unlabeled original images (grayscale images corresponding to X-ray sample images of the hip joint) and labeled images with manually identified pixel attribute values can be passed into a convolutional neural network; there are two attribute values, named 0 and 1, where 0 represents background pixels and 1 represents femoral head pixels. Convolution, pooling and sampling are performed with iterative learning and training to obtain the second neural network model. It should be noted that the convolutional neural network in this step may be LeNet, AlexNet, the visualization convolutional network ZF-Net, GoogleNet, VGG, Inception, ResNet, DenseNet, Inception ResNet, etc.
得到第二神经网络模型后,将髋关节的X线图像对应的灰度图输入到第二神经网络模型中,可以对每个像素值进行预测。自动将X线图像的每个像素值归为一个属性中:0-背景,1-股骨头,完成股骨头区域(即股骨头位置)的自动识别,如图9所示。图9为识别股骨头的示意图。After the second neural network model is obtained, the grayscale image corresponding to the X-ray image of the hip joint is input into the second neural network model, and each pixel value can be predicted. Each pixel value of the X-ray image is automatically classified into an attribute: 0-background, 1-femoral head, and the automatic identification of the femoral head area (ie, the position of the femoral head) is completed, as shown in Figure 9. Figure 9 is a schematic diagram of identifying the femoral head.
S303.根据平面图像的质心公式计算股骨头旋转中心。S303. Calculate the center of rotation of the femoral head according to the centroid formula of the plane image.
因为得到的股骨头区域的图像是二值图像,其质量分布是均匀的,所以质心和形心重合,根据平面图像的质心公式可以计算得到股骨头的中心点坐标,即股骨头旋转中心。假设二值图像为B[i,j],则可根据下列公式求得股骨头的中心点坐标:Because the obtained image of the femoral head area is a binary image, its mass distribution is uniform, so the centroid and centroid coincide, and the center point coordinates of the femoral head can be calculated according to the centroid formula of the plane image, that is, the rotation center of the femoral head. Assuming that the binary image is B[i,j], the coordinates of the center point of the femoral head can be obtained according to the following formula:
x̄ = (Σᵢ Σⱼ j·B[i,j]) / A ,  ȳ = (Σᵢ Σⱼ i·B[i,j]) / A

其中/where:

A = Σᵢ Σⱼ B[i,j]

即股骨头区域的前景像素总数(面积)。A is the total number of foreground pixels (the area) of the femoral head region.

此处得到的是股骨头的中心点的像素坐标,需要将像素坐标转换为图像坐标。设图像平面坐标中心坐标为(c_x, c_y),则像素坐标(x̄, ȳ)到图像坐标(x′, y′)的变换公式为:What is obtained here is the pixel coordinates of the center point of the femoral head, which need to be converted into image coordinates. Let the center of the image plane coordinates be (c_x, c_y); then the transformation from the pixel coordinates (x̄, ȳ) to the image coordinates (x′, y′) is:

x′ = (x̄ − c_x)·Δx ,  y′ = (ȳ − c_y)·Δy

其中,Δx、Δy分别为图像阵列的行列间距。最后通过输出显示模块,得到股骨头旋转中心的位置,如图10所示。图10中圆圈的中心点为股骨头旋转中心。Here, Δx and Δy are the column and row spacing of the image array, respectively. Finally, the position of the rotation center of the femoral head is obtained through the output display module, as shown in Figure 10. The center point of the circle in Figure 10 is the rotation center of the femoral head.
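The centroid computation and pixel-to-image coordinate conversion of step S303 can be sketched as follows (Python/NumPy; the array-centred origin and unit spacing defaults are assumptions, since the source does not fix these conventions):

```python
import numpy as np

def head_center(mask, spacing=(1.0, 1.0)):
    """Center of the femoral head from a binary mask B[i, j].

    Because the mask is binary its mass distribution is uniform, so the
    center of mass equals the centroid: the mean of the foreground
    pixel coordinates. The pixel-coordinate centroid is then shifted to
    image coordinates centred on the array and scaled by the row and
    column spacing."""
    ii, jj = np.nonzero(mask)            # foreground pixel indices
    y_bar, x_bar = ii.mean(), jj.mean()  # centroid in pixel coordinates
    h, w = mask.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0   # image-plane centre
    dy, dx = spacing                     # row / column spacing
    return (x_bar - cx) * dx, (y_bar - cy) * dy

mask = np.zeros((9, 9), dtype=np.uint8)
mask[3:6, 5:8] = 1                       # 3x3 "head" centred at row 4, col 6
x_img, y_img = head_center(mask)
```

With unit spacing the example block, centred two columns right of the array centre, yields image coordinates (2.0, 0.0).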
S304.根据股骨头的直径推算髋臼杯直径。S304. Calculate the diameter of the acetabular cup according to the diameter of the femoral head.
根据股骨头区域和股骨头旋转中心确定股骨头的直径,根据股骨头的直径推算髋臼杯直径。根据股骨头的直径推算髋臼杯直径时,可以参考现有的任意一种推算方式确定髋臼杯直径。The diameter of the femoral head is determined from the femoral head region and the femoral head rotation center, and the acetabular cup diameter is estimated from the diameter of the femoral head. When estimating the acetabular cup diameter from the diameter of the femoral head, any existing estimation method may be used to determine the acetabular cup diameter.
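The source deliberately leaves the estimation rule open; the following is one illustrative rule only (the 4 mm margin and 2 mm size step are assumptions introduced here, not from the source):

```python
import math

def estimate_cup_diameter(head_diameter_mm, margin_mm=4.0, step_mm=2.0):
    """Illustrative sizing rule: add a fixed margin to the measured
    femoral-head diameter and round up to the nearest available cup
    size, assuming cup sizes come in fixed-millimetre steps."""
    return math.ceil((head_diameter_mm + margin_mm) / step_mm) * step_mm

cup_mm = estimate_cup_diameter(46.0)   # e.g. a 46 mm head under this rule
```

Any clinically validated head-to-cup rule can be substituted for this placeholder without changing the surrounding pipeline.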
S305.根据股骨头旋转中心和髋臼杯直径确定髋臼杯位置。S305. Determine the acetabular cup position according to the femoral head rotation center and the acetabular cup diameter.
根据股骨头的直径以及股骨头旋转中心位置自动确定髋臼杯位置,如图11所示。图11中线条勾画的区域为髋臼杯位置。The position of the acetabular cup is automatically determined based on the diameter of the femoral head and the position of the center of rotation of the femoral head, as shown in Figure 11. The area delineated by the lines in Figure 11 is the position of the acetabular cup.
图12示出了一种确定股骨柄假体的规格型号的实施例流程图,可以包括如下步骤:Figure 12 shows an embodiment flow chart of determining the specification and model of a femoral stem prosthesis, which may include the following steps:
S401.将髋关节的X线图像转化为灰度图。S401. Convert the X-ray image of the hip joint into a grayscale image.
将髋关节的X线图像转化为0-255灰度图。Convert the X-ray image of the hip joint to a 0-255 grayscale image.
S402.基于第三神经网络模型对灰度图进行识别,确定髓腔解剖轴线。S402. Identify the grayscale image based on the third neural network model, and determine the anatomical axis of the medullary cavity.
确定髓腔解剖轴线可以包括如下步骤:Determining the anatomical axis of the medullary canal can include the following steps:
首先,基于第三神经网络模型对灰度图的每个像素值进行预测,确定股骨头区域和骨皮质区域;First, predict each pixel value of the grayscale image based on the third neural network model to determine the femoral head area and the bone cortex area;
在进行预测之前,首先要根据样本训练得到第三神经网络模型。可以将未标记的原始图像(髋关节的X线样本图像对应的灰度图)以及人工识别标记像素属性值的标记图像传入到卷积神经网络中,包括三种属性值,分别命名0、1、2。数值0代表背景像素,1代表股骨头像素,2代表骨皮质;传入到卷积神经网络中,进行卷积池化采样一直迭代学习训练得到第三神经网络模型。需要说明的是,本步骤中的卷积神经网络可以为卷积神经网络LeNet、卷积神经网络AlexNet、可视化卷积神经网络ZF-Net、卷积神经网络GoogleNet、卷积神经网络VGG、卷积神经网络Inception、卷积神经网络ResNet、卷积神经网络DenseNet、卷积神经网络Inception ResNet等。Before prediction, the third neural network model is first obtained through training on samples. The unlabeled original images (grayscale images corresponding to X-ray sample images of the hip joint) and labeled images with manually identified pixel attribute values can be passed into a convolutional neural network; there are three attribute values, named 0, 1 and 2, where 0 represents background pixels, 1 represents femoral head pixels, and 2 represents cortical bone. Convolution, pooling and sampling are performed with iterative learning and training to obtain the third neural network model. It should be noted that the convolutional neural network in this step may be LeNet, AlexNet, the visualization convolutional network ZF-Net, GoogleNet, VGG, Inception, ResNet, DenseNet, Inception ResNet, etc.
得到第三神经网络模型后,将髋关节的X线图像对应的灰度图输入到第三神经网络模型中,可以对每个像素值进行预测。自动将X线图像的每个像素值归为一个属性中:0-背景,1-股骨头,2-骨皮质,完成股骨头区域、骨皮质区域的自动识别,如图13所示。图13为识别股骨头区域、骨皮质区域的示意图。After the third neural network model is obtained, the grayscale image corresponding to the X-ray image of the hip joint is input into the third neural network model, and each pixel value can be predicted. Each pixel value of the X-ray image is automatically classified into an attribute: 0-background, 1-femoral head, 2-cortical bone, to complete the automatic identification of femoral head area and cortical bone area, as shown in Figure 13. 13 is a schematic diagram of identifying the femoral head region and the cortical bone region.
其次,根据股骨头区域、骨皮质区域确定髓腔区域;Secondly, determine the medullary cavity area according to the femoral head area and the bone cortex area;
可以截取小转子结束处直到股骨末端部位,使用图像中股骨区域减去骨皮质区域得到的是髓腔区域,如图14所示。The region from the end of the lesser trochanter down to the distal end of the femur can be cropped, and subtracting the cortical bone region from the femoral region in the image yields the medullary cavity region, as shown in Figure 14.
最后,对髓腔区域多个中心点坐标进行直线拟合确定髓腔解剖轴线。Finally, the anatomical axis of the medullary canal is determined by linear fitting on the coordinates of multiple center points in the medullary canal region.
如图15所示,从小转子结束位置以下,每横行与髓腔的交点为四个坐标,从左至右分别命名为A1,A2,B1,B2;依据两点可以求出中点,A1(X1,Y1)、A2(X2,Y2)的中点坐标为((X1+X2)/2, (Y1+Y2)/2),B1、B2同理可算得。每行依次算得髓腔的中点坐标,将这些点拟合成一条直线即为髓腔解剖轴线(也是股骨解剖轴线)。As shown in Figure 15, below the end position of the lesser trochanter, each horizontal row intersects the medullary cavity at four coordinates, named A1, A2, B1, B2 from left to right. The midpoint of two points can be obtained directly: the midpoint of A1(X1, Y1) and A2(X2, Y2) is ((X1+X2)/2, (Y1+Y2)/2), and that of B1 and B2 is calculated in the same way. The midpoint coordinates of the medullary cavity are calculated row by row, and fitting these points to a straight line gives the anatomical axis of the medullary cavity (which is also the anatomical axis of the femur).
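A sketch of the row-by-row midpoint computation and straight-line fit (Python/NumPy). Taking the row's canal centre as the midpoint of the A1-A2 and B1-B2 midpoints is an assumption about how the per-row midpoints are combined, and the line is parameterised as x = k·y + c because the axis is near-vertical:

```python
import numpy as np

def medullary_axis(rows):
    """Fit the medullary anatomical axis through per-row midpoints.

    Each row supplies the four intersections A1, A2, B1, B2 (left to
    right, each an (x, y) pair). The midpoints of A1-A2 and B1-B2 are
    computed as in the text, combined into one centre per row, and a
    least-squares line x = k*y + c is fitted through the centres."""
    centres = []
    for a1, a2, b1, b2 in rows:
        ma = ((a1[0] + a2[0]) / 2.0, (a1[1] + a2[1]) / 2.0)
        mb = ((b1[0] + b2[0]) / 2.0, (b1[1] + b2[1]) / 2.0)
        centres.append(((ma[0] + mb[0]) / 2.0, (ma[1] + mb[1]) / 2.0))
    centres = np.asarray(centres, float)
    k, c = np.polyfit(centres[:, 1], centres[:, 0], 1)  # least squares
    return k, c

# synthetic rows: four intersections per row, centres all at x = 15
rows = [((0, y), (10, y), (20, y), (30, y)) for y in range(5)]
k, c = medullary_axis(rows)
```

For the synthetic rows above the fitted axis is vertical through x = 15 (slope k ≈ 0, intercept c ≈ 15).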
S403.基于第四神经网络模型对灰度图进行识别,确定股骨颈中心轴线。S403. Identify the grayscale image based on the fourth neural network model, and determine the central axis of the femoral neck.
确定股骨颈中心轴线可以包括如下步骤:Determining the central axis of the femoral neck may include the following steps:
首先,基于第四神经网络模型对灰度图的每个像素值进行预测,确定股骨头区域和股骨颈基底区域;First, predict each pixel value of the grayscale image based on the fourth neural network model to determine the femoral head region and the femoral neck base region;
在进行预测之前,首先要根据样本训练得到第四神经网络模型。可以将未标记的原始图像(髋关节的X线样本图像对应的灰度图)以及人工识别标记像素属性值的标记图像传入到卷积神经网络中,包括三种属性值,分别命名0、1、2。数值0代表背景像素,1代表股骨头像素,2代表股骨颈基底像素;传入到卷积神经网络中,进行卷积池化采样一直迭代学习训练得到第四神经网络模型。需要说明的是,本步骤中的卷积神经网络可以为卷积神经网络LeNet、卷积神经网络AlexNet、可视化卷积神经网络ZF-Net、卷积神经网络GoogleNet、卷积神经网络VGG、卷积神经网络Inception、卷积神经网络ResNet、卷积神经网络DenseNet、卷积神经网络Inception ResNet等。Before prediction, the fourth neural network model is first obtained through training on samples. The unlabeled original images (grayscale images corresponding to X-ray sample images of the hip joint) and labeled images with manually identified pixel attribute values can be passed into a convolutional neural network; there are three attribute values, named 0, 1 and 2, where 0 represents background pixels, 1 represents femoral head pixels, and 2 represents femoral neck base pixels. Convolution, pooling and sampling are performed with iterative learning and training to obtain the fourth neural network model. It should be noted that the convolutional neural network in this step may be LeNet, AlexNet, the visualization convolutional network ZF-Net, GoogleNet, VGG, Inception, ResNet, DenseNet, Inception ResNet, etc.
在得到第四神经网络模型后,将髋关节的X线图像对应的灰度图输入到第四神经网络模型中,可以对每个像素值进行预测。自动将X线图像的每个像素值归为一个属性中:0-背景,1-股骨头,2-股骨颈基底像素,完成股骨头区域、股骨颈基底区域的自动识别,如图16所示。图16为识别股骨头区域、股骨颈基底区域的示意图。After the fourth neural network model is obtained, the grayscale image corresponding to the X-ray image of the hip joint is input into the fourth neural network model, and each pixel value can be predicted. Automatically classify each pixel value of the X-ray image into an attribute: 0-background, 1-femoral head, 2-femoral neck base pixel, complete the automatic identification of femoral head area and femoral neck base area, as shown in Figure 16 . Fig. 16 is a schematic diagram of identifying the femoral head region and the femoral neck base region.
其次,根据平面图像的质心公式计算股骨头区域和股骨颈基底区域对应的股骨头中心坐标和股骨颈基底中心坐标;Secondly, the center coordinates of the femoral head and the base of the femoral neck corresponding to the femoral head area and the femoral neck base area are calculated according to the centroid formula of the plane image;
股骨头中心坐标和股骨颈基底中心坐标的计算方式类似,都可以参见步骤S303中计算股骨头中心点坐标的实现方式,此处不再赘述。The center coordinates of the femoral head and of the femoral neck base are calculated in a similar way; for both, reference may be made to the implementation of calculating the femoral head center point coordinates in step S303, which is not repeated here.
最后,根据股骨头中心坐标和股骨颈基底中心坐标确定股骨颈中心轴线。Finally, the center axis of the femoral neck is determined according to the center coordinates of the femoral head and the center coordinates of the base of the femoral neck.
股骨头中心坐标和股骨颈基底中心坐标连线即为股骨颈中心轴线,如图17所示。图17中两条斜向下的线段为股骨颈中心轴线。The line connecting the center coordinates of the femoral head and the center coordinates of the base of the femoral neck is the center axis of the femoral neck, as shown in Figure 17. The two diagonally downward line segments in Figure 17 are the central axis of the femoral neck.
S404.根据髓腔解剖轴线和股骨颈中心轴线确定股骨颈干角。S404. Determine the femoral neck shaft angle according to the anatomical axis of the medullary cavity and the central axis of the femoral neck.
髓腔解剖轴线和股骨颈中心轴线形成的夹角为股骨颈干角。The angle formed by the anatomical axis of the medullary cavity and the central axis of the femoral neck is the femoral neck shaft angle.
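The angle between the two axes can be computed from their direction vectors (Python/NumPy sketch; reporting the obtuse angle reflects the usual clinical convention for the neck-shaft angle, which is an assumption here rather than a statement from the source):

```python
import numpy as np

def neck_shaft_angle(shaft_dir, neck_dir):
    """Angle (degrees) between the medullary anatomical axis and the
    femoral-neck central axis, from their direction vectors. Results
    below 90 degrees are reflected to the obtuse angle, matching the
    conventional clinical reporting range (roughly 120-140 degrees)."""
    u = np.asarray(shaft_dir, float)
    v = np.asarray(neck_dir, float)
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    ang = np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))
    return 180.0 - ang if ang < 90.0 else ang

# vertical shaft axis vs. a neck axis tilted 45 degrees from it
angle = neck_shaft_angle((0, 1), (1, 1))
```

Clamping the dot product into [-1, 1] guards against floating-point values marginally outside the domain of `arccos`.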
S405. Determine the specification and model of the femoral stem prosthesis from the femoral neck-shaft angle, the medullary cavity region determined while deriving the anatomical axis, and the femoral head rotation center.
A recommendation for the femoral stem prosthesis model can be given based on the value of the femoral neck-shaft angle, combined with the shape of the medullary cavity and the position of the femoral head rotation center. Femoral stem prosthesis models are distinguished by features such as the shape and size of the stem.
As a supplement to the embodiment of Fig. 1, after the position of the osteotomy line is determined, the method further includes calculating the postoperative leg length difference and the offset according to the osteotomy line position. The offset includes the femoral offset, which is the perpendicular distance from the femoral head rotation center to the long axis of the femoral shaft, and may also include the combined offset, which may be the cumulative sum of the femoral and acetabular offsets.
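The femoral offset defined above — the perpendicular distance from the femoral head rotation center to the long axis of the femoral shaft — can be sketched as a point-to-line distance (illustrative coordinates in image units; not from the source):

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line
    through points a and b (all 2-D (x, y) tuples)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    # |cross(b - a, p - a)| / |b - a|
    return abs(dx * (py - ay) - dy * (px - ax)) / math.hypot(dx, dy)

# Femoral head rotation center 40 units lateral to a vertical shaft axis
offset = point_line_distance((40, 100), (0, 0), (0, 200))
```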
It should be noted that the steps shown in the flowcharts of the accompanying drawings may be executed in a computer system, such as a set of computer-executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that given here.
According to an embodiment of the present application, a total hip joint image processing method based on deep learning and X-ray is further provided, including: inputting an original X-ray image of a target hip joint, for which information is to be determined, into a pre-trained first neural network model, and identifying at least one teardrop key point position and at least one lesser trochanter key point position in the target hip X-ray image; and determining the leg length difference corresponding to the target hip X-ray image based on the connecting line determined by the at least one teardrop key point position and the straight line determined by the at least one lesser trochanter key point position. The training process of the first neural network model includes: inputting original hip X-ray images, together with the marked teardrop key point and lesser trochanter key point positions, into a convolutional neural network as a sample set; fitting the input original images with a Gaussian distribution function of the feature points; and performing convolution and pooling sampling with iterative learning and training to obtain the first neural network model, where the original images are grayscale images corresponding to unlabeled X-ray sample images of the hip joint.
In this embodiment, the optional implementation of determining the leg length difference is the same as that of the first embodiment and is not repeated here.
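One common geometric reading of the step above — measuring each lesser trochanter key point against the line connecting the teardrop key points and differencing the two sides — can be sketched as follows (a hedged illustration; the coordinates and function names are ours, and the source does not fix the exact measurement convention):

```python
import math

def distance_to_line(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    return (dx * (p[1] - a[1]) - dy * (p[0] - a[0])) / math.hypot(dx, dy)

def leg_length_difference(teardrop_l, teardrop_r, troch_l, troch_r):
    """Difference of the two lesser-trochanter distances to the
    teardrop connecting line; the sign indicates which side is longer."""
    d_l = distance_to_line(troch_l, teardrop_l, teardrop_r)
    d_r = distance_to_line(troch_r, teardrop_l, teardrop_r)
    return d_l - d_r

# Horizontal teardrop line; left trochanter 5 units further from it
diff = leg_length_difference((0, 0), (100, 0), (20, 60), (80, 55))
```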
According to an embodiment of the present application, a total hip joint image processing method based on deep learning and X-ray is further provided, including: inputting an original X-ray image of a target hip joint, for which information is to be determined, into a pre-trained second neural network model, and identifying the femoral head position; calculating the rotation center of the femoral head based on the centroid formula for a planar image; determining the acetabular cup diameter according to the diameter of the femoral head; and determining the acetabular cup position according to the femoral head rotation center and the acetabular cup diameter. The training process of the second neural network model includes: inputting original X-ray images of the hip joint, together with marked pixel attribute values, into a convolutional neural network, and performing convolution and pooling sampling with iterative learning and training to obtain the second neural network model, where the pixel attribute values include 0 representing background pixels and 1 representing femoral head pixels.
In this embodiment, the optional implementation of determining the acetabular cup position is the same as that of the first embodiment and is not repeated here.
According to an embodiment of the present application, a total hip joint image processing method based on deep learning and X-ray is further provided, including: inputting an original X-ray image of a target hip joint, for which information is to be determined, into a pre-trained neural network model, and identifying the femoral head region and the cortical bone region; determining the medullary cavity region according to the femoral head region and the cortical bone region, and fitting a straight line through the coordinates of multiple center points of the medullary cavity region to determine the anatomical axis of the medullary cavity; inputting the original X-ray image of the target hip joint into a pre-trained neural network model, and identifying the femoral head region and the femoral neck base region; determining the femoral head center coordinates and the femoral neck base center coordinates based on the femoral head region and the femoral neck base region, and determining the central axis of the femoral neck according to those coordinates; and determining the femoral neck-shaft angle based on the anatomical axis of the medullary cavity and the central axis of the femoral neck, and determining the femoral stem prosthesis model according to the femoral neck-shaft angle. The training process of the neural network models includes: inputting original X-ray images of the hip joint, together with marked pixel attribute values, into a convolutional neural network, and performing convolution and pooling sampling with iterative learning and training to obtain a third neural network model, where the pixel attribute values include 0 representing background pixels, 1 representing femoral head pixels, and 2 representing cortical bone pixels. The training process further includes: inputting original X-ray images of the hip joint, together with marked pixel attribute values, into a convolutional neural network, and performing convolution and pooling sampling with iterative learning and training to obtain a fourth neural network model, where the pixel attribute values include 0 representing background pixels, 1 representing femoral head pixels, and 2 representing femoral neck base pixels.
In this embodiment, the optional implementation of determining the femoral stem prosthesis model is the same as that of the first embodiment and is not repeated here.
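The straight-line fitting of the medullary cavity center points mentioned above can be sketched with an ordinary least-squares fit (a minimal illustration; the source does not name a particular fitting method, and the sample coordinates are invented):

```python
import numpy as np

def fit_axis(centers: np.ndarray):
    """Fit x = k*y + b through medullary-canal center points given as
    (row, col) pairs. x is fitted as a function of y because the
    anatomical axis is near-vertical on an AP radiograph, where a
    y-on-x fit would be ill-conditioned."""
    ys, xs = centers[:, 0], centers[:, 1]
    k, b = np.polyfit(ys, xs, 1)  # least-squares line fit
    return k, b

# Four canal center points sampled at successive image rows
centers = np.array([[10, 50.2], [20, 50.9], [30, 52.1], [40, 52.8]])
k, b = fit_axis(centers)
```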
According to an embodiment of the present application, a total hip joint image processing apparatus based on deep learning and X-ray for implementing the methods described with reference to Figs. 1-17 is further provided. As shown in Fig. 18, the apparatus includes:
a scale calibration unit 51, configured to restore the X-ray image of the hip joint to its true size according to the ratio of the image size of a reference object to its actual size;
a leg length difference determining unit 52, configured to recognize the restored X-ray image of the hip joint based on the first neural network model and determine the leg length difference;
an acetabular cup position determining unit 53, configured to recognize the restored X-ray image of the hip joint based on the second neural network model and determine the acetabular cup position;
a femoral stem prosthesis specification determining unit 54, configured to recognize the restored X-ray image of the hip joint based on the third and fourth neural network models and determine the femoral stem prosthesis specification; and
an osteotomy line determining unit 55, configured to determine the position of the osteotomy line according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during recognition of the hip X-ray image.
For the process by which each unit and module of the apparatus of this embodiment implements its functions, reference may be made to the relevant description in the method embodiments, which is not repeated here.
From the above description, it can be seen that the total hip joint image processing apparatus based on deep learning and X-ray of this embodiment acquires an X-ray image of the hip joint containing the image of a reference object of known size; restores the size of the X-ray image according to the ratio of the image size of the reference object to its actual size; recognizes the restored X-ray image based on deep learning models to determine the leg length difference, the acetabular cup position, and the specification and model of the femoral stem prosthesis; and determines the osteotomy line position according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during recognition. In this preoperative planning approach for total hip arthroplasty, the X-ray image is restored to its true size, so subsequent position recognition based on actual dimensions is more accurate. In addition, all recognition of the X-ray image is performed by deep learning models, which ensures that the leg length difference, acetabular cup position, femoral stem prosthesis specification and osteotomy line position determined from the recognition results are obtained accurately and quickly, thereby providing better preoperative support for total hip replacement surgery.
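The scale calibration performed by the apparatus — restoring the image to true size from a reference object of known dimensions — reduces to a millimetres-per-pixel factor. A minimal sketch (the marker size and measurements below are illustrative assumptions, not values from the source):

```python
def mm_per_pixel(ref_actual_mm: float, ref_image_px: float) -> float:
    """Scale factor derived from a calibration marker of known real size."""
    return ref_actual_mm / ref_image_px

def to_real_size(length_px: float, scale: float) -> float:
    """Convert a pixel measurement to millimetres."""
    return length_px * scale

# A 25 mm calibration sphere imaged at 50 px across
scale = mm_per_pixel(25.0, 50.0)             # 0.5 mm per pixel
femoral_head_mm = to_real_size(96.0, scale)  # a 96 px head measures 48 mm
```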
According to an embodiment of the present application, a computer-readable storage medium is further provided, the computer-readable storage medium storing computer instructions configured to cause a computer to execute the deep learning and X-ray based total hip joint image processing method of the above method embodiments.
According to an embodiment of the present application, an electronic device is further provided, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor to cause the at least one processor to execute the deep learning and X-ray based total hip joint image processing method of the above method embodiments.
Obviously, the above modules or steps of the present application may be implemented by general-purpose computing devices; they may be concentrated on a single computing device or distributed over a network of multiple computing devices. Optionally, they may be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device, or they may be fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present application is not limited to any particular combination of hardware and software.
The above embodiments are not intended to limit the present application; for those skilled in the art, the present application may have various modifications and variations.

Claims (13)

  1. A method for processing a total hip joint image, the method comprising:
    acquiring an X-ray image of a hip joint, wherein the X-ray image of the hip joint includes an image of a reference object, and the reference object is a reference object of known size;
    restoring the size of the X-ray image of the hip joint according to the ratio of the image size of the reference object to its actual size;
    recognizing the restored X-ray image of the hip joint based on a deep learning model to obtain a recognition result, wherein the recognition result includes key point positions for determining a leg length difference, a femoral head position for determining an acetabular cup position, and a femoral head region, a cortical bone region and a femoral neck base region for determining a specification and model of a femoral stem prosthesis; and
    determining a position of an osteotomy line according to a rotation center of the femoral stem prosthesis and a rotation center of the acetabular cup determined during recognition of the X-ray image of the hip joint.
  2. The method for processing a total hip joint image according to claim 1, wherein recognizing the restored X-ray image of the hip joint based on the deep learning model to determine the leg length difference comprises:
    converting the X-ray image of the hip joint into a grayscale image;
    predicting each pixel value of the grayscale image based on a first neural network model to determine teardrop key point positions and lesser trochanter key point positions; and
    determining the leg length difference according to the teardrop key point positions and the lesser trochanter key point positions.
  3. The method for processing a total hip joint image according to claim 1, wherein recognizing the restored X-ray image of the hip joint based on the deep learning model to determine the acetabular cup position comprises:
    converting the X-ray image of the hip joint into a grayscale image;
    predicting each pixel value of the grayscale image based on a second neural network model to determine a femoral head position;
    calculating a femoral head rotation center according to the centroid formula for a planar image;
    estimating an acetabular cup diameter from the diameter of the femoral head; and
    determining the acetabular cup position according to the femoral head rotation center and the acetabular cup diameter.
  4. The method for processing a total hip joint image according to claim 1, wherein recognizing the restored X-ray image of the hip joint based on the deep learning model to determine the specification and model of the femoral stem prosthesis comprises:
    converting the X-ray image of the hip joint into a grayscale image;
    recognizing the grayscale image based on a third neural network model to determine an anatomical axis of the medullary cavity;
    recognizing the grayscale image based on a fourth neural network model to determine a central axis of the femoral neck;
    determining a femoral neck-shaft angle according to the anatomical axis of the medullary cavity and the central axis of the femoral neck; and
    determining the specification and model of the femoral stem prosthesis according to the femoral neck-shaft angle, the medullary cavity region determined while determining the anatomical axis of the medullary cavity, and the femoral head rotation center.
  5. The method for processing a total hip joint image according to claim 1, wherein determining the position of the osteotomy line according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during recognition of the X-ray image of the hip joint comprises:
    making the rotation center of the femoral stem prosthesis coincide with the rotation center of the acetabular cup to determine an actual position of the femoral stem prosthesis; and
    determining the position of the osteotomy line along the coating position of the femoral stem prosthesis.
  6. The method for processing a total hip joint image according to claim 4, wherein recognizing the grayscale image based on the third neural network model to determine the anatomical axis of the medullary cavity comprises:
    predicting each pixel value of the grayscale image based on the third neural network model to determine a femoral head region and a cortical bone region;
    determining the medullary cavity region according to the femoral head region and the cortical bone region; and
    fitting a straight line through coordinates of multiple center points of the medullary cavity region to determine the anatomical axis of the medullary cavity.
  7. The method for processing a total hip joint image according to claim 4, wherein recognizing the grayscale image based on the fourth neural network model to determine the central axis of the femoral neck comprises:
    predicting each pixel value of the grayscale image based on the fourth neural network model to determine a femoral head region and a femoral neck base region;
    calculating femoral head center coordinates and femoral neck base center coordinates corresponding to the femoral head region and the femoral neck base region according to the centroid formula for a planar image; and
    determining the central axis of the femoral neck according to the femoral head center coordinates and the femoral neck base center coordinates.
  8. The method for processing a total hip joint image according to claim 1, further comprising calculating a postoperative leg length difference and an offset according to the position of the osteotomy line.
  9. A method for processing a total hip joint image, comprising:
    inputting an original X-ray image of a target hip joint, for which information is to be determined, into a pre-trained first neural network model, and identifying at least one teardrop key point position and at least one lesser trochanter key point position in the target hip X-ray image; and
    determining a leg length difference corresponding to the target hip X-ray image based on a connecting line determined by the at least one teardrop key point position and a straight line determined by the at least one lesser trochanter key point position,
    wherein a training process of the first neural network model includes: inputting original hip X-ray images, together with marked teardrop key point and lesser trochanter key point positions, into a convolutional neural network as a sample set; fitting the input original images with a Gaussian distribution function of the feature points; and performing convolution and pooling sampling with iterative learning and training to obtain the first neural network model, the original images being grayscale images corresponding to unlabeled X-ray sample images of the hip joint.
  10. A method for processing a total hip joint image, comprising:
    inputting an original X-ray image of a target hip joint, for which information is to be determined, into a pre-trained second neural network model, and identifying a femoral head position;
    calculating a rotation center of the femoral head based on the centroid formula for a planar image; determining an acetabular cup diameter according to the diameter of the femoral head; and determining an acetabular cup position according to the femoral head rotation center and the acetabular cup diameter,
    wherein a training process of the second neural network model includes: inputting original X-ray images of the hip joint, together with marked pixel attribute values, into a convolutional neural network, and performing convolution and pooling sampling with iterative learning and training to obtain the second neural network model, the pixel attribute values including a value of 0 representing background pixels and a value of 1 representing femoral head pixels.
  11. A method for processing a total hip joint image, comprising:
    inputting an original X-ray image of a target hip joint, for which information is to be determined, into a pre-trained neural network model, and identifying a femoral head region and a cortical bone region;
    determining a medullary cavity region according to the femoral head region and the cortical bone region, and fitting a straight line through coordinates of multiple center points of the medullary cavity region to determine an anatomical axis of the medullary cavity;
    inputting the original X-ray image of the target hip joint into a pre-trained neural network model, and identifying a femoral head region and a femoral neck base region;
    determining femoral head center coordinates and femoral neck base center coordinates based on the femoral head region and the femoral neck base region, and determining a central axis of the femoral neck according to the femoral head center coordinates and the femoral neck base center coordinates; and
    determining a femoral neck-shaft angle based on the anatomical axis of the medullary cavity and the central axis of the femoral neck, and determining a femoral stem prosthesis model according to the femoral neck-shaft angle,
    wherein a training process of the neural network models includes: inputting original X-ray images of the hip joint, together with marked pixel attribute values, into a convolutional neural network, and performing convolution and pooling sampling with iterative learning and training to obtain a third neural network model, the pixel attribute values including a value of 0 representing background pixels, a value of 1 representing femoral head pixels, and a value of 2 representing cortical bone pixels; and
    the training process of the neural network models further includes: inputting original X-ray images of the hip joint, together with marked pixel attribute values, into a convolutional neural network, and performing convolution and pooling sampling with iterative learning and training to obtain a fourth neural network model, the pixel attribute values including a value of 0 representing background pixels, a value of 1 representing femoral head pixels, and a value of 2 representing femoral neck base pixels.
  12. An apparatus for processing a total hip joint image, comprising:
    a scale calibration unit, configured to restore the X-ray image of a hip joint to its true size according to the ratio of the image size of a reference object to its actual size;
    a leg length difference determining unit, configured to recognize the restored X-ray image of the hip joint based on a first neural network model and determine a leg length difference;
    an acetabular cup position determining unit, configured to recognize the restored X-ray image of the hip joint based on a second neural network model and determine an acetabular cup position;
    a femoral stem prosthesis specification determining unit, configured to recognize the restored X-ray image of the hip joint based on a third neural network model and a fourth neural network model and determine a femoral stem prosthesis specification; and
    an osteotomy line determining unit, configured to determine a position of an osteotomy line according to a rotation center of the femoral stem prosthesis and a rotation center of the acetabular cup determined during recognition of the X-ray image of the hip joint.
  13. A computer-readable storage medium storing computer instructions, the computer instructions being configured to cause a computer to execute the method for processing a total hip joint image according to any one of claims 1 to 8, or according to claim 9, claim 10, or claim 11.
PCT/CN2021/107720 2020-07-06 2021-07-21 Total hip joint image processing method and apparatus WO2022007972A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010643713.5 2020-07-06
CN202010643713 2020-07-06

Publications (1)

Publication Number Publication Date
WO2022007972A1 true WO2022007972A1 (en) 2022-01-13

Family

ID=73190359

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/107720 WO2022007972A1 (en) 2020-07-06 2021-07-21 Total hip joint image processing method and apparatus

Country Status (2)

Country Link
CN (1) CN111888059B (en)
WO (1) WO2022007972A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114431957A (en) * 2022-04-12 2022-05-06 北京长木谷医疗科技有限公司 Deep learning-based preoperative planning method for revision after total knee joint replacement
CN115830247A (en) * 2023-02-14 2023-03-21 北京壹点灵动科技有限公司 Fitting method and device for hip joint rotation center, processor and electronic equipment
CN117437459A (en) * 2023-10-08 2024-01-23 昆山市第一人民医院 Method for realizing user knee joint patella softening state analysis based on decision network

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112842529B (en) * 2020-12-31 2022-02-08 北京长木谷医疗科技有限公司 Total knee joint image processing method and device
CN113133802B (en) * 2021-04-20 2022-12-23 四川大学 Bone surgery line automatic positioning method based on machine learning
CN113744214B (en) * 2021-08-24 2022-05-13 北京长木谷医疗科技有限公司 Femoral stem placing device based on deep reinforcement learning and electronic equipment
CN113974920B (en) * 2021-10-08 2022-10-11 北京长木谷医疗科技有限公司 Knee joint femur force line determining method and device, electronic equipment and storage medium
CN113907775A (en) * 2021-10-13 2022-01-11 瓴域影诺(北京)科技有限公司 Hip joint image quality judgment method and system
CN114419618B (en) * 2022-01-27 2024-02-02 北京长木谷医疗科技股份有限公司 Total hip replacement preoperative planning system based on deep learning
CN114742747B (en) * 2022-02-24 2023-04-18 北京长木谷医疗科技有限公司 Evaluation method and system for hip replacement postoperative image based on deep learning
CN116597002B (en) * 2023-05-12 2024-01-30 北京长木谷医疗科技股份有限公司 Automatic femoral stem placement method, device and equipment based on deep reinforcement learning
CN116650110A (en) * 2023-06-12 2023-08-29 北京长木谷医疗科技股份有限公司 Automatic knee joint prosthesis placement method and device based on deep reinforcement learning
CN116993824A (en) * 2023-07-19 2023-11-03 北京长木谷医疗科技股份有限公司 Acetabular rotation center calculating method, device, equipment and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6917827B2 (en) * 2000-11-17 2005-07-12 Ge Medical Systems Global Technology Company, Llc Enhanced graphic features for computer assisted surgery system
CN110648337A (en) * 2019-09-23 2020-01-03 Wuhan United Imaging Healthcare Co., Ltd. Hip joint segmentation method, hip joint segmentation device, electronic apparatus, and storage medium
CN110730639A (en) * 2017-03-14 2020-01-24 S. B. Murphy System and method for determining leg length changes during hip surgery
CN106456196B (en) * 2014-02-11 2020-05-19 Smith & Nephew, Inc. Anterior-referencing and posterior-referencing sizing guides and cutting blocks and method
CN111179350A (en) * 2020-02-13 2020-05-19 Zhang Yiling Hip joint image processing method based on deep learning and computing device

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
JP5171193B2 (en) * 2007-09-28 2013-03-27 LEXI Co., Ltd. Program for preoperative planning of knee replacement surgery
KR101973101B1 (en) * 2009-05-29 2019-04-26 Smith & Nephew, Inc. Methods and apparatus for performing knee arthroplasty
JP5902166B2 (en) * 2010-08-13 2016-04-13 Smith & Nephew, Inc. Surgical guide
DE102015100049A1 (en) * 2015-01-06 2016-07-07 Waldemar Link Gmbh & Co. Kg Gauge for determining a suitable size of the femoral implant of a knee endoprosthesis for a patient

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN114431957A (en) * 2022-04-12 2022-05-06 Beijing Changmugu Medical Technology Co., Ltd. Deep learning-based preoperative planning method for revision after total knee joint replacement
CN114431957B (en) * 2022-04-12 2022-07-29 Beijing Changmugu Medical Technology Co., Ltd. Deep learning-based preoperative planning system for revision after total knee joint replacement
CN115830247A (en) * 2023-02-14 2023-03-21 Beijing Yidian Lingdong Technology Co., Ltd. Fitting method and device for hip joint rotation center, processor and electronic equipment
CN115830247B (en) * 2023-02-14 2023-07-14 Beijing Yidian Lingdong Technology Co., Ltd. Fitting method and device for hip joint rotation center, processor and electronic equipment
CN117437459A (en) * 2023-10-08 2024-01-23 Kunshan First People's Hospital Method for analyzing the patella softening state of a user's knee joint based on a decision network
CN117437459B (en) * 2023-10-08 2024-03-22 Kunshan First People's Hospital Method for analyzing the patella softening state of a user's knee joint based on a decision network

Also Published As

Publication number Publication date
CN111888059A (en) 2020-11-06
CN111888059B (en) 2021-07-27

Similar Documents

Publication Publication Date Title
WO2022007972A1 (en) Total hip joint image processing method and apparatus
US10991070B2 (en) Method of providing surgical guidance
US11937888B2 (en) Artificial intelligence intra-operative surgical guidance system
US20240096508A1 (en) Systems and methods for using generic anatomy models in surgical planning
Rouzrokh et al. A deep learning tool for automated radiographic measurement of acetabular component inclination and version after total hip arthroplasty
CN111652888B (en) Method and device for determining medullary cavity anatomical axis based on deep learning
CN111652301B (en) Femoral lesser trochanter identification method and device based on deep learning and electronic equipment
CN114419618A (en) Deep learning-based preoperative planning system for total hip replacement
US8050469B2 (en) Automated measurement of objects using deformable models
Paulano-Godino et al. Identification of fracture zones and its application in automatic bone fracture reduction
CN110751179A (en) Focus information acquisition method, focus prediction model training method and ultrasonic equipment
WO2023160272A1 (en) Deep learning-based hip replacement postoperative image evaluation method and system
US11540794B2 (en) Artificial intelligence intra-operative surgical guidance system and method of use
CN115456990A (en) CT image-based rib counting method, device, equipment and storage medium
US20230105822A1 (en) Intraoperative guidance systems and methods
CN110874834A (en) Bone age prediction method and device, electronic equipment and readable storage medium
CN114305690B (en) Surgical navigation positioning method and device
US20230108487A1 (en) Intraoperative localisation systems and methods
CN114612400A (en) Knee joint femoral replacement postoperative evaluation system based on deep learning
Kotcheff et al. Shape model analysis of THR radiographs
Redhead et al. An automated method for assessing routine radiographs of patients with total hip replacements
CN117422721B (en) Intelligent labeling method based on lower limb CT image
CN115252233B (en) Automatic planning method for acetabular cup in total hip arthroplasty based on deep learning
Ferreira Automatic Landmark Detection In 3D Representation Of Orthopedic Implants
CN113112560A (en) Physiological point region marking method and device

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21838401

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 21838401

Country of ref document: EP

Kind code of ref document: A1