CN111888059B - Full hip joint image processing method and device based on deep learning and X-ray - Google Patents


Info

Publication number
CN111888059B
CN111888059B (application CN202010707817.8A)
Authority
CN
China
Prior art keywords
hip joint
image
determining
femoral
neural network
Prior art date
Legal status
Active
Application number
CN202010707817.8A
Other languages
Chinese (zh)
Other versions
CN111888059A (en
Inventor
张逸凌
刘星宇
Current Assignee
Longwood Valley Medtech Co Ltd
Original Assignee
Longwood Valley Medtech Co Ltd
Priority date
Filing date
Publication date
Application filed by Longwood Valley Medtech Co Ltd filed Critical Longwood Valley Medtech Co Ltd
Publication of CN111888059A publication Critical patent/CN111888059A/en
Application granted granted Critical
Publication of CN111888059B publication Critical patent/CN111888059B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F — FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F2/00 — Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
    • A61F2/02 — Prostheses implantable into the body
    • A61F2/30 — Joints
    • A61F2/46 — Special tools or methods for implanting or extracting artificial joints, accessories, bone grafts or substitutes, or particular adaptations therefor
    • A61F2002/4632 — Special tools or methods for implanting or extracting artificial joints, accessories, bone grafts or substitutes, or particular adaptations therefor using computer-controlled surgery, e.g. robotic surgery
    • A61F2002/4633 — Special tools or methods for implanting or extracting artificial joints, accessories, bone grafts or substitutes, or particular adaptations therefor using computer-controlled surgery, e.g. robotic surgery, for selection of endoprosthetic joints or for pre-operative planning

Abstract

The application discloses a method and a device for processing a total hip joint image based on deep learning and X-ray. The method comprises: acquiring an X-ray image of a hip joint, wherein the X-ray image includes an image of a reference object of known size; restoring the X-ray image of the hip joint to its true size according to the ratio between the image size of the reference object and the actual size of the reference object; identifying the restored X-ray image of the hip joint based on a deep learning model, and determining the leg length difference, the position of the acetabular cup, and the specification and model of the femoral stem prosthesis; and determining the osteotomy line position from the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during recognition of the X-ray image of the hip joint. The method aims to provide a more convenient and more accurate preoperative planning mode, and thereby better preoperative support for total hip replacement surgery.

Description

Full hip joint image processing method and device based on deep learning and X-ray
Technical Field
The application relates to the technical field of medicine, in particular to a method and a device for processing a total hip joint image based on deep learning and X-ray.
Background
Preoperative planning for total hip replacement surgery in the medical field mainly consists of calculating the required prosthesis model and the osteotomy line position, and such planning plays an important role in the success rate of the surgery; providing accurate preoperative planning is therefore very important. At present, planning is mainly performed by manual measurement with various tools, which is inefficient and of unguaranteed accuracy, so a more convenient and more accurate preoperative planning method is urgently needed to provide better preoperative support for total hip replacement surgery.
Disclosure of Invention
The main purpose of the present application is to provide a method and a device for processing a total hip joint image based on deep learning and X-ray, so as to provide a more convenient and accurate preoperative planning mode to provide better preoperative support for a total hip joint replacement operation.
In order to achieve the above object, according to a first aspect of the present application, a total hip image processing method based on deep learning and X-ray is provided.
The method for processing the total hip joint image based on deep learning and X-ray comprises the following steps:
acquiring an X-ray image of a hip joint, wherein the X-ray image of the hip joint includes an image of a reference object, the reference object being an object of known size;
restoring the X-ray image of the hip joint to its true size according to the ratio between the image size of the reference object and the actual size of the reference object;
identifying the restored X-ray image of the hip joint based on a deep learning model, and determining the leg length difference, the position of the acetabular cup, and the specification and model of the femoral stem prosthesis;
determining the osteotomy line position from the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during recognition of the X-ray image of the hip joint.
Optionally, identifying the restored X-ray image of the hip joint based on the deep learning model to determine the leg length difference includes:
converting the X-ray image of the hip joint into a gray scale image;
predicting each pixel value of the gray scale image based on a first neural network model, and determining the teardrop key points and the femoral lesser trochanter key points;
determining the leg length difference according to the positions of the teardrop key points and the femoral lesser trochanter key points.
Optionally, identifying the restored X-ray image of the hip joint based on the deep learning model to determine the position of the acetabular cup includes:
converting the X-ray image of the hip joint into a gray scale image;
predicting each pixel value of the gray scale image based on a second neural network model, and determining the position of the femoral head;
calculating the rotation center of the femoral head according to the centroid formula for a planar image;
calculating the diameter of the acetabular cup according to the diameter of the femoral head;
determining the acetabular cup position based on the femoral head rotation center and the acetabular cup diameter.
Optionally, identifying the restored X-ray image of the hip joint based on the deep learning model to determine the specification and model of the femoral stem prosthesis includes:
converting the X-ray image of the hip joint into a gray scale image;
identifying the gray scale image based on a third neural network model, and determining the medullary cavity anatomical axis;
identifying the gray scale image based on a fourth neural network model, and determining the central axis of the femoral neck;
determining the femoral neck-shaft angle from the medullary cavity anatomical axis and the femoral neck central axis;
determining the specification and model of the femoral stem prosthesis according to the femoral neck-shaft angle, the medullary cavity region determined while determining the medullary cavity anatomical axis, and the femoral head rotation center.
Optionally, determining the osteotomy line position according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during recognition of the X-ray image of the hip joint comprises:
making the rotation center of the femoral stem prosthesis coincide with the rotation center of the acetabular cup, thereby determining the actual position of the femoral stem prosthesis;
determining the position of the osteotomy line along the coating position of the femoral stem prosthesis.
Optionally, identifying the gray scale image based on the third neural network model and determining the medullary cavity anatomical axis includes:
predicting each pixel value of the gray scale image based on the third neural network model, and determining the femoral head region and the cortical bone region;
determining the medullary cavity region according to the femoral head region and the cortical bone region;
performing straight-line fitting on the coordinates of a plurality of center points of the medullary cavity region to determine the medullary cavity anatomical axis.
Optionally, identifying the gray scale image based on the fourth neural network model and determining the central axis of the femoral neck includes:
predicting each pixel value of the gray scale image based on the fourth neural network model, and determining the femoral head region and the femoral neck base region;
calculating the center coordinates of the femoral head and of the femoral neck base corresponding to those regions according to the centroid formula for a planar image;
determining the central axis of the femoral neck according to the femoral head center coordinates and the femoral neck base center coordinates.
Optionally, the post-operative leg length difference and the offset are calculated from the osteotomy line position.
In order to achieve the above object, according to a second aspect of the present application, there is provided a total hip image processing apparatus based on deep learning and X-ray.
The device for processing the total hip joint image based on deep learning and X-ray comprises:
a proportion calibration unit, configured to restore the true size of the X-ray image of the hip joint according to the ratio between the image size of the reference object and the actual size of the reference object;
a leg length difference determining unit, configured to identify the restored X-ray image of the hip joint based on the first neural network model and determine the leg length difference;
an acetabular cup position determining unit, configured to identify the restored X-ray image of the hip joint based on the second neural network model and determine the position of the acetabular cup;
a femoral stem prosthesis specification determining unit, configured to identify the restored X-ray image of the hip joint based on the third and fourth neural network models and determine the specification and model of the femoral stem prosthesis;
and an osteotomy line determining unit, configured to determine the position of the osteotomy line according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during recognition of the X-ray image of the hip joint.
In order to achieve the above object, according to a third aspect of the present application, there is provided a computer-readable storage medium storing computer instructions for causing the computer to execute the deep learning and X-ray based total hip image processing method according to any one of the first aspect.
In order to achieve the above object, according to a fourth aspect of the present application, there is provided an electronic apparatus comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to cause the at least one processor to perform the method for deep learning and X-ray based total hip image processing according to any of the first aspect.
In the embodiments of the application, the method and the device for processing a total hip joint image based on deep learning and X-ray acquire an X-ray image of a hip joint that includes an image of a reference object of known size; restore the X-ray image of the hip joint to its true size according to the ratio between the image size of the reference object and its actual size; identify the restored X-ray image based on a deep learning model, determining the leg length difference, the position of the acetabular cup, and the specification and model of the femoral stem prosthesis; and determine the osteotomy line position from the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during recognition of the X-ray image. It can be seen that, in the preoperative planning mode of total hip replacement of these embodiments, the X-ray image of the hip joint is restored to its true size, which makes the subsequent position identification more accurate; in addition, the identification is carried out with a deep learning model, which further ensures the accuracy and speed of the leg length difference, the acetabular cup position, the femoral stem prosthesis specification and model, and the osteotomy line position determined from the identification result, thereby providing better preoperative support for total hip replacement surgery.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, provide a further understanding of the application and make its other features, objects, and advantages more apparent. The drawings and their description illustrate the embodiments of the application and do not limit it. In the drawings:
fig. 1 is a flowchart of a total hip joint image processing method based on deep learning and X-ray according to an embodiment of the present application;
FIG. 2 is a schematic illustration of an X-ray image of a hip joint provided in accordance with an embodiment of the present application;
figs. 3-4 are schematic diagrams of determining the actual clinical osteotomy line position provided in accordance with an embodiment of the present application;
FIG. 5 is a flow chart of a method for determining leg length difference according to an embodiment of the present application;
fig. 6 is a schematic diagram of automatically identifying the positions of the teardrop key points and the femoral lesser trochanter key points provided in accordance with an embodiment of the present application;
FIG. 7 is a schematic illustration of a leg length difference determination provided in accordance with an embodiment of the present application;
FIG. 8 is a flow chart of a method of determining acetabular cup position provided in accordance with embodiments of the present application;
fig. 9 is a schematic view of a femoral head identification provided in accordance with an embodiment of the present application;
fig. 10 is a schematic illustration of a femoral head center of rotation provided in accordance with an embodiment of the present application;
FIG. 11 is a schematic illustration of an acetabular cup position provided in accordance with an embodiment of the application;
FIG. 12 is a flow chart of a method of determining a specification model for a femoral stem prosthesis provided in accordance with an embodiment of the present application;
fig. 13 is a schematic illustration of identifying a femoral head region and a cortical bone region in accordance with an embodiment of the present application;
FIG. 14 is a schematic illustration of a region of a medullary cavity provided in accordance with an embodiment of the present application;
FIG. 15 is a schematic illustration of a method of determining an anatomical axis of a medullary cavity according to an embodiment of the present application;
fig. 16 is a schematic illustration of identifying a femoral head region, a femoral neck base region, provided in accordance with an embodiment of the present application;
FIG. 17 is a schematic view of a femoral neck central axis provided in accordance with an embodiment of the present application;
fig. 18 is a block diagram of a deep learning and X-ray based total hip image processing device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, such that the embodiments of the application described herein may be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
According to an embodiment of the present application, there is provided a total hip image processing method based on deep learning and X-ray, as shown in fig. 1, the method includes the following steps:
s101, an X-ray image of the hip joint is obtained, wherein the X-ray image of the hip joint comprises an image of a reference object.
The X-ray image of the hip joint is obtained by taking an X-ray picture of the hip joint while an object of known dimensions, the reference object, is captured in the same picture; the resulting image therefore includes an image of the reference object. Fig. 2 shows such an X-ray image of the hip joint, in which the standard-size object at the bottom center of the image is the reference object. In practical applications, the choice of reference object and its placement during imaging can be adapted as needed, and this embodiment is not limited in this respect.
And S102, restoring the true size of the X-ray image of the hip joint according to the ratio between the image size of the reference object and the actual size of the reference object.
The actual size of the reference object is known and its image size can be obtained by measurement, so the scale of the X-ray image of the hip joint relative to the actual hip joint can be determined from the ratio between the image size of the reference object and its actual size; the X-ray image of the hip joint is then restored to true size according to this ratio. Restoring the true size of the X-ray image lays the basis for the subsequent image identification, so that the leg length difference, the acetabular cup position, and the specification and model of the femoral stem prosthesis determined from the identification result differ less from the actual corresponding values, ensuring identification accuracy.
A specific restoration operation may select a key dimension of the object of known size: the distance between two points in the image is calculated and compared proportionally with the actual size of the object to determine the scale ratio, and the scale of the X-ray image of the hip joint is then corrected according to this ratio.
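The scale-calibration step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and the coordinate values are hypothetical, and a spherical calibration marker of 25 mm is assumed purely as an example.

```python
# Hedged sketch: recover the true scale of an X-ray image from a reference
# object of known size. Names and numbers are illustrative only.

def mm_per_pixel(p1, p2, actual_mm):
    """Scale factor given two image points spanning a known real-world length."""
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    pixel_dist = (dx * dx + dy * dy) ** 0.5  # Euclidean distance in pixels
    return actual_mm / pixel_dist

# Example: a 25 mm reference marker measures 100 px across in the image.
scale = mm_per_pixel((310, 940), (410, 940), actual_mm=25.0)
print(scale)  # 0.25 mm per pixel
```

Any length measured in the image (e.g. a segmented femoral head diameter in pixels) can then be multiplied by this factor to obtain a true-size value.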
S103, identifying the restored X-ray image of the hip joint based on the deep learning model, and determining the leg length difference, the position of the acetabular cup and the specification and model of the femoral stem prosthesis.
The deep learning models here are neural network models; the inputs and outputs of the models used for determining the leg length difference, the acetabular cup position, and the specification and model of the femoral stem prosthesis may differ, but the training principle is the same. Specifically: the X-ray image of the hip joint is converted into a 0-255 gray scale image, the image is then manually annotated, each pixel label of the image is assigned one of several attribute values (the number of attribute values depends on actual requirements, for example two or three) with a corresponding name, and the labeled data are fed into the neural network, which performs convolution, pooling, and sampling with iterative learning until training yields the neural network model.
The neural network model in this step is a classification network that classifies different regions in the image. For example, when determining the leg length difference, it is mainly used to identify the teardrop and femoral lesser trochanter key points; when determining the position of the acetabular cup, it is mainly used to identify the femoral head region; and when determining the specification and model of the femoral stem prosthesis, it is mainly used to identify the femoral head and cortical bone regions and the femoral head and femoral neck base regions.
The neural network in this embodiment may be LeNet, AlexNet, ZFNet, GoogLeNet, VGG, Inception, ResNet, DenseNet, Inception-ResNet, or another convolutional neural network.
The leg length difference, the position of the acetabular cup, and the specification and model of the femoral stem prosthesis are determined from the recognition result of the image by coordinate calculation, fitting, and the like.
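As one illustration of the "fitting" mentioned above, the medullary cavity anatomical axis described earlier in the application can be obtained by least-squares straight-line fitting of the center-point coordinates of the medullary cavity region. The sketch below is a hypothetical minimal example with made-up coordinates, not the patent's exact procedure; since the axis is near-vertical, the column coordinate is fitted as a function of the row coordinate.

```python
import numpy as np

# Hedged sketch: fit a straight line through medullary-cavity center points.
ys = np.array([100.0, 150.0, 200.0, 250.0, 300.0])  # row coordinates (illustrative)
xs = np.array([52.0, 54.1, 55.9, 58.0, 60.1])       # center-point columns (illustrative)

# Fit x = a*y + b, i.e. treat x as a function of y for a near-vertical axis.
a, b = np.polyfit(ys, xs, 1)
print(a, b)
```

The fitted pair (a, b) defines the axis; intersecting it with the femoral neck central axis then yields the neck-shaft angle mentioned in the disclosure.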
S104, determining the position of an osteotomy line according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined in the X-ray image identification process of the hip joint.
Specifically, "determining the osteotomy line position according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during recognition of the X-ray image of the hip joint" means moving the femoral stem prosthesis until its rotation center coincides with the previously calculated position of the acetabular cup rotation center, thereby obtaining the actual position of the femoral stem prosthesis. The actual clinical osteotomy line position can then be determined along the coating position of the femoral stem prosthesis, as shown in figs. 3-4. Fig. 3 shows the femoral stem prosthesis moved to the predetermined position so that its rotation center coincides with the previously calculated acetabular cup rotation center, and fig. 4 shows the osteotomy line position determined from the outer shape of the femoral stem prosthesis.
From the above description, it can be seen that in the deep learning and X-ray based total hip joint image processing method according to the embodiment of the present application, an X-ray image of the hip joint is acquired, the image including a reference object of known size; the X-ray image is restored to its true size according to the ratio between the image size of the reference object and its actual size; the restored image is identified based on a deep learning model to determine the leg length difference, the position of the acetabular cup, and the specification and model of the femoral stem prosthesis; and the osteotomy line position is determined from the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during image recognition. In this preoperative planning mode of total hip replacement, the X-ray image is restored to its true size, which makes the subsequent position identification more accurate; in addition, the identification is carried out with a deep learning model, further ensuring the accuracy and speed of the leg length difference, acetabular cup position, femoral stem prosthesis specification and model, and osteotomy line position determined from the identification result, and providing better preoperative support for total hip replacement surgery.
Further, as a refinement of the above embodiment, the detailed steps of step S103 for determining the leg length difference, the acetabular cup position, and the specification and model of the femoral stem prosthesis are described below in turn.
As shown in fig. 5, the flowchart for determining the leg length difference specifically includes the following steps:
s201, converting the X-ray image of the hip joint into a gray-scale image.
The X-ray image of the hip joint is converted into a 0-255 gray scale image.
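A minimal sketch of this 0-255 gray-scale conversion is shown below. It assumes the raw pixel data is already available as a 2-D intensity array (real X-ray files, e.g. DICOM, would first be loaded with an imaging library); the linear rescaling shown is one common convention, not necessarily the patent's.

```python
import numpy as np

# Hedged sketch: map a raw intensity array to an 8-bit (0-255) gray scale image.

def to_gray8(img):
    """Linearly rescale a 2-D intensity array to the 0-255 range."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:  # constant image: avoid division by zero
        return np.zeros_like(img, dtype=np.uint8)
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

raw = np.array([[0, 2048], [4095, 1024]])  # e.g. 12-bit detector intensities
print(to_gray8(raw))
```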
S202, predicting each pixel value of the gray scale image based on the first neural network model, and determining the teardrop key points and the femoral lesser trochanter key points.
Before prediction, the first neural network model is obtained by training on samples. Specifically, unannotated original images (gray scale images corresponding to X-ray sample images of the hip joint) and annotations manually marking the positions of the teardrop and femoral lesser trochanter key points are fed into a convolutional neural network; the input image is fitted to a Gaussian distribution function of the feature points, and convolution, pooling, and sampling are performed with iterative learning until training yields the first neural network model. It should be noted that the convolutional neural network in this step may be LeNet, AlexNet, ZFNet, GoogLeNet, VGG, Inception, ResNet, DenseNet, Inception-ResNet, or the like.
After the first neural network model is obtained, the gray scale image corresponding to the X-ray image of the hip joint is input into it, and the positions of the teardrop key points and the femoral lesser trochanter key points are identified automatically, as shown in fig. 6.
S203, determining the leg length difference according to the positions of the teardrop key points and the femoral lesser trochanter key points.
Specifically, as shown in fig. 7, a horizontal line is drawn through the two teardrop key points (i.e., the line connecting them), and from each femoral lesser trochanter key point a vertical segment is dropped onto this horizontal line. Denoting the lengths of the two vertical segments A and B, the difference between A and B is the leg length difference.
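The geometric step just described can be sketched directly: drop a perpendicular from each lesser-trochanter key point onto the teardrop line and subtract the two distances. This is an illustrative reconstruction with made-up coordinates, not the patent's code.

```python
# Hedged sketch of the leg-length-difference computation (fig. 7).
# All coordinates are illustrative.

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through points a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = ((by - ay) ** 2 + (bx - ax) ** 2) ** 0.5
    return num / den

teardrop_l, teardrop_r = (120.0, 300.0), (420.0, 300.0)      # teardrop key points
trochanter_l, trochanter_r = (100.0, 420.0), (440.0, 432.0)  # lesser trochanter key points

A = point_line_distance(trochanter_l, teardrop_l, teardrop_r)
B = point_line_distance(trochanter_r, teardrop_l, teardrop_r)
print(A - B)  # leg length difference, in (true-size-restored) image units
```

With the image already restored to true size (S102), A - B is directly a physical length.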
As shown in fig. 8, the flow chart for determining the position of the acetabular cup specifically includes the following steps:
and S301, converting the X-ray image of the hip joint into a gray scale image.
The X-ray image of the hip joint is converted into a 0-255 gray scale image.
S302, predicting each pixel value of the gray scale image based on the second neural network model, and determining the position of the femoral head.
Before prediction, the second neural network model is obtained by training on samples. Specifically, unannotated original images (gray scale images corresponding to X-ray sample images of the hip joint) and annotated images in which the pixel attribute values have been manually marked are fed into a convolutional neural network; the annotated images contain two attribute values, named 0 and 1, where 0 represents a background pixel and 1 represents a femoral head pixel. Convolution, pooling, and sampling are performed with iterative learning until training yields the second neural network model. It should be noted that the convolutional neural network in this step may be LeNet, AlexNet, ZFNet, GoogLeNet, VGG, Inception, ResNet, DenseNet, Inception-ResNet, or the like.
After the second neural network model is obtained, the gray scale map corresponding to the X-ray image of the hip joint is input into the model, which predicts each pixel value. Each pixel of the X-ray image is automatically assigned an attribute, 0 for background or 1 for femoral head, completing the automatic identification of the femoral head region (i.e., the femoral head position), as shown in fig. 9. Fig. 9 is a schematic view of identifying the femoral head.
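Assigning each pixel its attribute value amounts to a per-pixel argmax over the network's class scores; a sketch with hypothetical scores (the array values are invented for illustration):

```python
import numpy as np

# Hypothetical per-pixel class scores from a segmentation network,
# shape (num_classes, H, W); class 0 = background, class 1 = femoral head.
scores = np.array([
    [[0.9, 0.2], [0.1, 0.8]],   # background scores
    [[0.1, 0.8], [0.9, 0.2]],   # femoral-head scores
])
labels = scores.argmax(axis=0)  # per-pixel attribute value: 0 or 1
print(labels)
```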
S303, calculating the rotation center of the femoral head according to the centroid formula of the plane image.
Because the obtained image of the femoral head region is a binary image with uniform mass distribution, the center of mass coincides with the centroid, and the coordinate of the center point of the femoral head, namely the rotation center of the femoral head, can be calculated according to the centroid formula of the planar image. Assuming that the binary image is B[i, j] with n rows and m columns, the centroid in pixel coordinates is obtained according to the following formula:

\bar{x} = \frac{1}{A} \sum_{i=1}^{n} \sum_{j=1}^{m} j \, B[i,j], \qquad \bar{y} = \frac{1}{A} \sum_{i=1}^{n} \sum_{j=1}^{m} i \, B[i,j]

wherein:

A = \sum_{i=1}^{n} \sum_{j=1}^{m} B[i,j]

is the area (pixel count) of the femoral head region. The pixel coordinates of the center point of the femoral head obtained here need to be converted into image coordinates. The center coordinates of the image plane are:

x_0 = \frac{m}{2}, \qquad y_0 = \frac{n}{2}

The transformation from the pixel coordinates (\bar{x}, \bar{y}) to the image coordinates (x', y') is:

x' = (\bar{x} - x_0) \, S_x, \qquad y' = (\bar{y} - y_0) \, S_y

wherein S_x and S_y are respectively the pixel pitch of the image array in the column and row directions. Finally, the position of the femoral head rotation center is obtained through an output display module, as shown in fig. 10. The center point of the circle in fig. 10 is the femoral head rotation center.
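The centroid computation and pixel-to-image conversion described above can be sketched with NumPy (the mask array and pixel pitches `sx`, `sy` are assumed inputs; the function name is ours):

```python
import numpy as np

def femoral_head_center(mask: np.ndarray, sx: float = 1.0, sy: float = 1.0):
    """Centroid of a binary mask, returned in image coordinates.

    mask: 2-D array with 1 for femoral-head pixels, 0 for background.
    sx, sy: column and row pitch of the image array.
    """
    rows, cols = np.nonzero(mask)          # indices of femoral-head pixels
    x_bar = cols.mean()                    # centroid column (pixel coords)
    y_bar = rows.mean()                    # centroid row (pixel coords)
    n, m = mask.shape
    # shift the origin to the image-plane center and scale by the pixel pitch
    return (x_bar - m / 2) * sx, (y_bar - n / 2) * sy

# toy example: a 5x5 mask with a 3x3 block of ones
mask = np.zeros((5, 5))
mask[1:4, 1:4] = 1
print(femoral_head_center(mask))
```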
S304, calculating the diameter of the acetabular cup according to the diameter of the femoral head.
Determining the diameter of the femoral head from the femoral head region and the femoral head rotation center, and calculating the diameter of the acetabular cup from the diameter of the femoral head. The femoral head diameter may be estimated by any existing estimation method, and the acetabular cup diameter determined accordingly.
S305, determining the position of the acetabular cup according to the femoral head rotation center and the diameter of the acetabular cup.
The acetabular cup position is automatically determined from the femoral head diameter and the femoral head center of rotation position, as shown in fig. 11. The area delineated by the lines in FIG. 11 is the acetabular cup location.
As shown in fig. 12, a flowchart for determining the specification and model of the femoral stem prosthesis specifically includes the following steps:
S401, converting the X-ray image of the hip joint into a gray-scale image.
The X-ray image of the hip joint is converted into a gray scale image with pixel values in the range 0-255.
S402, identifying the gray level map based on the third neural network model, and determining the medullary cavity anatomical axis.
Specifically, the method for determining the medullary cavity anatomical axis comprises the following steps:
firstly, predicting each pixel value of a gray scale image based on a third neural network model, and determining a femoral head region and a cortical bone region;
Before prediction is carried out, the third neural network model is obtained by training on samples. Specifically, an unlabeled original image (a gray scale image corresponding to an X-ray sample image of a hip joint) and a labeled image, in which pixel attribute values have been manually annotated, are fed into a convolutional neural network. The labeled image contains three attribute values, 0, 1, and 2, where 0 denotes a background pixel, 1 denotes a femoral head pixel, and 2 denotes cortical bone. The data are passed through the convolutional neural network and, after convolution, pooling, sampling, and iterative training, the third neural network model is obtained. It should be noted that the convolutional neural network in this step may be LeNet, AlexNet, ZF-Net, GoogLeNet, VGG, Inception, ResNet, DenseNet, Inception-ResNet, or the like.
After the third neural network model is obtained, the gray scale map corresponding to the X-ray image of the hip joint is input into the model, which predicts each pixel value. Each pixel of the X-ray image is automatically assigned an attribute, 0 for background, 1 for femoral head, or 2 for cortical bone, completing the automatic identification of the femoral head region and the cortical bone region, as shown in fig. 13. Fig. 13 is a schematic diagram of identifying the femoral head region and the cortical bone region.
Secondly, determining a medullary cavity region according to a femoral head region and a cortical bone region;
Specifically, in the region from the end of the lesser trochanter down to the distal femur, the cortical bone region is subtracted from the femoral region in the image to obtain the medullary cavity region, as shown in fig. 14.
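This subtraction can be sketched as a boolean mask difference (the tiny masks below are invented for illustration; in practice the operation is restricted to the rows below the lesser trochanter end):

```python
import numpy as np

# Hypothetical binary masks derived from the third model's output (1 = tissue).
femur = np.array([[1, 1, 1, 1],
                  [1, 1, 1, 1]])
cortical = np.array([[1, 0, 0, 1],
                     [1, 0, 0, 1]])

# Medullary cavity = femoral region minus cortical bone region.
medullary = femur.astype(bool) & ~cortical.astype(bool)
print(medullary.astype(int))
```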
Finally, performing straight-line fitting on the coordinates of a plurality of central points of the medullary cavity region to determine the medullary cavity anatomical axis.
Specifically, as shown in fig. 15, below the lesser trochanter end position each transverse row of the image intersects the medullary cavity boundary at four points, which are respectively named A1, A2, B1, and B2 from left to right. The midpoint can be determined from the two points A1(X_1, Y_1) and A2(X_2, Y_2); its coordinates are:

\left( \frac{X_1 + X_2}{2}, \; \frac{Y_1 + Y_2}{2} \right)
b1 and B2 can be calculated in the same way. The coordinates of the middle points of the medullary cavity are calculated in sequence in each row, and the points are fitted into a straight line, namely the medullary cavity anatomical axis (also the femur anatomical axis).
S403, identifying the gray scale map based on a fourth neural network model, and determining the central axis of the femoral neck.
Specifically, the step of determining the central axis of the femoral neck comprises the following steps:
firstly, predicting each pixel value of the gray scale image based on a fourth neural network model, and determining a femoral head region and a femoral neck base region;
Before prediction is performed, the fourth neural network model is obtained by training on samples. Specifically, an unlabeled original image (a gray scale image corresponding to an X-ray sample image of a hip joint) and a labeled image, in which pixel attribute values have been manually annotated, are fed into a convolutional neural network. The labeled image contains three attribute values, 0, 1, and 2, where 0 denotes a background pixel, 1 denotes a femoral head pixel, and 2 denotes a femoral neck base pixel. The data are passed through the convolutional neural network and, after convolution, pooling, sampling, and iterative training, the fourth neural network model is obtained. It should be noted that the convolutional neural network in this step may be LeNet, AlexNet, ZF-Net, GoogLeNet, VGG, Inception, ResNet, DenseNet, Inception-ResNet, or the like.
After the fourth neural network model is obtained, the gray scale map corresponding to the X-ray image of the hip joint is input into the model, which predicts each pixel value. Each pixel of the X-ray image is automatically assigned an attribute, 0 for background, 1 for femoral head, or 2 for femoral neck base, completing the automatic identification of the femoral head region and the femoral neck base region, as shown in fig. 16. Fig. 16 is a schematic view of identifying the femoral head region and the femoral neck base region.
Secondly, calculating the femoral head central coordinates and the femoral neck base central coordinates corresponding to the femoral head area and the femoral neck base area according to a mass center formula of the plane image;
the calculation method of the femoral head center coordinate and the femoral neck base center coordinate is similar, and both the calculation method of the femoral head center coordinate in step S303 can be referred to, and details are not described here.
Finally, determining the central axis of the femoral neck according to the central coordinates of the femoral head and the central coordinates of the base of the femoral neck.
Specifically, the line connecting the femoral head center coordinate and the femoral neck base center coordinate is the femoral neck central axis, as shown in fig. 17. The two obliquely downward line segments in fig. 17 are femoral neck central axes.
S404, determining a femoral neck shaft angle according to the medullary cavity anatomical axis and the femoral neck central axis.
Specifically, the included angle formed by the medullary cavity anatomical axis and the central axis of the femoral neck is the femoral neck shaft angle.
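The included angle between the two axes can be computed from their direction vectors (the vectors below are invented for illustration; in practice the orientation of the vectors determines whether the acute or obtuse angle is reported):

```python
import math

def axis_angle_deg(d1, d2):
    """Angle in degrees between two axis direction vectors d1 and d2."""
    dot = d1[0] * d2[0] + d1[1] * d2[1]
    n1 = math.hypot(*d1)
    n2 = math.hypot(*d2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# e.g. anatomical axis pointing straight down, neck axis tilted by 45 degrees
print(axis_angle_deg((0.0, 1.0), (1.0, 1.0)))   # approximately 45.0
```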
S405, determining the specification and model of the femoral stem prosthesis according to the femoral neck shaft angle, the medullary cavity area determined in the process of determining the medullary cavity anatomical axis and the femoral head rotation center.
Specifically, a recommendation for selecting the femoral stem prosthesis model can be given according to the angle value of the femoral neck shaft angle, the shape of the medullary cavity, and the position of the femoral head rotation center. Femoral stem prosthesis models are distinguished by characteristics of the femoral stem prosthesis such as shape and size.
Further, as a supplementary illustration of the embodiment of fig. 1, after the osteotomy line position is determined, a post-operative leg length difference and an offset distance are calculated according to the osteotomy line position. Specifically, the offset includes the femoral offset, which refers to the perpendicular distance from the center of rotation of the femoral head to the long axis of the femoral shaft, and the combined offset, which is the sum of the femoral offset and the acetabular offset.
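The femoral offset is a point-to-line distance, which can be sketched from the rotation center and two points on the shaft axis (all coordinates below are invented for illustration):

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through points a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    # twice the area of triangle (p, a, b) divided by the base length |ab|
    return abs((bx - ax) * (ay - py) - (ax - px) * (by - ay)) / math.hypot(
        bx - ax, by - ay
    )

# femoral offset: rotation center at the origin, vertical shaft axis at x = 3
print(point_line_distance((0.0, 0.0), (3.0, 0.0), (3.0, 10.0)))   # 3.0
```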
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
According to an embodiment of the present application, there is also provided a deep learning and X-ray based total hip image processing apparatus for implementing the method described in fig. 1 to 17, as shown in fig. 18, the apparatus including:
a scale calibration unit 51 for restoring the X-ray image of the hip joint to its real size according to the ratio between the image size of the reference object and its actual size;
a leg length difference determination unit 52, configured to identify the reduced X-ray image of the hip joint based on the first neural network model, and determine a leg length difference;
the acetabular cup position determining unit 53 is used for identifying the reduced X-ray image of the hip joint based on the second neural network model and determining the position of the acetabular cup;
a femoral stem prosthesis specification determining unit 54, configured to identify the reduced X-ray image of the hip joint based on the third neural network model and the fourth neural network model, and determine a femoral stem prosthesis specification;
an osteotomy line determining unit 55 for determining an osteotomy line position based on the center of rotation of the femoral stem prosthesis and the center of rotation of the acetabular cup determined during the X-ray image recognition of the hip joint.
Specifically, the specific process of implementing the functions of each unit and module in the device in the embodiment of the present application may refer to the related description in the method embodiment, and is not described herein again.
From the above description, it can be seen that the deep learning and X-ray based total hip joint image processing apparatus of the embodiment of the present application acquires an X-ray image of a hip joint, the X-ray image including an image of a reference object of known size; reduces the X-ray image of the hip joint to its real size according to the ratio between the image size of the reference object and its actual size; identifies the reduced X-ray image based on deep learning models to determine the leg length difference, the position of the acetabular cup, and the specification and model of the femoral stem prosthesis; and determines the osteotomy line position from the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined during the recognition. In this preoperative planning approach for total hip replacement, reducing the X-ray image to its real size makes the subsequent position identification more accurate, and performing the identification with deep learning models further ensures the accuracy and speed with which the leg length difference, the acetabular cup position, the femoral stem prosthesis specification and model, and the osteotomy line position are determined from the recognition results, providing better preoperative support for total hip replacement surgery.
According to an embodiment of the present application, there is further provided a computer-readable storage medium, wherein the computer-readable storage medium stores computer instructions for causing the computer to execute the method for processing a total hip image based on deep learning and X-ray in the above method embodiment.
According to an embodiment of the present application, there is also provided an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to cause the at least one processor to perform the method of deep learning and X-ray based total hip image processing in the above method embodiments.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and they may alternatively be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, or fabricated separately as individual integrated circuit modules, or fabricated as a single integrated circuit module from multiple modules or steps. Thus, the present application is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (9)

1. A method for processing a total hip joint image based on deep learning and X-ray, the method being executed in a computer, the method comprising:
acquiring an X-ray image of a hip joint, wherein the X-ray image of the hip joint comprises an image of a reference object, and the reference object is a reference object with a known size;
the proportion calibration unit restores the size of the X-ray image of the hip joint according to the image size of the reference object and the proportion of the actual size of the reference object;
identifying the reduced X-ray image of the hip joint based on a deep learning model, and determining leg length difference, the position of an acetabular cup and the specification and model of a femoral stem prosthesis;
the step of identifying the reduced X-ray image of the hip joint based on the deep learning model and determining the leg length difference, the position of the acetabular cup and the specification and model of the femoral stem prosthesis comprises the following steps: the leg length difference determining unit identifies the reduced X-ray image of the hip joint based on the first neural network model and determines the leg length difference; the acetabulum cup position determining unit identifies the X-ray image of the reduced hip joint based on the second neural network model, and determines the position of the acetabulum cup; the femoral stem prosthesis specification determining unit identifies the reduced X-ray image of the hip joint based on the third neural network model and the fourth neural network model, and determines the femoral stem prosthesis specification;
the osteotomy line determining unit determines the position of an osteotomy line according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined in the X-ray image recognition process of the hip joint;
the determining the osteotomy line position from the center of rotation of the femoral stem prosthesis and the center of rotation of the acetabular cup determined during X-ray image recognition of the hip joint comprises:
the rotation center of the femoral stem prosthesis is coincided with the rotation center of the acetabular cup, and the actual position of the femoral stem prosthesis is determined; the position of the osteotomy line is determined along the coating position of the femoral stem prosthesis.
2. The method for processing the image of the hip joint based on the deep learning and the X-ray according to claim 1, wherein the step of identifying the X-ray image of the hip joint after the reduction based on the deep learning model to determine the leg length difference comprises the following steps:
converting the X-ray image of the hip joint into a gray scale image;
predicting each pixel value of the gray scale image based on a first neural network model, and determining a tear drop key point and a femoral lesser trochanter key point;
and determining the leg length difference according to the critical point of the tear drop and the critical point position of the lesser trochanter of the femur.
3. The method for processing the image of the hip joint based on the deep learning and the X-ray according to claim 1, wherein the step of identifying the X-ray image of the reduced hip joint based on the deep learning model to determine the position of the acetabular cup comprises the steps of:
converting the X-ray image of the hip joint into a gray scale image;
predicting each pixel value of the gray scale image based on the second neural network model, and determining the position of the femoral head;
calculating the rotation center of the femoral head according to the mass center formula of the plane image;
calculating the diameter of the acetabular cup according to the diameter of the femoral head;
determining the acetabular cup position based on the bone center of rotation and the acetabular cup diameter.
4. The method for processing the image of the total hip joint based on the deep learning and the X-ray according to claim 1, wherein the identifying the X-ray image of the reduced hip joint based on the deep learning model to determine the specification and the model of the femoral stem prosthesis comprises:
converting the X-ray image of the hip joint into a gray scale image;
identifying the gray scale image based on the third neural network model, and determining the medullary cavity anatomical axis;
identifying the gray scale map based on a fourth neural network model, and determining the central axis of the femoral neck;
determining a femoral neck shaft angle according to the medullary cavity dissection axis and the femoral neck central axis;
and determining the specification and model of the femoral stem prosthesis according to the femoral neck shaft angle, the medullary cavity area determined in the process of determining the medullary cavity anatomical axis and the femoral head rotation center.
5. The deep learning and X-ray based total hip image processing method according to claim 4, wherein the identifying a gray scale map based on the third neural network model, and the determining the medullary cavity anatomical axis comprises:
predicting each pixel value of the gray scale image based on a third neural network model, and determining a femoral head region and a cortical bone region;
determining a medullary cavity region according to a femoral head region and a cortical bone region;
and performing straight line fitting on the coordinates of the plurality of central points of the medullary cavity region to determine the medullary cavity anatomical axis.
6. The deep learning and X-ray based total hip image processing method according to claim 4, wherein the identifying a gray scale map based on the fourth neural network model, and the determining the central axis of the femoral neck comprises:
predicting each pixel value of the gray scale image based on a fourth neural network model, and determining a femoral head region and a femoral neck base region;
calculating the femoral head central coordinates and the femoral neck base central coordinates corresponding to the femoral head area and the femoral neck base area according to a mass center formula of the plane image;
and determining the central axis of the femoral neck according to the central coordinates of the femoral head and the central coordinates of the femoral neck base.
7. The deep learning and X-ray based total hip image processing method according to claim 1, wherein the post-operative leg length difference and offset are calculated from the osteotomy line position.
8. A total hip image processing device based on deep learning and X-ray, the device comprising:
the proportion calibration unit is used for really restoring the size of the X-ray image of the hip joint according to the image size of the reference object and the proportion of the actual size of the reference object;
the leg length difference determining unit is used for identifying the reduced X-ray image of the hip joint based on the first neural network model and determining the leg length difference;
the acetabular cup position determining unit is used for identifying the reduced X-ray image of the hip joint based on the second neural network model and determining the position of the acetabular cup;
the femoral stem prosthesis specification determining unit is used for identifying the reduced X-ray image of the hip joint based on the third neural network model and the fourth neural network model and determining the femoral stem prosthesis specification;
the osteotomy line determining unit is used for determining the position of the osteotomy line according to the rotation center of the femoral stem prosthesis and the rotation center of the acetabular cup determined in the X-ray image recognition process of the hip joint;
the determining the osteotomy line position from the center of rotation of the femoral stem prosthesis and the center of rotation of the acetabular cup determined during X-ray image recognition of the hip joint comprises:
the rotation center of the femoral stem prosthesis is coincided with the rotation center of the acetabular cup, and the actual position of the femoral stem prosthesis is determined; the position of the osteotomy line is determined along the coating position of the femoral stem prosthesis.
9. A computer-readable storage medium storing computer instructions for causing a computer to execute the deep learning and X-ray based total hip image processing method according to any one of claims 1 to 7.
CN202010707817.8A 2020-07-06 2020-07-21 Full hip joint image processing method and device based on deep learning and X-ray Active CN111888059B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010643713 2020-07-06
CN2020106437135 2020-07-06

Publications (2)

Publication Number Publication Date
CN111888059A CN111888059A (en) 2020-11-06
CN111888059B true CN111888059B (en) 2021-07-27

Family

ID=73190359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010707817.8A Active CN111888059B (en) 2020-07-06 2020-07-21 Full hip joint image processing method and device based on deep learning and X-ray

Country Status (2)

Country Link
CN (1) CN111888059B (en)
WO (1) WO2022007972A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112842529B (en) * 2020-12-31 2022-02-08 北京长木谷医疗科技有限公司 Total knee joint image processing method and device
CN113133802B (en) * 2021-04-20 2022-12-23 四川大学 Bone surgery line automatic positioning method based on machine learning
CN113744214B (en) * 2021-08-24 2022-05-13 北京长木谷医疗科技有限公司 Femoral stem placing device based on deep reinforcement learning and electronic equipment
CN113974920B (en) * 2021-10-08 2022-10-11 北京长木谷医疗科技有限公司 Knee joint femur force line determining method and device, electronic equipment and storage medium
CN113907775A (en) * 2021-10-13 2022-01-11 瓴域影诺(北京)科技有限公司 Hip joint image quality judgment method and system
CN114419618B (en) * 2022-01-27 2024-02-02 北京长木谷医疗科技股份有限公司 Total hip replacement preoperative planning system based on deep learning
CN114742747B (en) * 2022-02-24 2023-04-18 北京长木谷医疗科技有限公司 Evaluation method and system for hip replacement postoperative image based on deep learning
CN114431957B (en) * 2022-04-12 2022-07-29 北京长木谷医疗科技有限公司 Total knee joint replacement postoperative revision preoperative planning system based on deep learning
CN115830247B (en) * 2023-02-14 2023-07-14 北京壹点灵动科技有限公司 Fitting method and device for hip joint rotation center, processor and electronic equipment
CN116597002B (en) * 2023-05-12 2024-01-30 北京长木谷医疗科技股份有限公司 Automatic femoral stem placement method, device and equipment based on deep reinforcement learning
CN116650110A (en) * 2023-06-12 2023-08-29 北京长木谷医疗科技股份有限公司 Automatic knee joint prosthesis placement method and device based on deep reinforcement learning
CN116993824A (en) * 2023-07-19 2023-11-03 北京长木谷医疗科技股份有限公司 Acetabular rotation center calculating method, device, equipment and readable storage medium
CN117437459B (en) * 2023-10-08 2024-03-22 昆山市第一人民医院 Method for realizing user knee joint patella softening state analysis based on decision network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101815477A (en) * 2007-09-28 2010-08-25 株式会社力克赛 Preoperative plan making device for artificial knee joint replacement and operation assisting tool
CN103209652A (en) * 2010-08-13 2013-07-17 史密夫和内修有限公司 Surgical guides
CN106456196A (en) * 2014-02-11 2017-02-22 史密夫和内修有限公司 Anterior and posterior referencing sizing guides and cutting blocks and methods
CN107106307A (en) * 2015-01-06 2017-08-29 沃尔德玛链接有限公司 It is determined that being adapted to the measurer of the femoral implant size of the knee-joint prosthesis of patient
CN107252338A (en) * 2009-05-29 2017-10-17 史密夫和内修有限公司 Method and apparatus for performing arthroplasty of knee
CN110648337A (en) * 2019-09-23 2020-01-03 武汉联影医疗科技有限公司 Hip joint segmentation method, hip joint segmentation device, electronic apparatus, and storage medium
CN111179350A (en) * 2020-02-13 2020-05-19 张逸凌 Hip joint image processing method based on deep learning and computing equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6917827B2 (en) * 2000-11-17 2005-07-12 Ge Medical Systems Global Technology Company, Llc Enhanced graphic features for computer assisted surgery system
US11147626B2 (en) * 2017-03-14 2021-10-19 Stephen B. Murphy Systems and methods for determining leg length change during hip surgery

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101815477A (en) * 2007-09-28 2010-08-25 株式会社力克赛 Preoperative plan making device for artificial knee joint replacement and operation assisting tool
CN107252338A (en) * 2009-05-29 2017-10-17 史密夫和内修有限公司 Method and apparatus for performing arthroplasty of knee
CN103209652A (en) * 2010-08-13 2013-07-17 史密夫和内修有限公司 Surgical guides
CN106456196A (en) * 2014-02-11 2017-02-22 史密夫和内修有限公司 Anterior and posterior referencing sizing guides and cutting blocks and methods
CN107106307A (en) * 2015-01-06 2017-08-29 沃尔德玛链接有限公司 It is determined that being adapted to the measurer of the femoral implant size of the knee-joint prosthesis of patient
CN110648337A (en) * 2019-09-23 2020-01-03 武汉联影医疗科技有限公司 Hip joint segmentation method, hip joint segmentation device, electronic apparatus, and storage medium
CN111179350A (en) * 2020-02-13 2020-05-19 张逸凌 Hip joint image processing method based on deep learning and computing equipment

Also Published As

Publication number Publication date
WO2022007972A1 (en) 2022-01-13
CN111888059A (en) 2020-11-06

Similar Documents

Publication Publication Date Title
CN111888059B (en) Full hip joint image processing method and device based on deep learning and X-ray
CN112971981B (en) Deep learning-based total hip joint image processing method and equipment
US10991070B2 (en) Method of providing surgical guidance
CN114419618B (en) Total hip replacement preoperative planning system based on deep learning
EP1525560B1 (en) Automated measurement of objects using deformable models
CN114742747B (en) Evaluation method and system for hip replacement postoperative image based on deep learning
CN102132320A (en) Method and device for image processing, particularly for medical image processing
CN115456990B (en) CT image-based rib counting method, device, equipment and storage medium
CN113855233B (en) Surgical range determining method, device, electronic equipment and storage medium
CN109919943B (en) Automatic detection method and system for hip joint angle of infant and computing equipment
CN113077498A (en) Pelvis registration method, pelvis registration device and pelvis registration system
US11540794B2 (en) Artificial intelligence intra-operative surgical guidance system and method of use
CN113974920B (en) Knee joint femur force line determining method and device, electronic equipment and storage medium
CN113077499B (en) Pelvis registration method, pelvis registration device, and pelvis registration system
US20070230782A1 (en) Method, a Computer Program, and Apparatus, an Image Analysis System and an Imaging System for an Object Mapping in a Multi-Dimensional Dataset
CN114612400A (en) Knee joint femoral replacement postoperative evaluation system based on deep learning
CN113907775A (en) Hip joint image quality judgment method and system
CN114141337A (en) Method, system and application for constructing image automatic annotation model
Kotcheff et al. Shape model analysis of THR radiographs
Redhead et al. An automated method for assessing routine radiographs of patients with total hip replacements
CN114469341B (en) Acetabulum registration method based on hip joint replacement
CN110772278A (en) Implant body postoperative verification method, device and terminal
CN114299177B (en) Image processing method, image processing device, electronic equipment and storage medium
EP4216163A1 (en) Method and device for segmentation and registration of an anatomical structure
CN113112560B (en) Physiological point region marking method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Zhang Yiling

Inventor after: Liu Xingyu

Inventor before: Zhang Yiling

Inventor before: Liu Xingyu

Inventor before: An Yicheng

Inventor before: Chen Peng

GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 101102 room 402, 4th floor, building 28, yard 18, Kechuang 13th Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Patentee after: Beijing Changmugu Medical Technology Co.,Ltd.

Address before: 101102 room 402, 4th floor, building 28, yard 18, Kechuang 13th Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Patentee before: BEIJING CHANGMUGU MEDICAL TECHNOLOGY Co.,Ltd.